# Two-field Screening and its Cosmological Dynamics

Philippe Brax, Ayoub Ouazzani

2023-07-13, http://arxiv.org/abs/2307.06781v1
###### Abstract
We consider the screening of the axio-dilaton fields when both the dilaton and the axion couple to matter with Yukawa couplings. We analyse the screening of the dilaton in the vicinity of a compact object and find that this can only take place when special boundary conditions at infinity are imposed. We study the cosmological dynamics of the axio-dilaton system when coupled to matter linearly and find that the special boundary conditions at infinity, which guarantee the screening of compact objects, do not generically emerge from cosmology. We analyse the background cosmology and the cosmological perturbations at late time in these models and show that the Baryon Acoustic Oscillations constrain the coupling of the dilaton to matter to be smaller than in its natural supergravity realisation. Moreover we find that the Hubble rate in the present Universe could deviate from the normalised Planck value, although by an amount too small to account for the \(H_{0}\) tension, and that the growth of structure is generically reduced compared to \(\Lambda\)CDM.
## I Introduction
General Relativity (GR) passes all the current gravitational tests in the solar system with flying colours [1]. Yet there are still tensions on larger scales which may eventually require challenging Einstein's theory of gravity. These are related to two classes of observations. First, there are long-standing astrophysical results that imply the existence of dark matter [2]. Second, there is now robust evidence in favour of the acceleration of the expansion of the Universe [3; 4]. The latter is best accounted for in GR by adding a cosmological constant to the Lagrangian of the theory. However, understanding the origin of this constant energy density, e.g. from quantum field theoretic considerations, has proven to be fraught with difficulties [5]. Modifying GR is not an easy task either [6]. We know from Lovelock's theorem [7] and, in the effective field theory context, from Weinberg's theorem [8] that GR is unique in four dimensions provided Lorentz invariance and the masslessness of the graviton hold [9].
One way to modify GR is to add additional fields. This is the route taken by scalar-tensor theories [10] in which one or more additional scalar fields are added to the setting and couple to matter. Such scalar fields appear in many proposed dynamical models attempting to account for the late cosmic acceleration [11]. They also find theoretical motivation in the fact that they appear naturally in UV-complete theories such as string theory [12; 13]. In particular, the swampland conjectures favour an explanation of the late-time acceleration of the Universe resulting from the non-trivial dynamics of string moduli [14]. We will present the axio-dilaton model, one such scalar-tensor theory, whose origin can be traced to a compactification of string theory from ten to four dimensions [13].
Unfortunately, scalar-tensor theories are typically ruled out by solar system tests of gravity. In their most naive form, consistency with these tests requires either a small coupling to matter or a scalar stabilised with a large mass, leading to short-ranged interactions [15].
This can be illustrated with the Brans-Dicke theory (BD) and the action [1; 16]
\[\mathcal{S}_{\rm BD}=\int d^{4}x\sqrt{-g}M_{p}^{2}\left(\frac{ \mathcal{R}}{2}-\frac{1}{2}\partial^{\mu}\varphi\partial_{\mu}\varphi-V( \varphi)\right)\\ +\mathcal{S}_{m}\left(g^{J}_{\mu\nu},\,\psi_{m}\right) \tag{1}\]
where \(\psi_{m}\) represents the ordinary matter fields and the Jordan frame metric is given by
\[g^{J}_{\mu\nu}=A^{2}(\varphi)g_{\mu\nu}\quad\text{with}\quad A(\varphi)=e^{ \mathfrak{g}\varphi} \tag{2}\]
where \(\mathfrak{g}\) is a coupling constant. The scalar couples to matter only via the metric \(g^{J}_{\mu\nu}\). The metric \(g_{\mu\nu}\) is the Einstein metric, for which the Einstein equations take their usual form, while matter "feels" the geometry of the Jordan frame.
Constraints from solar system tests like those using measurements by the Cassini probe [17] can be understood using the Parameterized Post-Newtonian (PPN) formalism. It is enough here to define two important PPN parameters. We introduce the gravitational mass of the source \(M\), and the PPN parameters \(\gamma_{PPN}\), \(\beta_{PPN}\) via the following parameterisation of the Jordan metric element in isotropic coordinates around a given compact object
\[ds_{J}^{2}=-\left(1-\frac{2GM}{r}+2\beta_{PPN}\left(\frac{GM}{r}\right)^{2}+\mathcal{O}\left(\frac{1}{r^{3}}\right)\right)dt^{2}\] \[+\left(1+2\gamma_{PPN}\frac{GM}{r}+\mathcal{O}\left(\frac{1}{r^{2}}\right)\right)\left(dr^{2}+r^{2}d\Omega^{2}\right)\. \tag{3}\]
In GR \(\gamma_{PPN}=\beta_{PPN}=1\) whilst for the Brans-Dicke theory the deviation from GR is captured by \(\gamma_{PPN}\), i.e. \(\beta_{PPN}=1\) and [13]
\[\gamma_{PPN}=\frac{1-2\mathfrak{g}^{2}}{1+2\mathfrak{g}^{2}}. \tag{4}\]
The Cassini probe on the other hand gives [17]
\[|\gamma_{PPN}-1|<2.3\times 10^{-5}. \tag{5}\]
This constrains the coupling \(\mathfrak{g}^{2}\) to be less than \(10^{-5}\).
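For concreteness, Eq.(4) and the Cassini bound can be combined into a numerical bound on \(\mathfrak{g}\); a minimal arithmetic sketch (our own check, not from the paper):

```python
# Invert |gamma_PPN - 1| = 4 g^2/(1 + 2 g^2) < 2.3e-5 (Eqs. 4-5) for the
# Brans-Dicke coupling g; for small g this is simply 4 g^2 < 2.3e-5.
import numpy as np

bound = 2.3e-5
g2_max = bound / (4.0 - 2.0 * bound)     # exact inversion of 4 g^2/(1 + 2 g^2) = bound
print(f"g^2 < {g2_max:.1e}, g < {np.sqrt(g2_max):.1e}")   # g^2 of order 1e-5 or below
```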
Larger deviations from GR can be reached in certain regimes when the theories are subject to a _screening mechanism_, i.e. a modification which allows the theory to evade solar system tests while having an order unity coupling to matter whose role is important on large scales. There are now at least three types of screening mechanisms for single field models, i.e. the chameleon, K-mouflage and Vainshtein mechanisms [18; 19; 20]. They all rely on higher derivatives and/or non-linearities of the kinetic terms and interaction potentials of the models [15]. In [13], a new screening mechanism was introduced, which relies on a second field that has a non-zero but small coupling to matter. The mechanism depends crucially on the interplay between the scalar profiles inside and outside matter [21; 22]. Here, we consider the situation where the two fields, the dilaton and the axion, have linear couplings to matter that we denote by \(\kappa\) and \(\epsilon\). We focus on the small \(\epsilon\) regime and vary \(\kappa\) from small values up to unity, which corresponds to its value in supergravity where the dilaton plays the role of the volume modulus of string compactifications [23]. We find that for \(\kappa=1\), screening of the dilaton around compact objects only takes place when the fields take particular values at infinity. These values should emerge from the cosmological dynamics. We then focus on the cosmology of these models, varying both \(\kappa\) and the couplings \(\epsilon\) to baryons and to cold dark matter. Generically, the field values in the present Universe do not satisfy the screening conditions. In fact, one must resort to yet unknown screening mechanisms for the axio-dilaton system in order to accommodate both local solar system tests and cosmological constraints such as the Baryon Acoustic Oscillations [24].
The coupling of the axion to dark matter \(\epsilon_{C}\) plays a crucial role cosmologically and can be of order unity. On the other hand, we find that the coupling of the dilaton \(\kappa\) must be reduced locally to small values in order to pass solar system tests. In this paper, we will consider two likely scenarios. The first one is that the coupling \(\kappa\) is determined locally to be extremely small, of the order of \(10^{-3}\), and that most of the cosmological dynamics are due to the coupling of the axion \(\epsilon_{C}\), in a manner reminiscent of coupled quintessence [25] although in a multi-field setting here [26]. This could be achieved if another field \(\chi\) drove the coupling \(\kappa\) to such values dynamically in the whole Universe. Another possibility could be that the coupling \(\kappa\) is small locally but allowed to take larger values cosmologically. This could happen if the field \(\chi\) only made \(\kappa\) small locally. In this second scenario, we find that the BAO constraints on late time cosmology are so stringent that \(\kappa\) cannot be taken of order unity as in the original supergravity model. In this setting, we consider the allowed deviations of \(H_{0}\) from its value as calibrated by the Planck satellite experiment and find that only a few percent discrepancy is allowed. This is much less than the current \(H_{0}\) tension [27; 28]. Finally, we notice that the linear growth in these models is reduced compared to the \(\Lambda\)CDM case, despite the existence of attractive fifth forces due to the dilaton and the axion. This could provide a solution to the \(\sigma_{8}\) tension [29] where the observed amount of clustering is reduced compared to the one expected from early times [30; 31]. The precise study of this possibility is left to future work.
The scenario that we present here enlarges the usual single-field class of models for late time cosmology. Such multi-field generalisations could prove useful in view of future measurements and present cosmological tensions [32].
The paper is arranged as follows. In section II, we present the axio-dilaton model. In section III, we consider the screening of a compact object. We then discuss the cosmology of these models in section IV.
## II The Axio-dilaton theory
### The Lagrangian
The axio-dilaton theory contains two scalar fields, the dilaton \(\tau>0\) and the axion \(a\). The dilaton \(\tau\) couples to matter only through the Jordan frame metric while \(a\) is directly coupled to matter. The difference will be made clear below where we will construct an effective metric which will mediate the coupling of both scalars to matter. The action of the theory is
\[\mathcal{S}=\int d^{4}x\sqrt{-g}M_{p}^{2}\left[\frac{\mathcal{R}}{2}-\frac{3 }{4}\left(\frac{\partial^{\mu}\tau\partial_{\mu}\tau+\partial^{\mu}a\partial _{\mu}a}{\tau^{2}}\right)\right]+\mathcal{S}_{m}\, \tag{6}\]
with
\[\mathcal{S}_{m}=\mathcal{S}_{m}(g^{J}_{\mu\nu},\,a,\,\psi_{m}),\qquad g^{J}_{\mu\nu}=A(\tau)^{2}g_{\mu\nu}\] \[A(\tau)=\tau^{-\kappa/2}. \tag{7}\]
\(\kappa\) is a coupling constant. The theory studied in [13] corresponds to \(\kappa=1\) and is associated to a supergravity model of string theory origin with a Kahler potential \(K=-3\ln\bigl{(}\mathcal{T}+\overline{\mathcal{T}}\bigr{)}\) and a coupling to matter determined by \(A=e^{K/6}\). In this setting, the dilaton can be seen as the volume modulus of a 6d compactification of string theory from 10d to 4d. Introducing \(\kappa\) enables one to tune the matter-dilaton coupling and make the solar system tests of gravity easier to satisfy. We will see that screening is compulsory for the model to pass the solar system tests of gravitation.
The two fields can be viewed as the real and imaginary parts of a complex field \(\mathcal{T}=\frac{1}{2}(\tau+ia)\) whose Lagrangian is
\[\mathcal{L}=\sqrt{-g}M_{p}^{2}\left(\frac{\mathcal{R}}{2}-\frac{3\partial^{ \mu}\mathcal{T}\partial_{\mu}\overline{\mathcal{T}}}{(\mathcal{T}+\overline{ \mathcal{T}})^{2}}\right)+\mathcal{L}_{m}. \tag{8}\]
In this model, we define the usual stress-energy tensor and the coupling of \(a\) to matter
\[T^{\mu\nu}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta S_{m}}{\delta g_{\mu\nu}},\ {\cal A}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta S_{m}}{\delta a}. \tag{9}\]
We will use for the trace the notation \(T\equiv g_{\mu\nu}T^{\mu\nu}\). Notice that the matter action depends on the axion field in a non-trivial manner and not via the Jordan metric. This will have drastic consequences that we will unravel below.
There are two Klein-Gordon equations for the two scalar fields, i.e.
\[\Box\tau-\frac{1}{\tau}(\partial_{\mu}\tau\partial^{\mu}\tau-\partial_{\mu}a \partial^{\mu}a)-\kappa\frac{\tau}{3M_{p}^{2}}T=0\, \tag{10}\]
and
\[\Box a-\frac{2}{\tau}\partial_{\mu}\tau\partial^{\mu}a+\frac{\tau^{2}}{3M_{p} ^{2}}{\cal A}=0. \tag{11}\]
It is convenient to introduce the dilaton field \(\varphi\) such that \(\tau=e^{\varphi}\) leading to
\[\Box\varphi+(\partial_{\mu}a\partial^{\mu}a)e^{-2\varphi}-\frac{\kappa}{3M_{p }^{2}}T=0. \tag{12}\]
The Einstein equation is simply
\[{\cal R}_{\mu\nu}-\frac{3}{2\tau^{2}}(\partial_{\mu}\tau\partial_{\nu}\tau+ \partial_{\mu}a\partial_{\nu}a)-\frac{1}{M_{p}^{2}}(T_{\mu\nu}-\frac{1}{2}Tg_ {\mu\nu})=0. \tag{13}\]
where we have separated the matter energy momentum tensor from the scalar one.
### The symmetries
In the absence of matter, the theory is invariant under an \(SL(2,\mathbb{R})\) group whose origin can be traced back to supergravity. Indeed, the kinetic term of the fields is invariant under
\[{\cal T}\longrightarrow\frac{a{\cal T}-ib}{ic{\cal T}+d}\qquad\text{provided}\qquad ad-bc=1 \tag{14}\]
corresponding to a Kahler transformation of the theory. There are thus three conserved currents, corresponding to the dimension of the symmetry group in the absence of matter. We can choose as a basis for these currents
* The axion shift symmetry \({\cal T}\to{\cal T}-ib\) (\(a=c=0\), \(d=1\)): \[J_{A}^{\mu}=\frac{\partial^{\mu}a}{\tau^{2}}\.\] (15)
* The rescaling symmetry \({\cal T}\to a{\cal T}\) (\(b=c=0\), \(d=1\)): \[J_{S}^{\mu}=\frac{\partial^{\mu}\tau}{\tau}+\frac{a\partial^{\mu}a}{\tau^{2} }\.\] (16)
* The non-linear symmetry \({\cal T}\to{\cal T}-ic{\cal T}^{2}\) (\(a=d=1\), \(b=0\), \(c\ll 1\)): \[J_{N}^{\mu}=\frac{\tau^{2}-a^{2}}{\tau^{2}}\partial^{\mu}a-2a\frac{\partial^{ \mu}\tau}{\tau}\.\] (17)
From the Klein-Gordon equations we can directly obtain the (non-)conservation laws
\[\nabla_{\mu}J_{A}^{\mu}=-\frac{{\cal A}}{3M_{p}^{2}}\,\qquad \nabla_{\mu}J_{S}^{\mu}=\frac{\kappa T-a{\cal A}}{3M_{p}^{2}}\,\] \[\nabla_{\mu}J_{N}^{\mu}=\frac{(a^{2}-\tau^{2}){\cal A}-2a\kappa T}{3M_{p}^{2}}\. \tag{18}\]
As can be seen, matter breaks the whole symmetry group as none of the three currents is conserved anymore.
When \({\cal S}_{m}\) does not depend on \(a\), i.e. \({\cal A}=0\), the axio-dilaton theory is equivalent to a Brans-Dicke theory. Indeed when \({\cal A}=0\) we have the constant axion solution \(a=\text{const}\), and so Eq.(12) becomes
\[\Box\varphi-\frac{\kappa}{3M_{p}^{2}}T=0. \tag{19}\]
Similarly, the BD action (1) gives the Klein-Gordon equation
\[\Box\varphi_{\text{BD}}+\frac{\mathfrak{g}}{M_{p}^{2}}T=0. \tag{20}\]
Matching the two Weyl factors \(e^{-\kappa\varphi/2}=e^{\mathfrak{g}\varphi_{\text{BD}}}\) we get \(\varphi_{\text{BD}}=(-\kappa/2\mathfrak{g})\varphi\). Combining the two Klein-Gordon equations
\[\left(\frac{-\kappa}{2\mathfrak{g}}+\frac{3\mathfrak{g}}{\kappa}\right)\Box \varphi=0\implies\mathfrak{g}^{2}=\kappa^{2}/6. \tag{21}\]
When \(\kappa=1\), we find that the coupling reduces to \(1/\sqrt{6}\), i.e. the same as for \(f(R)\) and massive gravity.
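This identification can be checked symbolically; a minimal sketch (our own verification, assuming sympy) solving the bracket of Eq.(21) for \(\mathfrak{g}\):

```python
# Solve (-kappa/(2 g) + 3 g/kappa) = 0 from Eq.(21) for g, confirming g^2 = kappa^2/6.
import sympy as sp

kappa, g = sp.symbols('kappa g', positive=True)
sol = sp.solve(sp.Eq(-kappa/(2*g) + 3*g/kappa, 0), g)
print(sol)              # [sqrt(6)*kappa/6], i.e. g = kappa/sqrt(6)
```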
## III Non-relativistic source and screening
Screening requires studying the gravitational physics around objects like the Sun. We model the Sun and other compact objects such as the Earth or the Moon as non-relativistic sources, i.e. the only non-zero component of their stress-energy tensor is \(T_{00}\equiv\rho\). We also assume that they are static and spherically symmetric. We look for space-time solutions with the same symmetries that we chart with isotropic coordinates
\[g_{\mu\nu}dx^{\mu}dx^{\nu}=-e^{2u(r)}dt^{2}+e^{2w(r)}\left(dr^{2}+r^{2}d\Omega^ {2}\right). \tag{22}\]
The Jordan metric is obtained by multiplying this line element by the coupling function \(A^{2}\).
### Exterior solution
We will simplify the setting by considering non-relativistic objects with a small Newtonian potential. As a result, we will approximate the Klein-Gordon equations using a flat background \(\mathbf{g}=\mathbf{\eta}\) where \(\eta_{\mu\nu}\) is the Minkowski metric tensor. The validity of the approximation is evaluated a posteriori.
In the following, primes will refer to derivatives with respect to \(r\). We assume spherical symmetry. The resulting Klein-Gordon equations are
\[\tau^{\prime\prime}+\frac{2\tau^{\prime}}{r}-\frac{\tau^{\prime 2}-{a^{\prime}}^{2} }{\tau}+\kappa\frac{\tau\rho}{3M_{p}^{2}}=0\, \tag{23}\]
and
\[a^{\prime\prime}+\frac{2a^{\prime}}{r}-\frac{2\tau^{\prime}a^{\prime}}{\tau}+ \frac{\tau^{2}\mathcal{A}}{3M_{p}^{2}}=0. \tag{24}\]
Here \(\rho\) is the matter density inside or outside the object. Instead of using these equations directly, we can use the equations for the currents (18). In the absence of matter, i.e. outside the objects, the currents are conserved
\[r^{2}\frac{a^{\prime}}{\tau^{2}}=C_{A}\, \tag{25}\] \[r^{2}\left(\frac{\tau^{\prime}}{\tau}+\frac{aa^{\prime}}{\tau^{2}}\right)=C_{S}\, \tag{26}\] \[r^{2}\left(\frac{(\tau^{2}-a^{2})a^{\prime}}{\tau^{2}}-\frac{2a\tau^{\prime}}{\tau}\right)=C_{N}. \tag{27}\]
These constants are fixed by the conditions inside the source.
We are interested in the case where \(a\neq\text{const}\), so that \(C_{A}\neq 0\). Defining \(\gamma\equiv C_{A}\), \(\alpha\equiv C_{S}/C_{A}\), \(\beta^{2}\equiv(C_{S}/C_{A})^{2}+C_{N}/C_{A}\), we obtain
\[\tau^{2}+(a-\alpha)^{2}=\beta^{2}. \tag{28}\]
Thus \(\tau\) and \(a\) evolve on a circle in the \(\tau\)-\(a\) plane. We can eliminate \(\tau\) in Eq.(25) and get
\[a^{\prime}=\frac{\gamma}{r^{2}}(\beta^{2}-(a-\alpha)^{2}). \tag{29}\]
We integrate this to obtain the axion profile
\[a=\alpha-\beta\tanh X(r)\quad\text{with}\quad X(r)=\frac{\gamma\beta}{r}+ \delta\, \tag{30}\]
where \(\delta\) is a new integration constant. Using again Eq.(25) we finally get
\[\tau=\frac{\beta}{\cosh X(r)}. \tag{31}\]
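One can check directly that these profiles solve the vacuum Klein-Gordon equations, i.e. Eqs.(23)-(24) with \(\rho=\mathcal{A}=0\); a minimal symbolic sketch (our own check, assuming sympy):

```python
# Verify that a = alpha - beta*tanh(X) and tau = beta/cosh(X), X = gamma*beta/r + delta,
# satisfy the sourceless Eqs.(23)-(24); both residuals should simplify to 0.
import sympy as sp

r = sp.symbols('r', positive=True)
alpha, beta, gamma, delta = sp.symbols('alpha beta gamma delta', real=True)
X = gamma*beta/r + delta
a = alpha - beta*sp.tanh(X)
tau = beta/sp.cosh(X)

kg_tau = sp.diff(tau, r, 2) + 2*sp.diff(tau, r)/r - (sp.diff(tau, r)**2 - sp.diff(a, r)**2)/tau
kg_a   = sp.diff(a, r, 2) + 2*sp.diff(a, r)/r - 2*sp.diff(tau, r)*sp.diff(a, r)/tau
print(sp.simplify(kg_tau.rewrite(sp.exp)), sp.simplify(kg_a.rewrite(sp.exp)))   # 0 0
```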
Some of the integration constants are fixed by the boundary conditions inside the source. Indeed, from the currents (18), we see that
\[\gamma=C_{A}=-\frac{1}{3M_{p}^{2}}\int_{0}^{R}drr^{2}\mathcal{A}(r)\, \tag{32}\]
\[\gamma\alpha=C_{S}=-\frac{1}{3M_{p}^{2}}\int_{0}^{R}drr^{2}(\kappa\rho(r)+a(r) \mathcal{A}(r)). \tag{33}\]
On the other hand, \(\beta\) and \(\delta\) can only be fixed by the values of the fields at infinity
\[a_{\infty}=\alpha-\beta\tanh\delta,\ \beta=\tau_{\infty}\cosh\delta=\sqrt{ \tau_{\infty}^{2}+(\alpha-a_{\infty})^{2}}. \tag{34}\]
As shown in appendix A, these solutions in the flat background approximation are valid as long as
\[r\gg GM\qquad\text{and}\qquad r\gg|\gamma\beta|. \tag{35}\]
The first condition corresponds to being far from the Schwarzschild radius.
### Screening
We first investigate screening in the Jordan frame where we will extract the PPN parameters from the Jordan metric
\[g_{\mu\nu}^{J}dx^{\mu}dx^{\nu}=A^{2}g_{\mu\nu}dx^{\mu}dx^{\nu}\] \[=-\left(1-\frac{2GM_{g}}{r}+2\beta_{PPN}\left(\frac{GM_{g}}{r} \right)^{2}+\mathcal{O}\left(\frac{1}{r^{3}}\right)\right)dt^{2}\] \[+\left(1+2\gamma_{PPN}\frac{GM_{g}}{r}+\mathcal{O}\left(\frac{1}{ r^{2}}\right)\right)(dr^{2}+r^{2}d\Omega^{2}). \tag{36}\]
where \(A^{2}=1/\tau^{\kappa}\) and \(g_{\mu\nu}\) is given by (22). \(M_{g}\) is the gravitational mass as defined in the PPN formalism. We have that \(M_{g}\neq M=\int_{\text{source}}\rho\), i.e. the mass of the object in the Jordan frame is renormalised by the presence of the scalar fields.
In the absence of fields, the Einstein frame metric \(g_{\mu\nu}\) is the Schwarzschild metric. In the presence of the fields the Einstein equations are modified and we expand
\[e^{2u} =1-\frac{2l}{r}+\frac{2l^{2}}{r^{2}}+\mathcal{O}\left(\frac{1}{r^{ 3}}\right)\, \tag{37}\] \[e^{2w} =1+\frac{2l}{r}+\mathcal{O}\left(\frac{1}{r^{3}}\right). \tag{38}\]
where \(l=GM\). Expanding the conformal factor \(A^{2}\) in inverse powers of the distance, we have
\[A^{2}=A_{\infty}^{2}\left(1-\alpha_{1}/r+\alpha_{2}/r^{2}+\mathcal{O}\left( \frac{1}{r^{3}}\right)\right)\, \tag{39}\]
from which we deduce that
\[\gamma_{PPN} =\frac{1-\frac{\alpha_{1}}{2l}}{1+\frac{\alpha_{1}}{2l}}\, \tag{40}\] \[\beta_{PPN} =\frac{l^{2}+l\alpha_{1}+\frac{1}{2}\alpha_{2}}{(l+\frac{1}{2} \alpha_{1})^{2}}. \tag{41}\]
For the axio-dilaton theory the conformal factor is given by
\[A^{2}=\tau^{-\kappa}=\left(\frac{\cosh\!\left(\frac{\beta\gamma}{r}+\delta\right) }{\beta}\right)^{\kappa} \tag{42}\]
where we have used the explicit solution for \(\tau(r)\). Expanding in \(1/r\) we find the coefficients
\[\alpha_{1} =-\kappa\beta\gamma\tanh\delta\qquad,\] \[\alpha_{2} =\frac{\kappa\beta^{2}\gamma^{2}}{2}((\kappa-1)\tanh^{2}\delta+1 ). \tag{43}\]
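The coefficients of Eq.(43) follow from a straightforward series expansion of Eq.(42) in \(1/r\); a symbolic sketch of this step (our own check, assuming sympy):

```python
# Expand A^2 = (cosh(beta*gamma/r + delta)/beta)^kappa in u = 1/r and compare the
# coefficients with alpha_1 and alpha_2 of Eq.(43); both prints should give 0.
import sympy as sp

u = sp.symbols('u', positive=True)
beta, gamma, delta, kappa = sp.symbols('beta gamma delta kappa', positive=True)
A2 = (sp.cosh(beta*gamma*u + delta)/beta)**kappa
A2_inf = (sp.cosh(delta)/beta)**kappa

ser = sp.series(A2/A2_inf, u, 0, 3).removeO().expand()      # 1 - alpha_1 u + alpha_2 u^2
alpha1 = -ser.coeff(u, 1)
alpha2 = ser.coeff(u, 2)
print(sp.simplify(alpha1 + kappa*beta*gamma*sp.tanh(delta)))
print(sp.simplify(alpha2 - kappa*beta**2*gamma**2/2*((kappa - 1)*sp.tanh(delta)**2 + 1)))
```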
In the following, we will always adopt the ansatz of [13],
\[\mathcal{A}=-\epsilon T\, \tag{44}\]
where \(\epsilon\) is a small constant and \(T\) the trace of the energy momentum tensor. In the case of static sources this reduces to \(\mathcal{A}=\epsilon\rho\). As we shall see, this choice is not innocuous, as it brings the two-field model back within the realm of scalar-tensor theories with an effective coupling to both the dilaton and the axion. We will make this explicit in the following. From Eq.(32) we obtain
\[\gamma=-\frac{2\epsilon GM}{3}=-\frac{2\epsilon l}{3}. \tag{45}\]
and in the Jordan frame the PPN parameters become
\[\gamma_{PPN}=\frac{3-\epsilon\kappa\beta\tanh\delta}{3+\epsilon\kappa\beta \tanh\delta}\, \tag{46}\]
and
\[\beta_{PPN}=1+\frac{\kappa\epsilon^{2}\beta^{2}(1-\tanh^{2}\delta)}{(3+\kappa \epsilon\beta\tanh\delta)^{2}}\.\]
Hence by taking \(\epsilon\) small GR is recovered as long as \(\beta\) does not increase accordingly. This is the essence of this new type of screening which corresponds to a non-uniform limit \(\epsilon\to 0\). Indeed, when \(\epsilon=0\) the model is equivalent to a Brans-Dicke model with a coupling \(\kappa/\sqrt{6}\) which needs to be small enough to satisfy the solar system tests of gravity. But it turns out that the non-uniform limit \(\epsilon\to 0\) requires a particular choice of the boundary conditions at infinity which are not generic. We will come back to this point in the next section.
### The interior solution
We now solve Eq.(23) and Eq.(24) inside the source of radius \(R\), with the boundary conditions \(\tau^{\prime}(0)=a^{\prime}(0)=0\) at the origin. We assume a uniform density inside the body as a simplifying assumption and a coupling \(\mathcal{A}\) with the same profile as \(\rho\)
\[\begin{cases}\rho\equiv\rho_{0}=\frac{M}{\frac{4}{3}\pi R^{3}}\,\\ \mathcal{A}=\epsilon\rho_{0}\.\end{cases} \tag{47}\]
The dynamical equations cannot be solved exactly. We will get perturbative solutions in \(\epsilon\) as we have seen that a small coupling \(\epsilon\) is required to screen in the Jordan frame.
#### ii.3.1 The dilaton dynamics
The equation for the current \(J^{\mu}_{A}\) from Eq.(18) gives
\[\left(r^{2}\frac{a^{\prime}}{\tau^{2}}\right)^{\prime}=-\frac{r^{2}\mathcal{ A}}{3M_{p}^{2}}. \tag{48}\]
Assuming regularity for the fields at the origin, we obtain
\[\frac{a^{\prime}(r)}{\tau^{2}(r)}=-\frac{\epsilon\rho_{0}}{3M_{p}^{2}r^{2}} \frac{r^{3}}{3}. \tag{49}\]
We define \(m^{2}=\rho_{0}/3M_{p}^{2}\) leading to
\[a^{\prime}=-\frac{\epsilon m^{2}}{3}r\tau^{2}. \tag{50}\]
We can substitute this expression in Eq.(23) to get
\[\tau^{\prime\prime}+\frac{2\tau^{\prime}}{r}-\frac{(\tau^{\prime})^{2}}{\tau} +\frac{\epsilon^{2}m^{4}}{9}r^{2}\tau^{3}+\kappa m^{2}\tau=0. \tag{51}\]
In terms of the dilaton \(\varphi=\ln\tau\), the Klein-Gordon equation Eq.(12) then becomes
\[\varphi^{\prime\prime}+\frac{2}{r}\varphi^{\prime}+\frac{\epsilon^{2}m^{4}}{ 9}r^{2}e^{2\varphi}+\kappa m^{2}=0 \tag{52}\]
where we impose the boundary condition \(\varphi^{\prime}(0)=0\).
#### ii.3.2 Perturbative expansion in \(\epsilon\)
Up to now, everything is exact. We now expand in powers of \(\epsilon\):
\[\varphi =\varphi^{(0)}+\epsilon\varphi^{(1)}+\ldots\, \tag{53}\] \[a =a^{(0)}+\epsilon a^{(1)}+\ldots\, \tag{54}\]
and impose the boundary conditions at each order.
The advantage of this perturbative method is that the problematic term \(e^{2\varphi}\) in Eq.(52) appears only at second order in \(\epsilon\). Indeed we get at orders \(0\) and \(1\)
\[\varphi^{(0)}{}^{\prime\prime}+\frac{2}{r}\varphi^{(0)}{}^{\prime }+\kappa m^{2} =0\, \tag{55}\] \[\varphi^{(1)}{}^{\prime\prime}+\frac{2}{r}\varphi^{(1)}{}^{\prime } =0. \tag{56}\]
The solution for \(\varphi^{(1)}\) is then
\[\varphi^{(1)}=c_{0}+\frac{c_{1}}{r}. \tag{57}\]
For \(\varphi^{(0)}\) we have
\[\varphi^{(0)}=d_{0}+\frac{d_{1}}{r}-\frac{\kappa m^{2}}{6}r^{2}. \tag{58}\]
The boundary conditions \({\varphi^{(n)}}^{\prime}(0)=0\) impose that the fields are regular at \(0\), so there is no \(1/r\) term, i.e. \(c_{1}=d_{1}=0\), and finally
\[\tau=(\tau_{0}+\epsilon\tau_{1})e^{-\frac{\kappa m^{2}}{6}r^{2}}+\ldots \tag{59}\]
where we have redefined the constants of integration. Inside the source and for small \(\epsilon\), \(\tau\) decreases exponentially fast.
We can now obtain \(a\). Using Eq.(50) we have
\[{a^{(0)}}^{\prime}+\epsilon{a^{(1)}}^{\prime}=\epsilon\frac{\tau_{0}^{2}}{2} \left(e^{-\frac{\kappa m^{2}}{3}r^{2}}\right)^{\prime}\, \tag{60}\]
and therefore
\[a^{(0)} =\text{const}\, \tag{61}\] \[a^{(1)} =\frac{\tau_{0}^{2}}{2}e^{-\frac{\kappa m^{2}}{3}r^{2}}+\text{const}. \tag{62}\]
The axion field is then given by
\[a=a_{0}+\epsilon\left(a_{1}+\frac{\tau_{0}^{2}}{2}e^{-\frac{\kappa m^{2}}{3}r ^{2}}\right)+\ldots. \tag{63}\]
We see that the axion and dilaton fields only evolve when \(\epsilon\neq 0\).
#### ii.3.3 Matching to the exterior solution
The continuity of \(\tau\) and \(a\) at \(r=R\) reads
\[(\tau_{0}+\epsilon\tau_{1})e^{-\frac{\kappa m^{2}}{6}R^{2}} =\frac{\beta}{\cosh(\frac{\beta\gamma}{R}+\delta)}\, \tag{64}\] \[a_{0}+\epsilon\left(a_{1}+\frac{\tau_{0}^{2}}{2}e^{-\frac{\kappa m ^{2}}{3}R^{2}}\right) =\alpha-\beta\tanh(\frac{\beta\gamma}{R}+\delta). \tag{65}\]
We use the continuity equations for \(\varphi^{\prime}=\tau^{\prime}/\tau\) and \(a^{\prime}/\tau^{2}\)
\[-\frac{\kappa m^{2}}{3}R =\tanh(\frac{\beta\gamma}{R}+\delta)\frac{\beta\gamma}{R^{2}}\, \tag{66}\] \[-\frac{\epsilon\kappa\tau_{0}^{2}m^{2}}{3(\tau_{0}+\epsilon\tau_ {1})^{2}}R =\frac{\gamma}{R^{2}}. \tag{67}\]
We have thus eight integration constants, i.e. \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) for the exterior solution and \(\tau_{0}\), \(\tau_{1}\), \(a_{0}\), \(a_{1}\) for the interior solution. Recall that
\[\gamma =-\frac{\epsilon m^{2}}{3}R^{3}\, \tag{68}\] \[\gamma\alpha =-m^{2}\int_{0}^{R}drr^{2}(\kappa+\epsilon a(r)). \tag{69}\]
So \(\gamma\) is fixed independently of the others and \(\alpha\) depends on \(a_{0}\) and \(a_{1}\). We are left with \(6\) parameters. With a system of \(4\) continuity constraints as given above, we end up with \(2\) degrees of freedom. These can be parameterised by the values of the fields at infinity \(\tau_{\infty}\) and \(a_{\infty}\) which determine the full solution.
### Screening revisited
We can now revisit the conditions under which the gravitational deviation from GR in the Jordan frame is small. Using
\[\gamma\alpha=-\frac{1}{3M_{p}^{2}}\int_{0}^{R}drr^{2}(\kappa\rho(r)+a(r){\cal A }(r))\, \tag{70}\]
and \(\gamma=-\epsilon m^{2}R^{3}/3\) we obtain the identity
\[\frac{-\epsilon m^{2}R^{3}}{3}\alpha=-m^{2}\left(\kappa\frac{R^{ 3}}{3}+\epsilon\int_{0}^{R}drr^{2}a^{(0)}(r)\right.\\ +\left.\epsilon^{2}\int_{0}^{R}drr^{2}a^{(1)}(r)\right). \tag{71}\]
As a result, the expansion of \(\alpha\) is singular in the limit \(\epsilon\ll 1\) and becomes
\[\alpha=\frac{\kappa}{\epsilon}+a_{0}+\epsilon a_{1}+\ldots \tag{72}\]
implying that the outside solution is very sensitive to small values of \(\epsilon\). In particular, we see that the limit of small \(\epsilon\ll 1\) leads to a large value for the exterior axion field. Using Eq.(34) we obtain
\[\beta=\sqrt{\tau_{\infty}^{2}+\left(\frac{\kappa}{\epsilon}+a_{0}-a_{\infty}^ {(0)}\right)^{2}} \tag{73}\]
where we have neglected the terms of order \(\epsilon\). Now unless \(a_{\infty}\) turns out to be of order \(1/\epsilon\) and cancels exactly the term in \(\kappa/\epsilon\), we find that for generic boundary values at infinity
\[\beta=\frac{\kappa}{\epsilon}+a_{0}-a_{\infty}^{(0)}+\ldots \tag{74}\]
when \(\kappa={\cal O}(1)\) and \(\epsilon\ll 1\). The matching conditions at \(r=R\) simplify as we notice that \(m^{2}R^{2}/2=G_{N}M/R\) where \(M\) is the mass of the object. This is nothing but the Newtonian potential of the compact object at its surface which is always small in our Newtonian approximation, its value being close to \(10^{-6}\) for the Sun. We then deduce using (65) that \(a_{0}=a_{\infty}^{(0)}\) to leading order and similarly
\[\tanh\delta=1-\frac{\epsilon^{2}}{\kappa}\frac{\tau_{0}^{2}}{2}+\ldots \tag{75}\]
implying that \(\delta\) is always large. As a result we have \(\epsilon\beta\tanh\delta=\kappa+\ldots\) and the PPN parameters are simply
the ones of a scalar tensor theory with a coupling \(\kappa/\sqrt{6}\)
\[\gamma_{PPN}=\frac{3-\kappa^{2}}{3+\kappa^{2}}+\mathcal{O}(\epsilon)\, \tag{76}\]
and
\[\beta_{PPN}=1+\mathcal{O}(\epsilon^{2}). \tag{77}\]
In this limit \(\beta_{PPN}\) can be arbitrarily close to 1 for small \(\epsilon\), but not \(\gamma_{PPN}\). Small deviations of \(\gamma_{PPN}\) from unity are only achieved for very small \(\kappa\), a result which does not differ from Brans-Dicke's and signals that screening does not take place. Screening can only take place when
\[a_{\infty}^{(0)}-a_{0}=\frac{\kappa}{\epsilon}\, \tag{78}\]
which corresponds to a specific choice for the axion field at infinity. As the theory has no scalar potential, the value of the axion field at infinity is not obtained by minimising an effective potential like in the chameleon mechanism. Hence, the boundary value of the axion field must emerge from the cosmological dynamics. We will study this below.
Our analysis has assumed that \(\mathcal{A}=\epsilon\rho\). As soon as the dependence of the matter action on the axion is weak, i.e. \(\mathcal{A}/\rho\sim\epsilon\ll 1\), the same qualitative results follow as \(\gamma\) will be of order \(\epsilon\) and \(\gamma\alpha\) of order \(\kappa\). This reasoning is independent of the details of the model inside the source.
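The non-uniform nature of the \(\epsilon\to 0\) limit can be illustrated numerically using Eqs.(46), (73) and (75); the sketch below uses illustrative values \(a_{0}=0\), \(\tau_{\infty}=1\) and \(\tanh\delta\approx 1\) (our own choices, not from the paper):

```python
# gamma_PPN from Eq.(46) with beta from Eq.(73): generic boundary values give the
# unscreened Brans-Dicke result, while the tuned value a_inf = a_0 + kappa/eps screens.
import numpy as np

def gamma_ppn(eps, kappa, a_inf, a0=0.0, tau_inf=1.0, tanh_delta=1.0):
    beta = np.sqrt(tau_inf**2 + (kappa/eps + a0 - a_inf)**2)   # Eq.(73)
    x = eps*kappa*beta*tanh_delta
    return (3.0 - x)/(3.0 + x)

kappa, eps = 1.0, 1e-3
print(gamma_ppn(eps, kappa, a_inf=0.0))        # ~0.5 = (3 - kappa^2)/(3 + kappa^2): unscreened
print(gamma_ppn(eps, kappa, a_inf=kappa/eps))  # ~1: screened, Eq.(78) satisfied
```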
### The effective metric
As we have seen, the generic absence of screening implies that the coupling of a compact object is equivalent to that of a point particle with coupling \(\kappa/\sqrt{6}\). This is the coupling of matter to the dilaton. The fact that the axion also couples to the matter action implies that compact objects do not follow the geodesics of the Jordan metric but those of an effective metric whose presence can be inferred from the small field expansion
\[\delta S_{m}=-\int d^{4}x\sqrt{-g}\left(\frac{\partial\ln A}{\partial\varphi} \delta\varphi-\frac{\epsilon}{2}\delta a\right)T \tag{79}\]
where the variation of the fields is taken around the background values for the dilaton and the axion. Notice that the axion and the dilaton fields both couple to the trace of the energy momentum tensor. Let us define
\[B(\varphi,a)=A(\varphi)e^{-\epsilon a/2} \tag{80}\]
then the coupling to matter can be written as
\[\delta S_{m}=-\int d^{4}x\sqrt{-g}\left(\frac{\partial\ln B}{\partial\varphi} \delta\varphi+\frac{\partial\ln B}{\partial a}\delta a\right)T \tag{81}\]
corresponding to the coupling of a two-field scalar-tensor theory where the effective metric is
\[g^{\rm eff}_{\mu\nu}=B^{2}(\varphi,a)g_{\mu\nu}. \tag{82}\]
As a result, compact objects evolve along the geodesics of the effective metric and not the Jordan metric.
This can be confirmed by analysing the geodesic equations for pressureless matter for the axio-dilaton theories. Indeed the Klein-Gordon equations and the Bianchi identity \(\nabla^{\mu}(R_{\mu\nu}-\frac{R}{2}g_{\mu\nu})=0\) imply the non-conservation equation
\[\nabla_{\mu}T^{\mu\nu}=\frac{1}{2}(\kappa T\partial^{\nu}\varphi-\mathcal{A} \partial^{\nu}a). \tag{83}\]
For non-relativistic matter, the energy momentum tensor is simply
\[T_{\mu\nu}=\rho u_{\mu}u_{\nu}\quad\text{with}\quad u_{\mu}u^{\mu}=-1. \tag{84}\]
In this section, dots will denote the time derivative along the particle lines defined by \(u^{\mu}\), i.e. \(\dot{X}\equiv\nabla_{u}X=u^{\mu}\nabla_{\mu}X\). For scalar quantities \(X\), this also corresponds to the derivative with respect to the proper time of a particle moving with velocity \(u^{\mu}\). We define the local Hubble rate as \(3h\equiv\nabla_{\mu}u^{\mu}\). The non-conservation equation then becomes
\[\dot{\rho}u^{\mu}+3h\rho u^{\mu}=\frac{1}{2}(\kappa\rho\partial^{\mu}\varphi+ \mathcal{A}\partial^{\mu}a). \tag{85}\]
Contracting with \(u_{\mu}\) and using \(u^{\mu}u_{\mu}=-1\) we get the generalised continuity equation
\[\dot{\rho}+3h\rho=-\frac{1}{2}(\kappa\rho\dot{\varphi}+\mathcal{A}\dot{a}). \tag{86}\]
We recognise the coupling function \(B\)
\[B\equiv e^{-\frac{1}{2}(\kappa\varphi+\epsilon a)} \tag{87}\]
when \(\epsilon=\text{const}\). We can define a conserved density \(\rho_{\rm con}\) in the Einstein frame such that
\[\rho=B\rho_{\rm con} \tag{88}\]
as
\[\dot{\rho}_{\rm con}+3h\rho_{\rm con}=0. \tag{89}\]
This is the conserved matter density in the axio-dilaton setting. Combining Eq.(85) and Eq.(86), we obtain the modified Newton's Law
\[\dot{u}^{\mu}-\frac{1}{2}(\kappa\dot{\varphi}+\epsilon\dot{a})u^{\mu}=\frac{1}{2}(\kappa\partial^{\mu}\varphi+\epsilon\partial^{\mu}a). \tag{90}\]
This reads
\[\dot{u}^{\mu}+\frac{d\ln B}{d\eta}u^{\mu}=-\partial^{\mu}\ln B. \tag{91}\]
where \(\eta\) is the proper time. Defining by \(m_{0}\) the mass of the particles, we find that the effective mass of these particles in the Einstein frame is dressed by the scalar field and becomes
\[m=B(\varphi,a)m_{0}. \tag{92}\]
This follows from the identification \(\rho=m\delta^{(4)}(x^{\mu}-x^{\mu}(\tau))\) along the particle's trajectory and \(\rho_{\rm con}=m_{0}\delta^{(4)}(x^{\mu}-x^{\mu}(\tau))\). The momentum of each particle becomes
\[p^{\mu}=mu^{\mu}. \tag{93}\]
Newton's law then becomes
\[\dot{p}^{\mu}=-m\partial^{\mu}\ln B. \tag{94}\]
As a result, a force deriving from the potential \(\ln B\) is exerted on each particle whose mass is also field dependent. For instance in the non-relativistic limit and in the presence of gravity, Newton's law becomes
\[\frac{dp^{i}}{dt}=-m\partial^{i}\Phi \tag{95}\]
where \(\Phi_{N}\) is Newton's potential and
\[\Phi=\Phi_{N}+\ln B \tag{96}\]
combines the effects of gravity and the scalar field. This modification of Newton's law is nothing but the one which can be derived from the coupling of matter to the effective metric \(g^{\rm eff}_{\mu\nu}\). As a result we have confirmed that compact objects do not follow the geodesics of the Jordan metric but the one of the effective metric. We will analyse the cosmological consequences of this result below.
### The effective charge
Let us come back to the effective scalar charge carried by a compact object. We have seen that in the \(\epsilon\ll 1\) limit and unless the fields at infinity take special values, which should be adjusted cosmologically, the objects are not screened. Far away from a given object we expect the acceleration of another object due to the scalar field to fall off as
\[a^{i}\simeq-\frac{2Q^{2}G_{N}M}{r^{3}}r^{i} \tag{97}\]
where \(Q\) is the scalar charge of both objects. Here \(M\) is the mass of the object responsible for the acceleration of the second body. The charges of both objects are equal as no screening takes place. Using
\[\ln B=-\frac{1}{2}(\kappa\varphi+\epsilon a)\supset\frac{1}{2}(\kappa\ln\cosh X (r)+\epsilon\beta\tanh X(r)) \tag{98}\]
and identifying this to \(-2Q^{2}G_{N}M/r\) at large distance we find that far away from the object
\[2Q^{2}G_{N}M=-\frac{\gamma\beta}{2}(\kappa\tanh\delta+\epsilon\beta(1-\tanh^{ 2}\delta)) \tag{99}\]
and for \(\epsilon\ll 1\) we retrieve that
\[Q=\frac{\kappa}{\sqrt{6}} \tag{100}\]
up to corrections of order \(\epsilon^{2}\). The resulting interaction including gravity is equivalent to rescaling Newton's constant as
\[G_{\rm eff}=(1+2Q^{2})G_{N} \tag{101}\]
with \(\Phi=-G_{\rm eff}M/r\). As expected in this limit corresponding to \(\epsilon\ll\kappa\), the coupling of the axion field to matter becomes negligible for far-away objects and the coupling is the same as in the Jordan frame.
In conclusion, we find that \(\kappa\lesssim 10^{-3}\) is required to pass the Cassini test. This is a very small value which could only be avoided if the values of the axion and dilaton fields at infinity were tuned cosmologically.
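For reference, inverting Eq.(76) against the Cassini bound of Eq.(5) reproduces this order of magnitude; a minimal arithmetic sketch (our own check):

```python
# |gamma_PPN - 1| = 2 kappa^2/(3 + kappa^2) < 2.3e-5 from Eqs.(5) and (76).
import numpy as np

bound = 2.3e-5
kappa_max = np.sqrt(3.0*bound/(2.0 - bound))
print(f"kappa < {kappa_max:.1e}")      # a few times 1e-3
```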
### Numerical integration
In this section, we will find numerical solutions of the equations around a massive sphere.
#### iv.7.1 Setting the numerical problem
The Klein-Gordon equations with the constant source inside a ball of radius \(R\) have the form
\[\tau^{\prime\prime}+\frac{2\tau^{\prime}}{r}-\frac{(\tau^{\prime})^{2}}{\tau} +\frac{(a^{\prime})^{2}}{\tau}+\frac{\kappa\rho_{0}\tau}{3M_{p}^{2}}\theta(r) =0\, \tag{102}\]
\[a^{\prime\prime}+\frac{2a^{\prime}}{r}-\frac{2a^{\prime}\tau^{\prime}}{\tau} +\frac{\epsilon\rho_{0}\tau^{2}}{3M_{p}^{2}}\theta(r)=0\, \tag{103}\]
where \(\theta(r)\) is the step function that goes from 1 to 0 at the radius of the source \(R\). We introduce the characteristic length \(L=\sqrt{3M_{p}^{2}/\rho_{0}}=m^{-1}\), and write \(r=\hat{r}L\). We obtain the dimensionless equations:
\[\tau^{\prime\prime}+\frac{2\tau^{\prime}}{\hat{r}}-\frac{(\tau^{\prime})^{2}}{ \tau}+\frac{(a^{\prime})^{2}}{\tau}+\kappa\theta(\hat{r})\tau=0\, \tag{104}\]
\[a^{\prime\prime}+\frac{2a^{\prime}}{\hat{r}}-\frac{2a^{\prime}\tau^{\prime}}{ \tau}+\epsilon\theta(\hat{r})\tau^{2}=0\, \tag{105}\]
which we solve with the initial conditions \(\tau^{\prime}(0)=a^{\prime}(0)=0\). We also regularise the step function, e.g. \(\hat{\theta}(\hat{r})=\frac{1}{2}(\tanh\bigl{(}N\frac{\hat{R}-\hat{r}}{\hat{R}}\bigr{)}+1)\), to have a transition of width \(\sim\hat{R}/N\). Notice that \(\hat{R}=R/L=\sqrt{2GM/R}\), i.e. the square root of the ratio of the Schwarzschild radius to the radius of the source. For the Sun \(\hat{R}_{\odot}=2.05\times 10^{-3}\).
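A minimal numerical sketch of this setup (our own, with illustrative central values \(\tau(0)=1\), \(a(0)=0\) and Sun-like compactness) integrates Eqs.(104)-(105) outwards with scipy:

```python
# Integrate the dimensionless Klein-Gordon equations (104)-(105) from just outside
# the origin to well beyond the source, with the regularised step function theta.
import numpy as np
from scipy.integrate import solve_ivp

kappa, eps = 1.0, 1e-3
R_hat, Nsmooth = 2.05e-3, 50                    # Sun-like compactness, step smoothing

def theta(r):
    return 0.5*(np.tanh(Nsmooth*(R_hat - r)/R_hat) + 1.0)

def rhs(r, y):
    tau, dtau, a, da = y
    ddtau = -2.0*dtau/r + (dtau**2 - da**2)/tau - kappa*theta(r)*tau   # Eq.(104)
    dda   = -2.0*da/r + 2.0*da*dtau/tau - eps*theta(r)*tau**2          # Eq.(105)
    return [dtau, ddtau, da, dda]

r0, r1 = 1e-6*R_hat, 50.0*R_hat                 # avoid the coordinate singularity at r = 0
y0 = [1.0, 0.0, 0.0, 0.0]                       # tau(0)=1, a(0)=0, tau'(0)=a'(0)=0
sol = solve_ivp(rhs, (r0, r1), y0, rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], sol.y[2, -1])               # tau and a well outside the source
```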
#### iv.7.2 Results
We see in the example in Fig.1 that we obtain a perfect match with the exterior solution, but also with the first-order interior solution. The variations of the fields are relatively small, i.e. of order \(10^{-6}\) relative to the value at the centre. For other sets of parameters the variation increases with the initial value at the centre and, of course, with the values of \(\kappa\) and \(\epsilon\).
The equations that we have used in the flat metric approximation are valid only for
\[r\gg GM\quad,\quad r\gg|\gamma\beta|. \tag{106}\]
With \(R=\hat{R}L\), we have
\[r\gg GM\iff\hat{r}\gg\frac{\hat{R}^{3}}{2}. \tag{107}\]
For the Sun, \(\hat{R}^{3}_{\odot}\sim 10^{-8}\). One can trust our description of the static solution apart from a small region around the origin. Similarly, we have \(\gamma=-\epsilon\hat{R}^{3}L/3\), so we get
\[r\gg|\gamma\beta|\iff\hat{r}\gg\hat{R}^{3}\frac{\epsilon\beta}{3}\, \tag{108}\]
which excludes a very small region around the origin.
## IV Axio-dilaton cosmology
### Two fluid model
In the following, we will concentrate on models where two fluids are present, i.e. the baryons and Cold Dark Matter (CDM) with couplings determined by \(\kappa\) to the dilaton and \(\epsilon_{B,C}\) to the axion. We have seen that \(\kappa\) and \(\epsilon_{B}\), i.e. the coupling of the baryons to the axion, must be small to comply with the solar system tests unless the fields take special values at infinity. This translates into a choice of boundary conditions for the fields on cosmological scales now. We will see that this situation is not generic and that starting from initial conditions in the radiation era which do not perturb the background cosmology, the cosmological dynamics do not drive the fields to special values now.
The energy-momentum tensor of matter is taken to be
\[T^{B,C}_{\mu\nu}=\rho_{B,C}u^{B,C}_{\mu}u^{B,C}_{\nu} \tag{109}\]
corresponding to the baryons \(B\) and CDM \(C\). We will also assume here for definiteness
\[\mathcal{A}=-\epsilon_{C}T_{C}-\epsilon_{B}T_{B} \tag{110}\]
where \(T_{B,C}=-\rho_{B,C}\) are the baryonic and CDM densities respectively. The matter energy momentum tensors are not conserved but satisfy the non-conservation equations
\[\nabla_{\mu}T^{\mu\nu}_{i}=\frac{1}{2}(\kappa T_{i}\partial^{\nu}\varphi+ \epsilon_{i}T_{i}\partial^{\nu}a) \tag{111}\]
where \(i=B,C\). This implies that the total energy momentum tensor
\[T^{\mu\nu}=T^{\mu\nu}_{B}+T^{\mu\nu}_{C} \tag{112}\]
satisfies the non-conservation equation
\[\nabla_{\mu}T^{\mu\nu}=\frac{1}{2}(\kappa T\partial^{\nu}\varphi-\mathcal{A} \partial^{\nu}a) \tag{113}\]
which is a consequence of the Bianchi identity.
In this section, we will denote the time derivative along the particle lines, defined by \(u^{\mu}_{i}\), by \(d/d\tau_{i}=u^{\mu}_{i}\nabla_{\mu}\). We define the local Hubble rate as \(3h_{i}\equiv\nabla_{\mu}u^{\mu}_{i}\). Notice that the covariant derivatives are calculated in the Einstein frame, hence this is the local Hubble rate along the particle lines as measured using the geometry of the Einstein frame. The non-conservation equations for each species then become
\[\frac{d\rho_{i}}{d\tau_{i}}+3h_{i}\rho_{i}=-\frac{1}{2}(\kappa\dot{\varphi}+ \epsilon_{i}\dot{a})\rho_{i}. \tag{114}\]
We define the coupling function \(B_{i}\)
\[B_{i}\equiv e^{-\frac{1}{2}(\kappa\varphi+\epsilon_{i}a)} \tag{115}\]
when \(\epsilon_{i}=\text{const}\). We can now introduce a conserved density \(\rho_{\text{con,i}}\) in the Einstein frame such that
\[\rho_{i}=B_{i}\rho_{\text{con,i}} \tag{116}\]
and
\[\frac{d\rho_{\text{con,i}}}{d\tau_{i}}+3h_{i}\rho_{\text{con,i}}=0. \tag{117}\]
This is the conserved matter density in the axio-dilaton setting. Similarly we obtain the modified Newton's Law
\[\frac{du^{\mu}_{i}}{d\tau_{i}}+\frac{d\chi_{i}}{d\tau_{i}}u^{\mu}_{i}=-\partial ^{\mu}\chi_{i} \tag{118}\]
where
\[\chi_{i}\equiv\ln B_{i}. \tag{119}\]
For each species, we can define an effective metric
\[g^{i}_{\mu\nu}=B_{i}^{2}g_{\mu\nu} \tag{120}\]
which corresponds to the Jordan frame for the given species. As \(B_{B}\neq B_{C}\), we see that the Jordan frames for CDM and the baryons do not coincide.
We will apply this formalism first to the background cosmological case and then to the cosmological perturbations.
### Spatially flat cosmology
We are interested in the cosmology of a homogeneous and isotropic Universe in the presence of the axio-dilaton fields. The FLRW (Friedmann-Lemaitre-Robertson-Walker) metric reads
\[\begin{split} g_{\mu\nu}dx^{\mu}dx^{\nu}&=-dt^{2}+R ^{2}(t)\gamma_{ij}dx^{i}dx^{j}\\ &=-dt^{2}+R^{2}(t)(\frac{dr^{2}}{1-kr^{2}}+r^{2}d\Omega^{2})\end{split} \tag{121}\]
where \(R\) is the scale factor. We define as usual the Hubble rate as \(H=\dot{R}/R\). In the following we will focus on the spatially flat case \(k=0\).
We assume that the fields are irrelevant in the early Universe up until some redshift \(z_{i}\) which will typically be the matter-radiation equality. Indeed, in the radiation era the matter density is negligible and therefore the fields are hardly influenced by their matter couplings. As a result they remain constant if their initial velocities vanish. This also guarantees that the influence of the axio-dilaton system on Big Bang Nucleosynthesis (BBN) is minimal. We will choose the initial conditions for the fields \(\varphi\), \(a\) and their derivatives such that they vanish at \(z_{i}\). This does not correspond to the values of the fields for which screening takes place. On the other hand, this entails a vanishing energy density for the axion and the dilaton initially.
Each species has its own Jordan frame with the background metric
\[g^{i}_{\mu\nu}dx^{\mu}dx^{\nu}=-dt_{i}^{2}+R_{i}^{2}dx^{i}dx_{i} \tag{122}\]
where the cosmic time and the scale factor are defined by
\[dt_{i}=B_{i}dt,\ \ R_{i}=B_{i}R. \tag{123}\]
As CDM is not subject to the precision tests in the solar system, we will allow for large values of \(\epsilon_{C}\). Moreover we will consider the effects on the scale factor in the baryon Jordan frame, \(R_{B}=B_{B}R\), corresponding to the conservation of baryonic matter. We will use the convention that \(R_{B}=1\) today and identify the redshift as measured from atomic transition lines by
\[1+z=R_{B}^{-1}. \tag{124}\]
Although we will focus on the dynamics in the baryon frame, we will first study the equations of motion in the Einstein frame.
### The Klein-Gordon equations
We look for time-dependent solutions for the scalar fields \(\tau(t)\) and \(a(t)\) of the Klein-Gordon equations
\[\ddot{\tau}+3H\dot{\tau}-\frac{\dot{\tau}^{2}-\dot{a}^{2}}{\tau}-\frac{\kappa \tau\rho}{3M_{p}^{2}}=0\, \tag{125}\]
and
\[\ddot{a}+3H\dot{a}-\frac{2\dot{\tau}\dot{a}}{\tau}-\frac{\tau^{2}\mathcal{A}} {3M_{p}^{2}}=0. \tag{126}\]
Here the total matter density is \(\rho=\rho_{B}+\rho_{C}\) and \(\mathcal{A}=\epsilon_{B}\rho_{B}+\epsilon_{C}\rho_{C}\). Notice that these equations are valid both in the radiation and matter eras. In the radiation era, the source terms depend on the matter density only as the trace of the radiation energy momentum tensor vanishes. As a first approximation, we will neglect the source terms in the radiation era. This implies that \(\dot{a}\approx 0\) and \(\dot{\varphi}\approx 0\) and the fields hardly move during the radiation era. In our numerical analysis, we will be interested in the physics in the matter era and will fix the initial conditions at matter-radiation equality. A detailed analysis of the model from the end of inflation through the radiation to the matter era is left for future work. We will consider that the fields start evolving significantly when the matter era begins.
### The continuity equations
The energy density and pressure carried by the two scalar fields are given by
\[\rho_{f}=\frac{3M_{p}^{2}}{4}(\frac{\dot{\tau}^{2}+\dot{a}^{2}}{\tau^{2}})\, \tag{127}\]
and
\[p_{f}=\frac{3M_{p}^{2}}{4}(\frac{\dot{\tau}^{2}+\dot{a}^{2}}{\tau^{2}})\, \tag{128}\]
corresponding to a perfect fluid with equation of state \(\omega_{f}=1\). Using the Bianchi identity and the Einstein equation in the Einstein frame, we obtain that the total energy is conserved, i.e.
\[\dot{\rho}+\dot{\rho}_{f}+3H(\rho+\rho_{f}+P+P_{f})=0\, \tag{129}\]
where the pressure terms have been included. This leads to
\[\dot{\rho}+\dot{\rho}_{f}+3H(\rho+2\rho_{f})=0. \tag{130}\]
Using the identity
\[\dot{\rho}_{f}=\frac{3M_{p}^{2}}{4}\frac{2}{\tau^{3}}(\tau\dot{\tau}\ddot{\tau}+ \tau\dot{a}\ddot{a}-\dot{\tau}^{3}-\dot{\tau}\dot{a}^{2}) \tag{131}\]
and the expression of \(\ddot{\tau}\) and \(\ddot{a}\) in the Klein-Gordon equations Eq.(125) and Eq.(126), we get
\[\dot{\rho}_{f}+6H\rho_{f}=\frac{1}{2}\left(\kappa\rho\frac{\dot{\tau}}{\tau}+ \mathcal{A}\dot{a}\right). \tag{132}\]
Finally using the continuity equation we obtain
\[\dot{\rho}+3H\rho+\frac{1}{2}(\kappa\rho\dot{\varphi}+\mathcal{A}\dot{a})=0 \tag{133}\]
as we have argued previously. This is associated to the non-conservation equations per species
\[\dot{\rho}_{i}+3H\rho_{i}+\frac{1}{2}\rho_{i}(\kappa\dot{\varphi}+\epsilon_{i }\dot{a})=0. \tag{134}\]
This can be integrated exactly and leads to
\[\rho=\rho_{B}+\rho_{C}=B_{B}\frac{\rho_{0B}}{R^{3}}+B_{C}\frac{\rho_{0C}}{R^{ 3}}. \tag{135}\]
Notice that if we define
\[\rho=B\frac{\rho_{0}}{R^{3}} \tag{136}\]
then \(B=\lambda_{B}B_{B}+\lambda_{C}B_{C}\) where the fractions of baryons and CDM are \(\lambda_{i}=\frac{\rho_{0i}}{\rho_{0}}\) with \(\rho_{0}=\rho_{0B}+\rho_{0C}\). In practice we have \(\lambda_{B}=\frac{\Omega_{0B}}{\Omega_{0B}+\Omega_{0C}}\) where \(\Omega_{0B}\simeq 0.022\) and \(\Omega_{0C}\simeq 0.12\) from the Planck experiment [3]. Here and in the following we normalise the density \(\rho_{0}\) to the Planck value deduced from early-time physics, in contrast with the late-time effects on the matter density that we study below.
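For the Planck values quoted above, the fractions evaluate to roughly 15% baryons and 85% CDM (a quick arithmetic check):

```python
# Baryon and CDM fractions lambda_i = rho_0i/rho_0 from the quoted Planck values.
Omega_0B, Omega_0C = 0.022, 0.12
lam_B = Omega_0B/(Omega_0B + Omega_0C)
print(lam_B, 1.0 - lam_B)       # ~0.155 and ~0.845
```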
### The Friedmann equations
The Friedmann equation is obtained from the (00) component of the Einstein equation and becomes
\[H^{2}=\frac{\rho}{3M_{p}^{2}}+\frac{\rho_{f}}{3M_{p}^{2}}+\frac{\rho_{\Lambda }}{3M_{p}^{2}}\, \tag{137}\]
where we have introduced a cosmological constant associated to the energy density \(\rho_{\Lambda}\). The Raychaudhuri equation from the (\(ii\)) Einstein equation reads
\[\frac{\ddot{R}}{R}=-\frac{1}{6M_{p}^{2}}(\rho+\rho_{f}+\rho_{\Lambda}+3(P+P_{ f}+P_{\Lambda}))\, \tag{138}\]
leading to
\[\frac{\ddot{R}}{R}=-\frac{1}{6M_{p}^{2}}(\rho+4\rho_{f}-2\rho_{\Lambda}). \tag{139}\]
These equations are defined in the Einstein frame and will be transformed into the effective frame for baryons below.
We thus have the following system of differential equations for \(R\), \(\tau\) and \(a\) only:
\[\ddot{\tau}+3H\dot{\tau}-\frac{\dot{\tau}^{2}-\dot{a}^{2}}{\tau}- \kappa\frac{\rho(\tau,a)}{3M_{p}^{2}}\tau=0\, \tag{140}\] \[\ddot{a}+3H\dot{a}-\frac{2\dot{\tau}\dot{a}}{\tau}-\epsilon\frac {\rho(\tau,a)}{3M_{p}^{2}}\tau^{2}=0\,\] \[\dot{R}=H(\tau,a)R\,\]
with the Friedmann equation
\[H^{2}=\frac{\rho}{3M_{p}^{2}}+\frac{\rho_{f}}{3M_{p}^{2}}+\frac{ \rho_{\Lambda}}{3M_{p}^{2}}\, \tag{141}\] \[\rho=B_{B}\frac{\rho_{0B}}{R^{3}}+B_{C}\frac{\rho_{0C}}{R^{3}}, \ \rho_{f}=\frac{3M_{p}^{2}}{4}\left(\frac{\dot{\tau}^{2}+\dot{a}^{2}}{\tau^{2}} \right)\,\] \[\frac{d\ln B_{i}}{dt}=-\frac{1}{2}(\kappa\dot{\varphi}+\epsilon_ {i}\dot{a})\.\]
We will solve these equations numerically for different values of \(\epsilon\) and \(\kappa\).
### Dynamics in the effective baryon frame
In the baryon frame, the Hubble rate is
\[H_{B}\equiv\frac{d\ln R_{B}}{dt_{B}}=\frac{H}{B_{B}}+\frac{d\chi_{B}}{dt_{B}}. \tag{142}\]
The conserved baryon density in the baryon frame is simply
\[\tilde{\rho}_{B}\equiv\rho_{\text{con}B}=\frac{\rho_{0B}}{R_{B}^{3}}. \tag{143}\]
In this frame, CDM is not conserved but an observer fitting the evolution of the Universe with a prior that CDM is also conserved in the same frame as the baryons would identify the conserved CDM density as
\[\tilde{\rho}_{C}=\frac{\rho_{0C}}{R_{B}^{3}}, \tag{144}\]
and would write an effective Friedmann equation in the baryonic frame
\[H_{B}^{2}\equiv\frac{8\pi G_{B}}{3}\tilde{\rho}_{B}+\frac{8\pi G_{C}}{3} \tilde{\rho}_{C}+\frac{8\pi G_{N}}{3}\tilde{\rho}_{\Lambda}\, \tag{145}\]
where we have used \(8\pi G_{N}=1/M_{p}^{2}\). This allows one to identify the effective Newton constants \(G_{B,C}\) and the
dark energy component \(\tilde{\rho}_{\Lambda}\). None of these "constants" is actually constant in the baryon frame. In practice, the effective Newton constants are determined by
\[G_{B}=B_{B}^{2}G_{N},\ G_{C}=B_{B}B_{C}G_{N} \tag{146}\]
i.e. the two Newtonian constants evolve differently. The dark energy component is simply defined as the complement to the baryon and CDM contributions in the baryonic Friedmann equation.
As long as the fields do not evolve rapidly, i.e. at the beginning of the matter era we have \(H_{B}\approx H/B_{B}\). The dark energy component becomes
\[\tilde{\rho}_{\Lambda}=\frac{\rho_{f}+\rho_{\Lambda}}{B_{B}^{2}}. \tag{147}\]
In the late Universe, this identification is not valid anymore and a numerical integration of the equations of motion is necessary.
The same Friedmann equation in the baryon frame can be written as
\[\Omega_{B}^{B}+\Omega_{C}^{B}+\Omega_{\Lambda}^{B}=1 \tag{148}\]
where the energy fractions \(\Omega_{i}^{B}\), \(i=B,C,\Lambda\) are identified in the baryon frame and are such that
\[\Omega_{B,C}^{B}=\frac{8\pi G_{B,C}\tilde{\rho}_{B,C}}{3H_{B}^{2}},\ \Omega_{ \Lambda}^{B}=\frac{8\pi G_{N}\tilde{\rho}_{\Lambda}}{3H_{B}^{2}}. \tag{149}\]
The deviations of the energy fractions from \(\Lambda\)CDM are represented in Fig. 5.
### Deviations from \(\Lambda\)CDM and observational constraints
We will first define the effective gravitational constant that an observer would measure. For example, Big Bang Nucleosynthesis (BBN) puts constraints on the variation of such a \(G_{\rm eff}\) [33; 34]:
\[|\Delta G/G|\equiv\left|\frac{G_{\rm eff}^{\rm today}-G_{\rm eff}^{\rm BBN}}{ G_{\rm eff}^{\rm BBN}}\right|<0.4. \tag{150}\]
This assumes a Hubble evolution similar to that of \(\Lambda\)CDM in the matter-dominated era. In our model, since observations are made in the baryon frame, the corresponding \(G_{\rm eff}\) satisfies
\[H_{B}^{2}=\frac{8\pi}{3}G_{\rm eff}\bigg{[}\frac{c_{\rho}}{R_{B}^{3}}+\rho_{ \Lambda,B}\bigg{]}\, \tag{151}\]
for some \(c_{\rho}\). In order to have \(G_{\rm eff}(z_{i})=G_{N}\) initially, we set \(c_{\rho}=\rho_{0}\), as defined by Eq.(136), i.e. we normalise Newton's constant by using the Planck normalisation in the early Universe. Since the physics between BBN and \(z=z_{i}\) is the same as in the standard model, \(G_{\rm eff}=G_{N}\) until \(z=z_{i}\). Later, the relative variation of the effective coupling to baryons can be computed between \(z=z_{i}\) and today:
\[\frac{\Delta G_{B}}{G_{B}}\bigg{|}_{\rm BBN\to today}=\left.\frac{\Delta G_{B }}{G_{B}}\right|_{z_{i}\to\rm today}. \tag{152}\]
We can further constrain the possible deviations of the Hubble rate from the standard model by imposing that this should be less than the discrepancy appearing in the \(H_{0}\) tension. Indeed, there are two diverging determinations of the present time Hubble rate \(H_{0}\) with a relative difference of order 10% [28]. In the axio-dilaton theory, the fact that Newton's constant varies implies that the Hubble now differs from the corresponding Hubble rate in the standard model. We have normalised the Hubble rates to coincide at the beginning of the matter era. This motivates looking for parameters that satisfy
\[\left|\frac{\Delta H_{B}}{H_{B}}\right|_{\rm tension}\equiv\left|\frac{H_{B}( z=0)-H_{\rm SM}(z=0)}{H_{\rm SM}(z=0)}\right|<0.1\, \tag{153}\]
where \(H_{\rm SM}\) is the Hubble rate in the standard model. Another stringent constraint comes from BAO (Baryon Acoustic Oscillations) [24], which specify that the deviations of \(H_{B}(z)\) for \(0.2\lesssim z\lesssim 2.5\) should be less than around 3 percent [24]
\[\left|\frac{\Delta H_{B}}{H_{B}}\right|_{\rm BAO}\equiv\] \[\left|\frac{H_{B}(z\in[0.2,2.5])-H_{B}^{\rm SM}(z\in[0.2,2.5])}{H_ {B}^{\rm SM}(z\in[0.2,2.5])}\right|<0.03. \tag{154}\]
This implies that the differences between \(\Lambda\)-CDM and the axio-dilaton models must appear late in the evolution of the Universe. We will see that the BAO constraint is the most stringent one amongst the ones we have selected. Of course a much more precise numerical study is required to constrain the parameter space. This is left to future work.
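Schematically, these criteria can be evaluated on tabulated numerical solutions; the helper functions below are our own illustrative sketch, assuming arrays \(z\), \(H_{B}(z)\), \(G_{B}(z)\) and the standard-model history \(H_{\rm SM}(z)\) are available from an integration of the background equations:

```python
# Evaluate the constraint quantities of Eqs.(152)-(154) on tabulated solutions.
import numpy as np

def delta_G(G_B, z):                       # relative drift of G_B between z_i and today
    return abs(G_B[np.argmin(z)] - G_B[np.argmax(z)])/G_B[np.argmax(z)]

def delta_H0(H_B, H_SM, z):                # Eq.(153), evaluated at z = 0
    i0 = np.argmin(np.abs(z))
    return abs(H_B[i0] - H_SM[i0])/H_SM[i0]

def delta_H_bao(H_B, H_SM, z):             # Eq.(154), maximum deviation for 0.2 < z < 2.5
    mask = (z > 0.2) & (z < 2.5)
    return np.max(np.abs(H_B[mask] - H_SM[mask])/H_SM[mask])

# A parameter point is kept if delta_H0 < 0.1, delta_H_bao < 0.03, |Delta G/G| < 0.4, etc.
```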
We also consider the effective equation of state of dark energy. In GR with matter in addition to a fluid \(X\) with equation of state \(w\), the deceleration parameter
\[q_{0}:=-\left.\frac{\ddot{R}}{RH^{2}}\right|_{\rm today}. \tag{155}\]
is given by
\[q_{0}=\frac{1}{2}(\Omega_{m,0}+(1+3\omega)\Omega_{X,0}) \tag{156}\]
where \(\Omega_{i,0}=\rho_{i}(z=0)/3M_{p}^{2}H_{0}^{2}\). Observations thus give an estimate for \(w\) depending on \(\Omega_{m,0}\). For \(\Omega_{m,0}\sim 0.3\), which can be obtained independently, this leads to [35; 36]
\[w\sim-1\pm 0.1. \tag{157}\]
Recent constraints and future prospects can be found in [37]. We now define the effective equation of state
\[w_{\rm eff}\equiv\frac{1}{3}\frac{2q_{B,0}-\Omega_{m,0}}{\Omega_{ \Lambda,0}}-1\, \tag{158}\]
where:
\[q_{B,0}=-\left.\frac{\partial_{t_{B}}^{2}R_{B}}{R_{B}H_{B}^{2}} \right|_{\rm today}. \tag{159}\]
Taking \(\Omega_{m,0}\simeq 0.3\), we must impose the constraint
\[|\Delta w|\equiv|w_{\rm eff}+1|\lesssim 0.1. \tag{160}\]
We will use this bound in what follows as a guiding principle. We are not trying to give a precise fit to the data but an indication on the parameter space compatible with cosmology.
### Numerical integration
#### iv.8.1 Dimensionless equations
In the following, we will use \(L=\sqrt{3M_{P}^{2}/\rho_{0}}\) as the unit of time and length. Here \(\rho_{0}\) is defined via Eq.(135). We will obtain quantities as functions of redshift starting at matter-radiation equality. We define the system of dynamical equations with the number of efolds \(N\) and \(\varphi\) such that \(R=e^{N}\) and \(\tau=e^{\varphi}\). We get
\[\begin{array}{l}\ddot{\varphi}=-3\hat{H}\dot{\varphi}-\dot{a}^{2}e^{-2 \varphi}+\kappa\hat{\rho}\\ \ddot{a}=-3\hat{H}\dot{a}+2\dot{\varphi}\dot{a}+\epsilon\hat{\rho}e^{2\varphi} \\ \dot{N}=\hat{H}\end{array}\]
with
\[\begin{array}{l}\hat{H}^{2}=\hat{\rho}+\hat{\rho}_{f}+\hat{\rho}_{\Lambda},\quad\hat{\rho}=Be^{-3N},\\ \hat{\rho}_{f}=\frac{1}{4}(\dot{\varphi}^{2}+\dot{a}^{2}e^{-2\varphi})\,.\end{array}\]
Here \(\hat{\rho}_{\Lambda}\) corresponds to \(\rho_{\Lambda}/\rho_{0}\) where \(\rho_{\Lambda}\) is the density associated with the cosmological constant. In \(\Lambda\)CDM, \(\rho_{0}\) is also the value of the matter density today, so that \(\hat{\rho}_{\Lambda}=\Omega_{\Lambda,0}/\Omega_{m,0}\simeq 7/3\). It is convenient to work in conformal time \(\eta\), such that \(Rd\eta=dt\), implying that
\[\begin{array}{l}\varphi^{\prime\prime}=-2\hat{\mathcal{H}}\varphi^{\prime} -a^{\prime 2}e^{-2\varphi}+\kappa\tilde{\rho}\\ a^{\prime\prime}=-2\hat{\mathcal{H}}a^{\prime}+2\varphi^{\prime}a^{\prime}+ \epsilon\tilde{\rho}e^{2\varphi}\\ N^{\prime}=\hat{\mathcal{H}}\end{array}\]
with
\[\hat{\mathcal{H}}^{2}=\tilde{\rho}+\tilde{\rho}_{f}+\tilde{\rho}_{\Lambda}\, \tag{163}\]
where the derivatives are now with respect to \(\eta\). We have rescaled all the densities as \(\tilde{\rho}=e^{2N}\hat{\rho}\) and similarly for the scalar and dark energy parts.
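For illustration, the cosmic-time system above can be integrated directly with a standard ODE solver. The sketch below is ours and makes simplifying assumptions: a single matter species with effective axionic coupling \(\epsilon\), the conformal factor set to unity (\(B\simeq 1\), a good approximation for the small couplings used below), and vanishing, static fields at \(z_{i}=3400\) as initial conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, eps, rho_L = 1e-3, 1e-3, 7.0 / 3.0   # couplings and hat-rho_Lambda
z_i = 3400.0                                # starting redshift (~ equality)

def rhs(t, y):
    # y = (phi, dphi, a, da, N); time in units of L = sqrt(3 Mp^2 / rho_0)
    phi, dphi, a, da, N = y
    rho = np.exp(-3.0 * N)                              # hat-rho with B ~ 1
    rho_f = 0.25 * (dphi**2 + da**2 * np.exp(-2.0 * phi))
    H = np.sqrt(rho + rho_f + rho_L)                    # Friedmann constraint
    ddphi = -3.0 * H * dphi - da**2 * np.exp(-2.0 * phi) + kappa * rho
    dda = -3.0 * H * da + 2.0 * dphi * da + eps * rho * np.exp(2.0 * phi)
    return [dphi, ddphi, da, dda, H]

y0 = [0.0, 0.0, 0.0, 0.0, -np.log(1.0 + z_i)]   # phi = a = 0, static, N(z_i)
today = lambda t, y: y[4]                       # stop when N = 0 (R = 1 today)
today.terminal = True

sol = solve_ivp(rhs, (0.0, 5.0), y0, events=today, rtol=1e-8, dense_output=True)
```

The redshift then follows from \(1+z=e^{-N}\), and observables such as \(G_{\rm eff}\) or \(w_{\rm eff}\) can be reconstructed from the stored solution.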
#### iv.8.2 Results
There are four main parameters: \(\kappa\), \(\epsilon_{B}\), \(\epsilon_{C}\) and \(\hat{\rho}_{\Lambda}\). Following our discussion about screening in §III.4 and our identification of the effective metric in §IV.1, we will first take \(\kappa\) and \(\epsilon_{B}\) small. Taking \(\kappa=\epsilon_{B}=10^{-3}\) turns out to be enough to satisfy solar system constraints. Taking one of them to be one order of magnitude higher leads to a violation of the cosmological constraints when \(\epsilon_{C}\) is large enough. Numerically we study the cosmological evolution for a large range of values of \(\epsilon_{C}\) and \(\hat{\rho}_{\Lambda}\). We always start the cosmological evolution at \(z_{i}=3400\approx z_{\rm eq}\).
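The scan over \((\epsilon_{C},\hat{\rho}_{\Lambda})\) can be organised as a simple grid search. The sketch below is purely illustrative: `run_background` stands for a user-supplied wrapper around the integration above that returns the deviations of the observables for one parameter point, and the grid boundaries are our own choices.

```python
import numpy as np

def scan_allowed(run_background, eps_C_grid, rho_L_grid):
    # run_background(eps_C, rho_L) -> (dH0, dH_bao, dw): relative deviations
    # of H0, of H in the BAO window, and of w_eff from their LCDM values
    allowed = np.zeros((len(eps_C_grid), len(rho_L_grid)), dtype=bool)
    for i, eps_C in enumerate(eps_C_grid):
        for j, rho_L in enumerate(rho_L_grid):
            dH0, dH_bao, dw = run_background(eps_C, rho_L)
            allowed[i, j] = (abs(dH0) < 0.1 and abs(dH_bao) < 0.03
                             and abs(dw) < 0.1)
    return allowed
```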
A first picture of the deviations from \(\Lambda\)CDM can be obtained by focusing on the cosmological tests defined above (Newton's constant, the equation of state, and the Hubble parameter today and in \(z\in[0.2,2.5]\) relative to that of the standard model) and their consequences as viewed in the \(\epsilon_{C}-\hat{\rho}_{\Lambda}\) plane. This is shown in Fig.2. We show their values only in the region where all four of them satisfy the observational constraints. It turns out that the constraint from BAO is always the strongest. We illustrate the cosmological evolution using five sets of parameters in Figs.4, 5 and 6. Standard model quantities are computed analytically from the \(\Lambda\)CDM solutions in the matter-dominated era with \(\Omega_{\Lambda,0}/\Omega_{m,0}=7/3\).
We recover that the smallest deviations from \(\Lambda\)CDM are for \(\epsilon_{C}=0\), \(\hat{\rho}_{\Lambda}\simeq 7/3\). We observe that deviations of \(H_{0}\) depend mostly on \(\hat{\rho}_{\Lambda}\) while deviations of \(G\) are essentially given by \(\epsilon_{C}\). Additionally, the deviations of \(H_{0}\) can be both positive and negative, the sign depending on the sign of the deviation of \(\hat{\rho}_{\Lambda}\) compared to \(\approx 7/3\). On the other hand, there are only negative deviations of \(G\). As we observe in Fig.4, \(\epsilon a\) increases and so \(G_{B}\) decreases.
For both \(H_{0}\) and \(w_{\rm eff}\), the deviations due to \(\epsilon_{C}\) tend to be compensated by deviations of \(\hat{\rho}_{\Lambda}\). This can be understood from the fact that the fields act as a fluid of equation of state \(w_{f}=1\) opposite to that of the cosmological constant \(w_{\Lambda}=-1\). They have opposite effects on the cosmic acceleration.
Moving on to the dynamical curves, we see in Fig.4 the evolution of the fields. The axion \(a\) increases, as expected since the source term in its Klein-Gordon equation (126) carries a factor \(\epsilon>0\) and initially dominates. On the other hand, \(\varphi\) slightly increases at first but eventually decreases by a much larger amount. This is also expected, as the source term in its Klein-Gordon equation has a factor \(\kappa>0\) which initially dominates, while later the source term is dominated by the axion contribution proportional to \(-\dot{a}^{2}\). The deviation from constant and vanishing fields increases with \(\epsilon\), as expected.
Fig.5 shows the evolution of the energy content of the Universe in both the Einstein and baryon frames. As the couplings \(\kappa\) and \(\epsilon_{B}\) are small, the difference between the Einstein and the baryon frame quantities is negligible. In the matter-dominated era we have a slight increase in the field density and a corresponding decrease of the matter density. In the very late Universe close to a vanishing redshift the proportion of the cosmological
constant increases and we get the usual values \(0.3\) and \(0.7\) for matter and \(\Lambda\) with some deviations of the order of \(0.01\).
As a result we can see that generically the matter contents of the Universe are modified. First of all, the axion and dilaton energy densities evolve from being negligible initially to a long phase in the matter era where they remain nearly constant, before dropping to lower values in the last few efolds of the Universe. This has important consequences as the dynamics of the axio-dilaton system, and in particular the variation of the conformal factors \(B_{B,C}\), imply that the matter fractions of the Universe deviate from their \(\Lambda\)CDM values. We also notice that the deviation can be negative by a few percent. This is important as it hampers the growth of structure: the extra growth due to the attractive scalar forces is compensated by the lower amount of matter in the Universe. This results in a reduced growth of structure in these models.
Fig.6 shows the evolution of the cosmological quantities of interest. In \(\Lambda\)CDM, both \(\Delta H_{0}/H_{0}\) and \(\Delta G/G\) vanish. We observe deviations that get stronger as the parameters move away from the light regions and into the darker ones of Fig.2. Both also present a maximal negative deviation around \(z\sim 2\). At smaller redshifts the deviation shrinks. This can be attributed to the effect of the cosmological constant which becomes non-negligible at late times. In the case of the red, orange and purple curves, the deviation crosses \(0\) and becomes positive. These curves have in common that their values of the cosmological constant are larger than the standard model value: \(\hat{\rho}_{\Lambda}\gtrsim 7/3\). Finally, notice that the evolution of \(G_{\rm eff}\) shows a negative deviation from Newton's constant in GR. This has two origins which can be traced to its definition (151). First of all, as the effective Newton constant is normalised with the Planck normalisation for \(\rho_{0}\), the fact that in Fig. 5 there is less matter in the recent Universe than in \(\Lambda\)CDM implies that \(G_{\rm eff}/G_{N}\) should be less than unity. Another important effect is that the conformal factors \(B_{B,C}\) are less than unity too, implying a reduction of \(G_{\rm eff}\) compared to \(G_{N}\). These effects could have been compensated by the extra pull arising from the scalar forces, but this is not the case.
Fig.6 also shows \(w_{\rm eff}\). It is defined by generalizing (158)
\[w_{\rm eff}(z):=\frac{1}{3}\biggl{(}\frac{2q_{B}(z)-\Omega_{m,0}}{\Omega_{\Lambda,0}}-1\biggr{)}. \tag{164}\]
This can simply be seen as a rescaling of the deceleration parameter \(q(z)\). In \(\Lambda\)CDM it is easy to see that \(w_{\rm eff}\) goes from \(0\) to \(-1\) (and equivalently, \(q\) from \(0.5\) to \(-0.55\)). Fig.6 shows that for parameters consistent with observation the deviations are not drastic.
#### iv.2.3 Cosmological constraint on \(\kappa\)
We now concentrate on another scenario where the coupling of the dilaton is relaxed from its solar system bound. This is of interest if the dilaton is screened in the solar system and partially on cosmological scales. The largest dilaton coupling allowed by cosmological data is smaller than the supergravity motivated value \(\kappa=1\).
We fix \(\epsilon_{B}=10^{-3}\), and we look for the highest value of \(\kappa\) such that there are some values of \(\epsilon_{C}\) and \(\hat{\rho}_{\Lambda}\) for which the observables are within the observational bounds. We use a precision of the order \(10^{-3}\) relative to the order of magnitude of the parameters. We find that the maximal order of magnitude of \(\kappa\) is \(0.1\). More precisely, if we also fix \(\epsilon_{C}\) and try to find some valid values of \(\hat{\rho}_{\Lambda}\), we find that for \(\epsilon_{C}=0.1\), \(\kappa_{\rm lim}=0.110\); and for \(\epsilon_{C}=0.001\), \(\kappa_{\rm lim}=0.124\).

Figure 5: Evolution of the energy contents of the Universe in the Einstein frame (left) and baryon frame (top three figures on the right) as a function of the redshift of the baryon frame. The \(\Lambda\)CDM case is represented in the bottom right figure. Deviations from \(\Lambda\)CDM are represented in the other figures for \(\kappa=10^{-3}\), \(\epsilon_{B}=10^{-3}\) and values of \(\epsilon_{C}\) and \(\hat{\rho}_{\Lambda}\) given in Fig.3.
This is illustrated in Fig.7. Since BAO is the strongest constraint, we look only at the deviations of the Hubble rate in the BAO interval. We can see that for favourable values of the other parameters (\(\epsilon_{C}=\epsilon_{B}=10^{-3}\), \(\hat{\rho}_{\Lambda}=7/3\)), \(\kappa\) is allowed at least up to \(0.12\). But for \(\kappa=0.13\) the BAO constraint is no longer satisfied. This is of course much less than unity and signals that the dilaton must be cosmologically screened.
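The determination of \(\kappa_{\rm lim}\) amounts to a one-dimensional search at the quoted precision. A simple bisection, sketched below, is sufficient; the `passes` callable is a placeholder of ours that should run the background evolution and check the observational bounds, and the sketch assumes the allowed region is an interval \([0,\kappa_{\rm lim}]\).

```python
def kappa_limit(passes, lo=0.0, hi=1.0, tol=1e-3):
    # bisect for the largest kappa accepted by the observational bounds;
    # passes(kappa) must return True when all constraints are satisfied
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if passes(mid):
            lo = mid
        else:
            hi = mid
    return lo
```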
## V Cosmological perturbations
In this section, we study the growth of perturbations for the axio-dilaton models in the matter era in the sub-horizon and quasi-static approximation [38]. We will obtain the equation governing the evolution of the density contrast \(\delta\rho/\rho\)[39].
### Cosmological perturbations
We focus on small perturbations to the background solutions. We are interested in structure formation and will evaluate the time evolution of the baryon and CDM density contrasts simultaneously [40]. We only consider the scalar modes of the perturbation of the metric. Using the Newtonian gauge in the Einstein frame we have
\[g_{\mu\nu}dx^{\mu}dx^{\nu}=R^{2}(\eta)(-(1+2\Phi_{N})d\eta^{2}+(1-2\Phi_{N}) \gamma_{ij}dx^{i}dx^{j}) \tag{165}\]
where \(\gamma_{ij}=\delta_{ij}\). The cosmological perturbations are defined by
\[\rho_{i}=\overline{\rho}_{i}+\delta\rho_{i}=\overline{\rho}_{i}(1+\delta_{i}),\ u_{i}^{\mu}=\frac{1}{R}(1+\delta v^{0}\,,\,\vec{v}_{i}) \tag{166}\]
for each species. We also perturb the fields and the axion-matter coupling:
\[\varphi=\overline{\varphi}+\delta\varphi\, \tag{167}\]
\[a=\overline{a}+\delta a\, \tag{168}\]
\[\mathcal{A}=\overline{\mathcal{A}}+\delta\mathcal{A}. \tag{169}\]
All the symbols with a bar denote background values. The velocities \(u_{i}^{\mu}\) of the two fluids are given by
\[u_{i}^{\mu}=\frac{1}{R}(1-\Phi_{N}\,,\,\vec{v}_{i}). \tag{170}\]
at linear order. The perturbed Newton's law becomes for each species
\[\partial_{\eta}\vec{v}_{i}+(\mathcal{H}+\partial_{\eta}\overline{\chi}_{i}) \vec{v}_{i}=-\frac{\vec{\nabla}}{R^{2}}(\Phi_{N}+\delta\chi_{i}). \tag{171}\]
We recognise the scalar force in Euler's equation due to the interaction with dark matter and the friction term whose origin is the modified Hubble rate \(\mathcal{H}_{i}=\mathcal{H}+\partial_{\eta}\overline{\chi}_{i}\) in conformal time and in the Einstein frame.
### Density contrast of sub-horizon modes in quasi-static regime
We focus on sub-horizon modes such that \(k\gg\mathcal{H}\) and in the quasi-static regime \(\partial_{\eta}\sim\mathcal{H}\). Using the Einstein equation
\[\mathcal{R}_{00}-\frac{3}{2\tau^{2}}(\partial_{\eta}\tau\partial_{\eta}\tau+ \partial_{\eta}a\partial_{\eta}a)-\frac{1}{M_{p}^{2}}(T_{00}-\frac{1}{2}Tg_{00} )=0\, \tag{172}\]
and the curvature perturbation
\[\delta\mathcal{R}_{00}=3\mathcal{H}\Phi_{N}^{\prime}+\Delta\Phi_{N}\approx \Delta\Phi_{N}\, \tag{173}\]
which reduces to the Laplacian of Newton's potential as \(3\mathcal{H}\Phi_{N}^{\prime}\sim\mathcal{H}^{2}\Phi_{N}\ll k^{2}\Phi_{N} \sim\Delta\Phi_{N}\), the Poisson equation then becomes
\[\Delta\Phi_{N}=\frac{1}{2M_{p}^{2}}R^{2}(\bar{\rho}_{B}\delta_{B}+\bar{\rho}_{ C}\delta_{C}). \tag{174}\]
The perturbed Klein-Gordon equation for \(\varphi\) is
\[\Delta\delta\varphi=-\frac{\kappa}{3M_{p}^{2}}R^{2}(\bar{\rho}_{B}\delta_{B}+ \bar{\rho}_{C}\delta_{C}). \tag{175}\]
Its structure is similar to the Poisson equation. Similarly we obtain for the axion field
\[\Delta\delta a=-\frac{e^{2\overline{\varphi}}}{3M_{p}^{2}}R^{2}(\bar{\rho}_{ B}\epsilon_{B}\delta_{B}+\bar{\rho}_{C}\epsilon_{C}\delta_{C}) \tag{176}\]
where we have systematically used the sub-horizon and quasi-static approximations.

Figure 8: Growth rate for a few allowed parameters and in \(\Lambda\)CDM, shown for redshifts in the baryon frame \(z\leq 3\).

Figure 9: Growth rate at \(z\leq 3\) in the baryon frame for different values of \(\kappa\) and \(\epsilon_{C}=\epsilon_{B}=10^{-3}\), \(\hat{\rho}_{\Lambda}=7/3\).

The conservation equation for each species implies that \(\rho_{i}=B_{i}\rho_{i{\rm cons}}\) where the conserved density satisfies
\[\frac{d\rho_{i{\rm cons}}}{d\tau_{i}}+3h_{i}\rho_{i{\rm cons}}=0. \tag{177}\]
In the subhorizon limit, we can identify \(\delta_{i}\simeq\delta_{i{\rm cons}}\). This is also the density contrast in the baryon frame as the contributions of both \(\delta a\) and \(\delta\varphi\) to the change of frame are negligible in the subhorizon limit. This implies that the perturbed conservation equation becomes
\[\delta^{\prime}_{i}=-\vec{\nabla}.\vec{v}_{i}. \tag{178}\]
Now we take the divergence of Eq.(171) and use Eq.(178) to obtain
\[-\delta^{\prime\prime}_{i}-(\mathcal{H}+\overline{\chi}^{\prime}_{i})\delta^{ \prime}_{i}=-\frac{\Delta}{R^{2}}(\Phi_{N}+\delta\chi_{i}). \tag{179}\]
Using the Laplacians from Eq.(174), Eq.(175), and Eq.(176) we finally deduce the growth equation for each species in the subhorizon limit
\[\delta^{\prime\prime}_{i}+(\mathcal{H}+\overline{\chi}^{\prime}_{i})\,\delta^{\prime}_{i}-\frac{3}{2}\Omega_{B}\mathcal{H}^{2}\left(1+\frac{\kappa^{2}+e^{2\overline{\varphi}}\epsilon_{B}^{2}}{3}\right)\delta_{B}-\frac{3}{2}\Omega_{C}\mathcal{H}^{2}\left(1+\frac{\kappa^{2}+e^{2\overline{\varphi}}\epsilon_{C}^{2}}{3}\right)\delta_{C}=0 \tag{180}\]
in terms of the matter fraction \(\Omega_{i}\) in the Einstein frame. We find that the deviations from the standard model have two origins. First there is the friction term depending on \(\mathcal{H}_{i}=\mathcal{H}+\overline{\chi}^{\prime}_{i}\), which is specific to each species as the two fluids couple differently to the axion. There is also a modification of Newton's constant for each species depending on the two effective couplings
\[G^{i}_{N}=(1+2Q_{i}^{2})G_{N} \tag{181}\]
such that the perturbation equations become
\[\delta^{\prime\prime}_{i}+(\mathcal{H}+\overline{\chi}^{\prime}_{i})\delta^{\prime}_{i}-\frac{3}{2}\Omega_{B}\mathcal{H}^{2}(1+2Q_{B}^{2})\delta_{B}-\frac{3}{2}\Omega_{C}\mathcal{H}^{2}(1+2Q_{C}^{2})\delta_{C}=0\,. \tag{182}\]
We have defined the effective couplings
\[Q_{i}^{2}=\frac{\kappa^{2}+e^{2\overline{\varphi}}\epsilon_{i}^{2}}{6} \tag{183}\]
which parameterise the deviations from \(\Lambda\)CDM. We recover that in the absence of the \(\epsilon_{i}\)'s, gravity is modified by a factor \(\kappa^{2}/6\), as in the static regime around a compact object. In the following, we will solve these equations numerically as a way of investigating the growth of structure for the baryons and CDM.
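As an illustration of how this system can be evolved, the sketch below (ours, with illustrative names) integrates the two coupled growth equations in conformal time; `bg` is a user-supplied callable returning the interpolated background quantities \((\mathcal{H},\Omega_{B},\Omega_{C},\overline{\chi}^{\prime}_{B},\overline{\chi}^{\prime}_{C},\overline{\varphi})\) from the background run.

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth_rhs(eta, y, bg, kappa, eps_B, eps_C):
    # y = (delta_B, delta_B', delta_C, delta_C') in conformal time
    dB, dBp, dC, dCp = y
    H, OB, OC, chipB, chipC, phib = bg(eta)
    Q2B = (kappa**2 + np.exp(2.0 * phib) * eps_B**2) / 6.0   # Eq.(183)
    Q2C = (kappa**2 + np.exp(2.0 * phib) * eps_C**2) / 6.0
    src = 1.5 * H**2 * (OB * (1 + 2*Q2B) * dB + OC * (1 + 2*Q2C) * dC)
    return [dBp, src - (H + chipB) * dBp, dCp, src - (H + chipC) * dCp]

def grow(eta_grid, bg, kappa=1e-3, eps_B=1e-3, eps_C=1e-3, d0=1e-3):
    # small initial contrasts with delta'/delta ~ curly-H, as chosen in the text
    H0 = bg(eta_grid[0])[0]
    y0 = [d0, d0 * H0, d0, d0 * H0]
    return solve_ivp(growth_rhs, (eta_grid[0], eta_grid[-1]), y0,
                     t_eval=eta_grid, rtol=1e-8,
                     args=(bg, kappa, eps_B, eps_C))
```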
### Growth rate
We now focus on the growth rate for small redshifts [41] as defined by
\[f_{i}=\frac{\mathrm{d}\,\ln\delta_{i}}{\mathrm{d}\,\ln R_{B}} \tag{184}\]
where the redshift is deduced from the baryonic scale factor. We represent the growth rates in Fig.8. We choose as initial conditions \(\delta_{i}(z_{ini})\ll 1\) and \(\delta^{\prime}_{i}(z_{ini})/\delta_{i}(z_{ini})\sim\mathcal{H}_{ini}\). We notice that the growth can be either enhanced or suppressed at small redshifts depending on \(\epsilon_{C}\) and \(\hat{\rho}_{\Lambda}\). On the other hand, the maximal deviation from \(\Lambda\)CDM is at most five percent. We have also represented in Fig.9 the growth rates when \(\kappa\) is varied and both \(\epsilon_{B,C}\) are small. Notice that as \(\kappa\) is increased, the growth rate at small redshift becomes smaller and smaller. This is also the case when \(\kappa\) is fixed and \(\epsilon_{C}\) is increased. As the effective Newton constants should increase the growth thanks to the presence of fifth forces between particles, we conclude that the background evolution and the effective friction have a drastic effect on the growth of structure. This can be observed in Fig.5 where the matter density in the late Universe decreases compared to \(\Lambda\)CDM, as in [29] where a similar effect was obtained and used to alleviate the \(\sigma_{8}\) tension. It would certainly be interesting to see if this trend is also present in the non-linear regime and could have some consequences for the \(S_{8}\) tension, where less matter clustering is observed at late times than inferred in the \(\Lambda\)CDM scenario [42] from the Planck data [3]. The analysis of the \(S_{8}\) tension in this scenario is left for future work.
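In practice, once the contrasts are tabulated, the growth rate of Eq.(184) can be evaluated by numerical differentiation, for instance with the short helper below (our own illustrative sketch):

```python
import numpy as np

def growth_rate(delta, R_B):
    # f = d ln(delta) / d ln(R_B), Eq.(184), from tabulated solutions
    return np.gradient(np.log(delta), np.log(R_B))
```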
Finally let us remark that we have not taken into account an important effect which would result from both the fact that the effective equation of state \(w_{\rm eff}\) (see Fig. 6) is not strictly equal to zero deep into the matter era for \(z\gtrsim 2\) and that the perturbations do not behave like in the Einstein-de Sitter Universe with \(f\neq 1\) (see Fig. 8). This would imply that the Newtonian potential \(\Phi_{N}\) in (174) is not strictly constant in the matter era. This could lead to a large Integrated Sachs Wolfe effect (ISW) which is tightly constrained and could appear in the galaxy counts versus the CMB (Cosmic Microwave Background) cross-correlations [43; 44; 45]. This will certainly restrict the available parameter space and constrain the possible deviations of the growth factor from \(\Lambda\)CDM. A detailed study of this effect is left for future work.
## VI Conclusion
The axio-dilaton model has a clear origin in string theory. We have focused on the screening mechanism introduced in [13] for a constant coupling of the axion to matter and explicitly shown how it can only be effective when the field values at infinity are tuned to specific values. These values should be determined cosmologically. We have studied the background cosmology of
these models and shown that the cosmological values do not correspond to the tuned values generically.
In the absence of explicit screening for the axio-dilaton system, which may require introducing non-linear couplings to matter and/or new fields [46] whose dynamics would drive the couplings of the dilaton and the axion to small values in the solar system and larger values cosmologically, we have employed a simple alternative and considered two scenarios. In the first one, the couplings of the dilaton and the axion to baryons are taken to satisfy the solar system constraints and remain identical to these values on large scales. Only the coupling to cold dark matter is allowed to take much larger values. In this case, we find that the coupling to cold dark matter must be bounded. It turns out that the constraints from Baryon Acoustic Oscillations (BAO) at small redshift are the tightest, and the present-day Hubble rate does not deviate from the Planck-normalised one by more than three percent. This is not enough to account for the \(H_{0}\) tension, which lies at the ten percent level. Similarly, the growth of structure is affected at the five percent level. Interestingly, in these models growth is not always enhanced and effectively a decrease in the growth rate is observed for a large part of the parameter space of the model. This follows from the fact that the growth increase due to the scalar forces is compensated by the decrease of the matter density. This could have some relevance to the \(\sigma_{8}\) tension. We hope to come back to this suggestion in the near future. We also consider the case where the axion does not couple significantly to matter and the dilaton couples with a strength \(\kappa\) reduced from the string-theory-motivated example. We find that \(\kappa\) cannot take values of order unity and must be bounded around 0.1. This entails that the dilaton must not only be screened locally in the solar system, but also cosmologically.
Of course, our examples can be modified and the resulting physics could be very different. For instance, the couplings to matter of both the axion and the dilaton could become non-linear and therefore lead to screening mechanisms akin to the ones of the symmetron model. Another possibility would be that other fields could relax to values whereby the couplings of the dilaton and the axion would become very small in the solar system and small cosmologically. The construction of these models is left for future work.
Phenomenologically, the scenarios we have introduced fall within the category of late time dark energy models where the evolution of the fields at small redshift would modify both the background cosmology and the growth of structure. As expected, we find that the BAO bound entails a tight constraint on both the possible deviations of the present Hubble rate from its Planck value and the growth factor from its \(\Lambda\)CDM counterpart. We notice that the allowed deviations of the growth factor could reach a few percent and therefore may become detectable by future large scale surveys. The detailed study of the phenomenology of these models is left for future work.
**Acknowledgments:** We would like to thank E. Lindner and L. Pogosian for fruitful comments. We would also like to thank A. T. Ortiz for running some of the long numerical computations. A.O. would like to thank IPhT for funding during his internship, as well as the physics department of the Ecole Normale Superieure for its scholarship.
## Appendix A Validity of the flat background approximation
We consider here the conditions for the flat background approximation used in §III to hold. We assume the exterior solution of §III.1 and find in which regime the terms neglected by the approximation are indeed negligible. The metric of §III can be written:
\[ds^{2}=-(1+2\Phi)dt^{2}+(1-2\Psi)(dr^{2}+r^{2}d\Omega^{2}). \tag{101}\]
The Christoffel symbols, the Ricci and Einstein tensors for this metric can be found in e.g. [47] §7. The \((tt)\) and \((rr)\) equations from Eq.(13) are
\[\Delta\Phi=\frac{\rho}{2M_{p}^{2}}\, \tag{102}\]
\[(\Delta-\partial_{r}^{2})(\Phi-\Psi)=\frac{3}{4}\frac{(\tau^{\prime})^{2}+(a ^{\prime})^{2}}{\tau^{2}}\, \tag{103}\]
where \(\Delta\) is the Laplacian. The exterior solution gives
\[\frac{(\tau^{\prime})^{2}+(a^{\prime})^{2}}{\tau^{2}}=\frac{\gamma^{2}\beta^ {2}}{r^{4}}. \tag{104}\]
Using \((\Delta-\partial_{r}^{2})f=\frac{2}{r}\partial_{r}f\) for a radial function \(f(r)\), integration then gives
\[\Phi=-\frac{GM}{r}\qquad,\qquad\Psi-\Phi=\frac{3}{16}\frac{\gamma^{2}\beta^{2 }}{r^{2}}. \tag{105}\]
The deviations from the flat metric are therefore negligible for
\[|\Psi|\ll 1\qquad,\qquad\Phi\simeq\Psi. \tag{106}\]
This is verified in the regime
\[r\gg GM\qquad,\qquad r\gg|\gamma\beta|. \tag{107}\]
|
2303.14672 | Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs | This paper aims to develop an accurate 3D geometry representation of
satellite images using satellite-ground image pairs. Our focus is on the
challenging problem of 3D-aware ground-views synthesis from a satellite image.
We draw inspiration from the density field representation used in volumetric
neural rendering and propose a new approach, called Sat2Density. Our method
utilizes the properties of ground-view panoramas for the sky and non-sky
regions to learn faithful density fields of 3D scenes in a geometric
perspective. Unlike other methods that require extra depth information during
training, our Sat2Density can automatically learn accurate and faithful 3D
geometry via density representation without depth supervision. This advancement
significantly improves the ground-view panorama synthesis task. Additionally,
our study provides a new geometric perspective to understand the relationship
between satellite and ground-view images in 3D space. | Ming Qian, Jincheng Xiong, Gui-Song Xia, Nan Xue | 2023-03-26T10:15:33Z | http://arxiv.org/abs/2303.14672v2 | # Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs
###### Abstract
This paper aims to develop an accurate 3D geometry representation of satellite images using satellite-ground image pairs. Our focus is on the challenging problem of generating ground-view panoramas from satellite images. We draw inspiration from the density field representation used in volumetric neural rendering and propose a new approach, called Sat2Density. Our method utilizes the properties of ground-view panoramas for the sky and non-sky regions to learn faithful density fields of 3D scenes in a geometric perspective. Unlike other methods that require extra 3D information during training, our Sat2Density can automatically learn the accurate and faithful 3D geometry via density representation from 2D-only supervision. This advancement significantly improves the ground-view panorama synthesis task. Additionally, our study provides a new geometric perspective to understand the relationship between satellite and ground-view images in 3D space. The project website is available at [https://sat2density.github.io](https://sat2density.github.io).
## 1 Introduction
The emergence of satellite imagery has significantly enhanced our daily lives by providing easy access to a comprehensive view of the planet. This bird's-eye view offers valuable information that compensates for the limited perspective of ground-level observations by humans. However, what specific information does satellite imagery provide, and why is it so crucial? In this paper, we propose that the most critical insights come from the analysis of the geometry, topology, and geography of cross-view observations captured by paired satellite and ground-level images. Building on this hypothesis, we aim to address the challenging problem of synthesizing ground-level images from paired satellite and ground-level imagery by leveraging density representations of 3D scenes.
The challenge of generating ground-level images from satellite imagery is tackled by leveraging massive datasets containing both satellite images and corresponding ground-level panoramas captured at the same geographical coordinates. However, the drastic differences in viewpoint between the two types of images, combined with the limited overlap of visual features and large appearance variations, create a highly complex and ill-posed learning problem. To address this challenge, researchers have extensively studied the use of conditionally generative adversarial networks, which leverage high-level semantics and contextual information in a generative way [19, 20, 33, 26, 12]. However, since the contextual information used is typically at the im
Figure 1: Ground view synthesis from our Sat2Density. (a) Given a satellite image, Sat2Density learns geometry in the satellite scene in an explicit volume density, (b) rendered results from the points along the red trajectory curve. (c) rendered depth by volume rendering corresponding to the synthesized image in the same row. _The video along the curve is on the project page._
age level, the 3D information can only be marginally inferred during training, often resulting in unsatisfactory synthesis results.
Recent studies [22, 11] have suggested that accurate 3D scene geometry plays a crucial role in generating high-quality ground-view images. With extra depth supervision, Sat2Video [11] introduced a method to synthesize spatial-temporal ground-view video frames along a camera trajectory, rather than a single panorama from the center viewpoint of the satellite image. Additionally, Shi [22] demonstrated that coarse satellite depth maps can be learned from paired data through multi-plane image representation using a novel projection model between the satellite and ground viewpoints. Building on these insights, we aim to investigate whether it is possible to achieve even more accurate 3D geometry using the vast collection of satellite-ground image pairs.
Our study is motivated by the latest developments in the neural radiance field (NeRF) [16], which has shown promising results in novel view synthesis. Benefiting from the flexibility of density field in volumetric rendering [8], faithful 3D geometry can be learned from a large number of posed images. Therefore, we adopt density fields as the representation and focus on learning accurate density fields from paired satellite-ground image pairs. More precisely, in this paper, we present a novel approach called Sat2Density, which involves two convolutional encode-decoder networks: DensityNet and RenderNet. The DensityNet receives satellite images as input to represent the density field in an explicit grid, which plays a crucial role in producing ground-view panorama images using the RenderNet. With such a straightforward network design, we delve into the goal of learning faithful density field first and then render high-fidelity ground-view panoramas. While we employed a flexible approach to represent geometry using explicit volume density and volumetric rendering, an end-to-end learning approach alone is inadequate for restoring geometry using only satellite-ground image pairs. Upon examining the tasks and satellite-ground image pairs, we identified two main factors that may impede geometry learning, which has been overlooked in previous works on satellite-to-ground view synthesis. Firstly, the sky is an essential part of ground scenes but is absent in the satellite view, and it is nearly impossible to learn a faithful representation of the infinite sky region in each image using explicit volume density. Secondly, differences in illumination among the ground images during training make it challenging to learn geometry effectively.
With the above intuitive observation, we propose two supervision signals, the _non-sky opacity supervision_ and _illumination injection_, to jointly learn the density fields in a volumetric rendering form. The _non-sky opacity supervision_ compels the density field to focus on the satellite scene and ignore the infinity regions, whereas the _illumination injection_ learns the illumination from sky regions to further regularize the learning density field. By learning the density field, our Sat2Density approach goes beyond the center ground-view panorama synthesis from the training data and achieves the ground-view panorama video synthesis with the best spatial-temporal consistency. As shown in Fig. 1, our Sat2Density continuously synthesizes the panorama images along the camera trajectory. We evaluated the effectiveness of our proposed approach on two large-scale benchmarks [22, 34] and obtained state-of-the-art performance. Comprehensive ablation studies further justified our design choices.
The main contributions of our paper are:
* We present a geometric approach, Sat2Density, for ground-view panorama synthesis from satellite images in end-to-end learning. By explicitly modeling the challenging cross-view synthesis task in the density field for the 3D scene geometry, our Sat2Density is able to synthesize high-fidelity panoramas on camera trajectories for video synthesis without using any extra 3D information out of the training data.
* We tackle the challenging problem of learning high-quality 3D geometry under extremely large viewpoint changes. By analyzing the unique challenges that arise with this problem, we present two intuitive approaches _non-sky opacity supervision_ and _illumination injection_ to compel the density learning to focus on the relevant features in the satellite scene presented in the paired data while mitigating the effects of infinite regions and illumination changes.
* To the best of our knowledge, we are the first to successfully learn a faithful geometry representation from satellite-ground image pairs. We believe that not only do our new findings improve the performance of ground-view panorama synthesis, but the learned faithful density will also provide a renewed understanding of the relationship between satellite and ground-view image data from a 3D geometric perspective.
## 2 Related Work
### Satellite-Ground Cross-View Perception
Both ground-level and satellite images provide unique perspectives of the world, and their combination provided us with a more comprehensive way to understand and perceive the world from satellite-ground visual data. However, due to the drastic viewpoint changes between the satellite and ground images, poses several challenges in geo-localization [34, 23, 24, 25], cross-view synthesis [22, 11, 12, 20, 26], overhead image segmentation with the assistance of ground-level images [30], geo-enabled depth es
timation [29], predicting ground-level scene layout from aerial imagery [33].
To address this challenge, many previous works have proposed various approaches to model and learn the drastic viewpoint changes, including the use of homography transforms [20], additional depth or semantic supervision [26, 12, 11], transformation matrices [34], and geospatial attention [30], among others. Despite effectiveness, these approaches mainly address the challenge on the image level instead of the 3D scenes.
Most recently, Shi _et al_. [22] proposed a method to learn geometry in satellite scenes implicitly using the height (or depth) probability distribution map, which achieved better results in synthesized road and grassland regions through their geometry projection approach. However, their learned geometry has limited effectiveness as the rendered satellite depth cannot accurately recognize objects. We go further along the line to focus on the 3D scene geometry conveyed in the satellite-ground image pairs. We demonstrate that the faithful 3D scene geometry can be explicitly decoded and leveraged with an appropriate representation and supervision signals, to obtain high-fidelity ground-view panoramas. Besides, we believe that our study brings a novel perspective to rethink satellite-ground image data for many other challenging problems.
### Neural Radiance Field
Benefiting from the flexibility of density field in volumetric rendering [16], faithful 3D geometry can be learned from a dense number of posed images [4, 13, 1, 3]. Recent works [2, 31, 32] based on NeRF have shown that 3D representation can be learned even with only a few views. In a co-current work [28], it is also pointed out that the flexibility of the density field helps to learn the 3D geometric structure from a single image by disentangling the color and geometry, which allows neural networks to capture reliable 3D geometry in occluded areas.
Our goal can be viewed as an extremely challenging problem of density-based few-view synthesis with extremely large viewpoint changes, which was not studied well in previous works. In our study, we demonstrated the possibility of learning faithful geometry in the volumetric rendering formulation, shedding light on the most challenging cross-view configurations for novel view synthesis.
## 3 The Proposed Sat2Density
Figure 2 illustrates the computation pipeline for our proposed Sat2Density. Given the input satellite image \(I_{\text{sat}}\in\mathbb{R}^{H\times W\times 3}\) for the encoder-decoder DensityNet, we learn an explicit volume of the density field \(V_{\sigma}\in\mathbb{R}^{H\times W\times N}\). We render the panorama depth and project the color of the satellite image along rays in the ground view to generate an initial panorama image and feed them to the RenderNet. To ensure consistent illumination of the synthesis, the histogram of color in the sky regions of the panorama is used as a conditional input for our method.
### Density Field Representation
We encode an explicit volume density \(V_{\sigma}\in\mathbb{R}^{H\times W\times N}\) as a discrete representation of scene geometry and parameterize it using a plain encoder-decoder architecture in DensityNet \(G_{\text{dms}}\) to learn the density field:
\[V_{\sigma}=G_{\text{dms}}(I_{\text{sat}})\quad\text{ s.t. }V_{\cdot,\cdot,\cdot}\in[0,\infty). \tag{1}\]
where the density information \(v=V(x,y,z)\) is stored in the volume of \(V\) for the spatial location \((x,y,z)\). For any queried location that does not locate in the sample position of the explicit grid, tri-linear interpolation is used to obtain its density value. Suppose the size of the real-world cube is \((X,Y,Z)\) in the satellite image, two corner cases are considered: 1) for the locations outside the range of the world size covered by the satellite image, we set their density to zero, and 2) we set the density in the lowest volume (_i.e_., V(x,y,z=0)) to a relatively large value (\(10^{3}\) in our experiments), which made an assumption that all ground regions are solid.
With the density field representation, the volumetric rendering techniques [14] are applied to render the depth \(\hat{d}\) and
Figure 2: The architecture and training process of Sat2Density. Sat2Density consists of two components, DensityNet and RenderNet. We optimize Sat2Density by adversarial loss, illumination injection loss, and opacity loss. See text for details.
opacity \(\hat{O}\) along the queried rays by
\[\hat{d}=\sum_{i=1}^{S}T_{i}\alpha_{i}d_{i},\qquad\hat{O}=\sum_{i=1}^{S}T_{i} \alpha_{i}, \tag{2}\]
where \(d_{i}\) is the distance between the camera location and the sampled position, \(T_{i}\) denotes the accumulated transmittance along the ray at \(t_{i}\), and \(\alpha_{i}\) is the alpha value for the alpha compositing, written by
\[\alpha_{i}=1-\exp\left(-\sigma(\mathbf{x}_{i})\delta_{i}\right)\quad T_{i}= \prod_{j=1}^{i-1}\left(1-\alpha_{j}\right). \tag{3}\]
Unlike NeRF [16] that learns the radiance field to render the colored images, we take a copy-paste strategy to compute the colored images by copying the color from the satellite image along the ray via bilinear interpolation for image rendering in
\[\hat{c}_{\mathrm{map}}=\sum_{i}^{S}T_{i}\alpha_{i}c_{i}, \tag{4}\]
where \(c_{i}=c(x_{i},y_{i},z_{i})=I_{\mathrm{sat}}(\frac{x_{i}}{S_{x}}+\frac{H}{2}, \frac{y_{i}}{S_{y}}+\frac{W}{2})\). \(S_{x}\) and \(S_{y}\) are the scaling factors between the pixel coordinate of the satellite image and the grid coordinate in \(V_{\sigma}\). To keep the simplicity of our Sat2Density, we did not use the hierarchical sampling along rays for the computation of depth, colors, and opacity.
Thanks to the flexibility of volumetric rendering, for the end task of ground-view panorama synthesis, it is straightforward to render the ground-view depth, opacity, and the (initial) colored image. For the subsequent RenderNet, it takes the concatenated tensor of the rendered panorama depth and colors as input to synthesize the high-fidelity ground-view images.
Learning density could draw precise geometry information of the scene, but it is hard to acquire real density information of the satellite scene only from the satellite-ground image pairs. In our work, we propose two supervisions: _non-sky opacity supervision_ and _illumination injection_ to improve the quality of the 3D geometry representation.
### Supervisions from Sky/Non-Sky Separation
**Non-Sky Opacity Supervision.** We draw inspiration from the study of panorama image segmentation [15, 35], which treats the sky region as a meaningful category in the segmentation task. By taking the off-the-shelf sky segmentation model [35] to obtain the sky masks for the training panorama images, we tackle the _infinity issue_ with a novel _non-sky opacity supervision_ proposed. Based on our discussion in Sec. 1, the pseudo sky masks provide a strong inductive basis to faithfully learn the density fields for our proposed Sat2Density in a simple way.
Denoted by \(\mathcal{R}\) and \(\mathcal{R}^{\prime}\) the non-sky/sky regions of the ground-view panorama, the loss function \(\mathcal{L}_{\mathrm{snop}}\) of our proposed non-sky opacity supervision reads to
\[\mathcal{L}_{\mathrm{snop}}=\sum_{\mathbf{r}\in\mathcal{R}}\left\|\hat{O}( \mathbf{r})-1\right\|_{1}+\sum_{\mathbf{r}\in\mathcal{R}^{\prime}}\left\|\hat {O}(\mathbf{r})\right\|_{1}. \tag{5}\]
**Illumination Injection from Sky Regions.** While the density field works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena for ground-view panorama synthesis. More importantly, due to the lacking of correspondence from the sky regions in ground-view images to the paired satellite image, we find that the variable illumination in the ground images is a key factor preventing the model to learn faithful 3D geometry.
Accordingly, we present an illumination injection from sky regions of the panorama. For the sake of simplicity of design, we choose the RGB histogram information in the sky regions as the illumination hints. In our implementation, we first cut out the sky part from the ground image, then calculate the RGB sky histogram with \(N\) bins. To further exploit the representational ability of sky histograms, we follow the style encoding scheme proposed in SPADE [17] to transform the sky histogram into a fixed-dim embedding, which allows our Sat2Density learn reliable information of complicated illumination from the sky histogram for the RenderNet. From our experiments, we also find that the injection of sky histogram into the RenderNet could further improve the quality of explicit volume density by encouraging the DensityNet to focus on the stuff regions rather than being disturbed by the per-image illumination variations issue.
By combining the above two approaches, we solve the per-image illumination variations and infinity issue, and let our model focus on learning the scene geometry relationship between satellite and ground view (see Fig. 3). Thus, a plausible geometry representation is achieved. Given a location, the RenderNet could render any ground panorama from the rendered depth and initial panorama. Besides, By utilizing the proposed sky histogram illumination injection approach, our model could own illumination transfer capabilities.
### Loss Functions
Sat2Density is trained with both reconstruction loss and adversarial loss. For reconstruction loss, we follow GAN-based syntheses works, using a combination of the perceptual loss [7], L1, and L2 loss. In the adversarial loss, we use the non-saturated loss [9] as the training objective. Besides, \(\mathcal{L}_{snop}\) is used for opacity supervision. For illumination learning, we follow the SPADE [18] use a KL Divergence loss. Last, in discriminator, we use the feature match
ing loss & a modified multi-scale discriminator architecture in [27]. Details can be found in the supplemental material.
## 4 Experiments
### Implementation Details
We train our model with \(256\times 256\) input satellite images and output a \(256\times 256\times 65\) implicit volume density and finally predict a \(128\times 512\) 360\({}^{\circ}\) panorama image, which is the same as the setting in [22] for a fair comparison. The maximum height modeled by our implicit volume density is 8 meters, which is an empirical setup. We approximate the height of the street-view camera as 2 meters with respect to the ground plane, which follows Shi _et al_. [22]. The bins of histogram in each channel are 90. The model is trained in an end-to-end manner with a batch size of 16. The optimizer we used is Adam with a learning rate of 0.00005, and \(\beta_{1}=0\), \(\beta_{2}=0.999\). Using a 32GB Tesla V100, the training time is about 30 hours for 30 epochs. As for the architectures of DensityNet and RenderNet, they share most similarities with the networks used in Pix2Pix[6]. More details about the model architecture and training details can be found in the supplemental material.
### Evaluation Metrics
We use several evaluation metrics to quantitatively assess our results. The low-level similarity measures include root-mean-square error (RMSE), structure similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and sharpness difference (SD), which evaluate the pixel-wise similarity between two images. We also use high-level perceptual similarity [36] for evaluation as in previous works. Perceptual similarity evaluates the feature similarity of generated and real images. We employ the pretrained AlexNet [10] and Squeeze [5] networks as backbones for evaluation, denoted as \(P_{\text{Alex}}\) and \(P_{\text{squeeze}}\), respectively.
### Dataset for Ground View Synthesis
We choose CVUSA [34] and CVACT(Aligned) [22] datasets for comparison in the central ground-view synthesis setting, following Shi _et al_. [22]. CVACT(Aligned) is a well-posed dataset aligned in Shi _et al_. [22], with 360\({}^{\circ}\) horizontal and 180\({}^{\circ}\) vertical visualization in panorama ground truth. Hence, we selected it for controllable illumination visualization and controllable video generation. For the dataset CVUSA, we only use it for center-ground view synthesis as their upper and lower parts of the panoramic image are trimmed for the geo-localization task, and the number of trimmed pixels is unknown [34]. During training and testing, we considered the street-view panoramas in the CVUSA dataset as having a 90\({}^{\circ}\) vertical field of view (FoV), with the central horizontal line representing the horizon. CVACT(Aligned) contains 26,519 training pairs and 6,288 testing pairs, while CVUSA contains 35,532 training pairs and 8,884 testing pairs. We did not choose other available datasets built in the city scene, such as OP[21] and VIGOR[37], since their GPS information is less accurate in urban areas compared to open rural areas, and poorly posed satellite-ground image pairs are not suitable for our task.
### Ablation Study
In this section, we conduct experiments to validate the importance of each component in our framework, including _non-sky opacity supervision_, _illumination injection_, and whether to concatenate depth with the initial panorama before sending it to the RenderNet. We first present quantitative comparisons in Table 1 for the center ground-view synthesis setting. It is evident that the illumination injection most affects the quantitative result, at the same time, only adding non-sky opacity supervision will lead to a little drop in the quantitative score. But combining the two approaches will lead to better scores. Moreover, the comparison on whether concatenate depth to the RenderNet shows almost equal results in terms of quantitative comparison.
Figure 3 shows some samples from the rendered panorama video and their corresponding depth maps. The results show that without the proposed components (baseline), the rendered depth seems meaningless in the upper half, while the lower regions look good. We attribute this phenomenon to the fact that the lower bound of the panorama is the ray that looks down, which is highly related to the ground region near the shooting center in the satellite. It can be easily learned by a simple CNN with a simple geometry projection, which also explains why the work in [22] can render the ground region well.
Compared to the baseline, adding the illumination injection can make the rendered depth look better, but the trees' density looks indistinct, and the sky region's density is still unclear. While only adding non-sky opacity supervision, the air region's opacity turns to zero, but the area between air and the ground is still barely satisfactory. The supervision did clear the sky region in the volume density, but the inner region between the sky and the ground is also smoothed. This is because such coarse supervision cannot help the model recognize the complex region.
By combining both strategies (Baseline+Illu+Opa), we can achieve a plausible 3D geometry representation that can generate a depth map faithful to the satellite image and the reference illumination. The volume density is clear compared to the above settings, and we can easily distinguish the inner regions.
Furthermore, when depth is incorporated into the rendering process, the resulting images tend to emphasize regions with depth information. This reduces the likelihood of generating objects in infinite areas randomly and leads to a synthesized ground view that more closely resembles the satellite scene, which can be observed from the video.
Figure 3: Ablation study on CVACT (Aligned) dataset. In the first row, the picture on the upper left is the input image. Each point from left to right is related to the bottom four rows from up to down. The remaining five images in the first row are the density rendered from the input satellite image following the setting (b-f) one by one. The images in the second row are the satellite depth calculated following the setting (b-f) one by one. ‘Baseline’ means baseline, ‘Opa’ means add _non-sky opacity supervision_, ‘Illu’ means add _illumination injection_, and ‘Sat2Density’ is our final result, compared to ‘Baseline+Illu+Opa’, we concatenate the depth map and initial panorama together to send to the RenderNet rather than only the initial panorama. _The video could be seen on the project page._
### Center Ground-View Synthesis Comparison
In the center ground-view synthesis setting, we compare our method with Pix2Pix [6], XFork [19], and Shi _et al._[22]. Pix2Pix is a classic GAN-based model for image-to-image translation. XFork is another effective network based on conditional GAN for cross-view synthesis. Both of them are content-based but ignore the 3D geometry connections between the two views. Shi _et al._[22] is the first geometry-guided synthesis model, which represents the 3D geometry in the depth probability MPI, showing brilliant results in the center ground-view synthesis setting.
**Quantitative Comparison.** As presented in Table 2, it is evident that Sat2Density achieves the best performance on all scores, including both low-level and perceptual similarity measures. Even when choosing a ground image randomly as the illumination image for the Sat2Density-sin, our model still outperforms other methods.
Moreover, a combined analysis of the quantitative results of Sat2Density-sin and controllable illumination in Figure 5 reveals that illumination can significantly affect both common low-level and perceptual similarity measures, although the objects in the scene remain unchanged. As a result, it is more important to consider qualitative comparisons and video synthesis results.
**Qualitative Comparison.** In Figure 4, we find that the condition-GAN based methods can only synthesize good-looking ground images, but can not restore the geometry information from the satellite scene. Shi _et al._[22] learn a coarse geometry representation, so the 3D information in the ground region is more reliable. For our method, as discussed in the ablation study, the high-fidelity synthesis (especially in the most challenging regions between the sky and the ground) is approached by learning faithful density representation of the 3D space.
### Controllable Illumination
As shown in Figure 5, it can be found that the sky histogram could easily control the who image's illumination, while the stuff in the satellite remains unchanged, _e.g._ The road's color was changed by giving different illumination, but the shape remains unchanged.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Comparison & RMSE \(\downarrow\) & SSIM\({}^{\dagger}\) & PSNR\({}^{\dagger}\) & SD \(\uparrow\) & \(P_{\text{circ}}\downarrow\) & \(P_{\text{open}}\downarrow\) \\ \hline Base & 48.40 & 0.4491 & 14.67 & 12.76 & 0.3772 & 0.2486 \\ Base+Opa & 48.39 & 0.4431 & 14.63 & 12.70 & 0.3847 & 0.2525 \\ Base+Illu & 41.62 & 0.4689 & 15.96 & 12.90 & 0.3497 & 0.2225 \\ Base+Opa+Illu & 40.71 & 0.4710 & 16.16 & 12.83 & **0.3329** & 0.2154 \\ Sat2Density & **39.76** & **0.4818** & **16.38** & **12.90** & 0.3333 & **0.2145** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation study results on CVACT (Aligned) dataset. ‘Base’ means baseline, ‘Opa’ means add _non-sky opacity supervision_, ‘Illu’ means add _illumination injection_, and ‘Sat2Density’ is our result, compared to ‘Baseline+Opa+Illu’, we concatenate the depth map and initial panorama together to send to the RenderNet rather than only the initial panorama.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline & Method & RMSE \(\downarrow\) & SSIM\({}^{\dagger}\) & PSNR\({}^{\dagger}\) & SD \(\uparrow\) & \(P_{\text{circ}}\downarrow\) & \(P_{\text{open}}\downarrow\) & \(\frac{\text{reference}}{\text{reference}}\) \\ \hline \multirow{4}{*}{
\begin{tabular}{l} Pix2Pix \\ XFork [19] \\ Shi _et al._[22] \\ \end{tabular} } & 497.5 & 0.3826 & 14.38 & 12.69 & 0.4654 & 0.3096 & 10.29 \\ & XFork [19] & 48.95 & 0.3710 & 14.50 & 12.32 & 0.4638 & 0.3262 & 17.24 \\ & Shi _et al._[22] & 48.50 & 0.4727 & 14.59 & 1.231 & 0.4095 & 0.2708 & 109.88 \\ & Sat2Density-sin & 44.47 & 0.4129 & 15.34 & 12.15 & 0.3734 & 0.2404 & 33.12 \\ & Sat2Density & **39.76** & **0.4818** & **16.38** & **12.90** & **0.3339** & **0.2145** & **33.12** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison with existing algorithms on the CVACT (Aligned) dataset and CVUSA dataset in cente ground-view synthesis setting. The ‘\(\uparrow\)’ means the higher score indicates better performance and ‘\(\downarrow\)’ is the opposite. ‘Sat2Density-sin’ means we randomly choose a single ground image as the illumination when inference. The inference time was tested on a Tesla V100 GPU.
Figure 4: Example images generated by different methods in center panorama synthesis task. The top two rows show the results on CVACT (Aligned) dataset, and the bottom two rows show the results on the CVUSA dataset.
### Ground Video Generation
In Figure 6, we compare the rendered satellite depth, and synthesized ground images from a camera trajectory with the expansion of Shi [22]. Shi [22] focus on synthesizing ground panorama in the center of the satellite image, as they learn geometry by depth probability map, we expand their work by moving the camera position when inference. We also show the rendered depth maps that correspond to the synthesized ground images. It is worth noting that Shi [22] cannot render a depth map for novel views, due to the intrinsic flaw of the depth probability representation.
From the synthesized satellite depth, we observe that Shi [22] can only render a very coarse satellite depth, and is hard to recognize most regions. In contrast, trees and ground regions can easily distinguish from our satellite depth, and the depth in ground regions appears smooth. Additionally, by volume rendering, we can render depth in any view direction, as shown in Figure 6 (d).
Furthermore, we find that the rendered ground video by Shi [22] has little consistency due to the unfaithful 3D geometry representation, as evidenced by the inconsistencies present in the trees and sky. These results demonstrate that Sat2Density is capable of rendering temporal and spatially consistent videos.
## 5 Discussion and Limitations
Although our Sat2Density learns a faithful 3D geometry representation, it still has some limitations. For instance, the density of trees and the visibility of houses are not perfect in our results, which would come down to the following reasons. Firstly, the one-to-one correspondence between satellite and ground images is not optimal, as having multiple ground panoramas corresponding to one satellite image would result in more precise density. Additionally, images taken on different days may introduce transient objects that our approach is unable to handle. Secondly, the projected color map sent to the RenderNet may be too coarse in the region between sky and ground, which could impact the final result. Finally, well-aligned image pairs are required to learn the geometry, so we are unable to evaluate the effectiveness of our approach in city scenes for more coarse GPS precision in the city. Therefore, while our work is a promising start for learning geometry from cross-view image pairs, there are still many challenges that need to be addressed.
## 6 Conclusion
In this paper, we propose a method, Sat2Density, that addresses the [...] difference issue to make geometry learning possible. By leveraging the learned density, our model is capable of synthesizing spatially and temporally consistent ground videos from satellite images even with only one-to-one satellite-ground image pairs for training. To the best of our knowledge, our method represents the first successful attempt to learn a precise 3D geometry from satellite-ground image pairs, which significantly advances the recognition of satellite-ground tasks from a geometric perspective.
|
2308.11478 | Towards Autonomous Excavation Planning | Excavation plans are crucial in construction projects, dictating the dirt
disposal strategy and excavation sequence based on the final geometry and
machinery available. While most construction processes rely heavily on coarse
sequence planning and local execution planning driven by human expertise and
intuition, fully automated planning tools are notably absent from the industry.
This paper introduces a fully autonomous excavation planning system. Initially,
the site is mapped, followed by user selection of the desired excavation
geometry. The system then invokes a global planner to determine the sequence of
poses for the excavator, ensuring complete site coverage. For each pose, a
local excavation planner decides how to move the soil around the machine, and a
digging planner subsequently dictates the sequence of digging trajectories to
complete a patch. We showcased our system by autonomously excavating the
largest pit documented so far, achieving an average digging cycle time of
roughly 30 seconds, comparable to the one of a human operator. | Lorenzo Terenzi, Marco Hutter | 2023-08-22T14:44:16Z | http://arxiv.org/abs/2308.11478v1 | # Towards Autonomous Excavation Planning
###### Abstract
Excavation plans are crucial in construction projects, dictating the dirt disposal strategy and excavation sequence based on the final geometry and machinery available. While most construction processes rely heavily on coarse sequence planning and local execution planning driven by human expertise and intuition, fully automated planning tools are notably absent from the industry. This paper introduces a fully autonomous excavation planning system. Initially, the site is mapped, followed by user selection of the desired excavation geometry. The system then invokes a global planner to determine the sequence of poses for the excavator, ensuring complete site coverage. For each pose, a local excavation planner decides how to move the soil around the machine, and a digging planner subsequently dictates the sequence of digging trajectories to complete a patch. We showcased our system by autonomously excavating the largest pit documented so far, achieving an average digging cycle time of roughly 30 seconds, comparable to the one of a human operator.
## 1 Introduction
The construction industry is essential for building infrastructure for core human needs like shelter and transport. Yet, it faces persistent issues: lagging productivity, rising labor costs (McKinsey, 2016), and exacerbated labor shortages (McKinsey, 2022). Moreover, construction sites can be perilous, leading to high injury and fatality risks for workers (Bureau of Labor Statistics, 2022).
Advancements in automation, as seen in manufacturing, could be the answer. Especially, the automation of earthwork tasks using excavators offers promise due to their repetitive nature and the precision needed for modern computational landscape design (Hurkxkens, 2020). While the complexity of these tasks has historically posed hurdles, recent research shows promising opportunities for creating fully autonomous excavation systems.
Figure 1: **(a)**: terrain before the excavation, the excavation area is indicated in violet. **(b)**: excavated pit, the sides and bottom are highlighted in violet and brown.
A global plan outlines a sequence in which the excavator positions itself throughout the site, ensuring it avoids obstacles and doesn't get trapped or hindered by ill-advised excavation sequences. The local plan, on the other hand, delves into the specifics of the earth movement around the excavator, finetuning the overarching global strategy. It determines the precise excavation methods to achieve the desired shape.
The challenge in global excavation planning lies in its intertwined nature of navigation and earthworks planning. Mistakes in the excavation sequence or suboptimal soil disposal strategies can halt the excavator, impeding progress elsewhere on the site. Furthermore, initial planning decisions can profoundly impact the entire operation, with consequences manifesting even hours later. While the literature offers limited guidance, the global planner has to navigate obstacles and establish soil disposal constraints. Moreover, in expansive sites with numerous excavation zones, efficiently covering all areas is more complex than converting the task to a familiar graph traversal challenge like the traveling salesman problem. The interconnectedness of zones is influenced by the specific excavation methods employed. Zones might need multiple passes before complete excavation, and there's no mandate to return to the starting position.
Once the global strategy is in place, the local planner focuses on execution, managing the soil in the excavator's vicinity and ensuring transportation to the assigned disposal point without hampering subsequent navigation. Depending on the excavation design, this may necessitate several soil reconfigurations. The excavation strategy must also regard the intended final design, guiding the excavator to achieve this shape optimally using the fewest scoops. It must also be robust enough to adapt to unexpected issues like spillages or wall collapses and efficient enough to reach the target design swiftly, avoiding numerous minor adjustments.
In recent years, there have been advances in automating tasks like trenching (Cannon, 1999; Jud et al., 2021a; Kumparak, 2022), loading materials onto trucks (Stentz et al., 1998; Zhang et al., 2021), and embankments (Jud et al., 2021a). But these methods often focus on specific geometries or site sizes. While large-scale excavations have been attempted with coverage routines (Kim et al., 2012; Choset, 2000), they seldom address local soil rearrangement and most rely heavily on heuristics (Jud et al., 2021a; Cannon, 1999; Jud et al., 2019).
This work presents a holistic solution for autonomously excavating large structures, including rearranging soil and moving the machine. We treat global excavation planning as a coverage problem, similar to (Kim et al., 2012). We develop a method to find the sequence of excavator base poses that allows the entire workspace to be excavated while satisfying constraints on soil disposal. The workspace is divided into cells, i.e., coverable areas. In coverage navigation, commonly employed in applications like floor cleaning robots, a 'coverable area' refers to a specific region within a larger workspace that can be navigated entirely, accessed, or serviced by a robot using its predefined navigation primitives or patterns. The order of cells is determined using graph navigation algorithms that preserve the connectivity of the non-excavated regions
while minimizing path length. We then use dynamic programming to optimize the entry and exit points of coverable cells covered using simple zig-zag navigation primitives to simplify the dirt-handling problem, as shown in Figure 5(b).
Earthmoving machine operators typically redistribute soil around the base of the excavator before moving to a new base pose. This method is more efficient regarding the volume of soil moved per hour instead of constantly alternating between digging and driving the machine. Thus, we define the problem of local earthwork planning as the task of proficiently redistributing soil around the base of the excavator to execute a global excavation plan. The area where this occurs, reachable by the excavator's arm from a fixed base pose, is termed the local workspace. The local planner selects the digging area based on the current excavated and desired geometry. The dump location is chosen primarily based on its proximity to the designated area for soil disposal. The digging planner uses Bayesian optimization to quickly determine the initial condition of a parametrized digging trajectory. The navigation stack employs an RRT* planner and a pure pursuit controller to minimize travel distance and maximize clearance from obstacles and hazardous excavated areas. An overview of the system and its components is shown in Figure 2.
## 2 Related Work
We build on prior work in research and field deployment of autonomous excavators, global earthworks planning, local earthworks planning, and trajectory planning for digging.
**Advancements in Excavator Automation** There has been significant progress in the automation of excavators in recent years. Early efforts, such as LUCIE (Bradley and Seward, 1995), focused on performing dig cycles without environmental perception. More recent work has incorporated range sensors (Cannon, 1999; Stentz et al., 1998) to enable autonomous digging in varied terrain profiles and the loading of soil onto trucks. The use of state machines and kinematic motion primitives has also been explored for autonomous trenching (Groll et al., 2019), with the addition of LIDAR-based elevation mapping and velocity and force control for the arm in (Jud et al., 2019). Built Robotics has recently developed an automation kit for trenching tasks (Kumparak, 2022).
Other recent research has focused on more complex excavation tasks, such as creating embankments (Jud et al., 2021). Our work aims to extend this progress by addressing excavation tasks of different geometries and constraints on dirt disposal and navigation.
**Global Earthworks Planning** The global earthworks planning problem involves devising a set of poses for the excavator during a task and determining a sequence of digging and dirt disposal areas. This problem has been addressed in a variety of ways in prior research.
In one approach, (Woo et al., 2018) employed a discretized workspace, using a reinforcement learning agent to generate a value function over the grid. The cell with the highest estimated value is selected for digging. However, this approach overlooks dirt handling and fails to ensure the reachability of the subsequent dig cell. On the other hand, (Jud et al., 2021; Groll et al., 2019) proposed task-specific navigation trajectories for excavating an embankment and a trench, respectively.
Global earthworks planning can also be viewed as a coverage problem, where the objective is to find a sequence of base poses that allow the arm to reach and excavate the entire area. Coverage path planning (CPP) algorithms have been widely used in diverse fields such as agriculture, house cleaning robots, underwater exploration, and mapping via unmanned aerial vehicles (Galceran and Carreras, 2013). Commonly, these algorithms presuppose the space to be covered is known beforehand.
Coverage algorithms such as Wavefront and Spanning Tree Coverage prove efficient when considering discretized space. These, however, can generate trajectories involving backtracking (impossible during digging) and possibly complex paths, thereby complicating dirt handling. If the space is continuous, the initial step of a CPP algorithm often involves dividing the space into coverable zones using simple motion subroutines like a standard zigzag trajectory. For this purpose, decomposition methods such as Morse (Acar et al., 2002) and Boustrophedon (Choset, 2000) can be employed.
(Kim et al., 2012) proposed a coverage path planning algorithm for navigating multiple earthwork systems operating simultaneously in a known environment. This approach utilized a Morse function to partition
the space and solved the Travelling Salesman Problem (TSP) to compute the sequence of cells to be processed. This method, however, has several limitations, including reliance on edge adjacency to establish cell connectivity, a large set of navigation patterns, and no consideration for dirt handling. Additional research utilized dynamic programming techniques to determine both a global cell ordering and a viable path. Specifically, the work addressed the TSP-CPP problem for UAVs using dynamic programming and straightforward subroutines (Xie et al., 2019). Furthermore, a hierarchical TSP problem was investigated (Cao et al., 2020).
**Local Earthworks Planning** The challenge of local earthworks planning has been largely overlooked in the literature. The solution is relatively straightforward for specific tasks like trenching, with the soil being dumped beside the trench (Groll et al., 2019). In other works, the problem is simplified by assuming that the dirt is immediately disposed of or dumped into a truck (Cannon, 1999).
Prior approaches to this problem have used a semi-circular digging workspace in front of the excavator. This concept has been reflected in various works, such as (Singh and Cannon, 1998) and (Seo et al., 2011).
**Digging Planners and Controllers** A digging planner is responsible for planning the sequence of digging trajectories to complete a task within a given digging zone. The problem is typically divided into two subproblems: the design of a digging planner that selects the attack point (the point where the digging trajectory begins on the excavation surface) and the design of a 2D digging trajectory planner/controller that executes a digging trajectory given a predefined attack point and a digging plane.
(Singh and Cannon, 1998) use expert operator heuristics to divide the digging zone into sections along the radial and tangential directions relative to the excavator base, which are dug sequentially until a certain precision is reached. However, these approaches can force the 2D digging planner to repeatedly dig low-volume scoops until the required precision in a subsection is reached. They may also make the plan brittle to wall collapses or soil spillage during the excavation of future adjacent sections or during dirt transport to the dump zone. In contrast, (Zhang et al., 2021) suggests using a data-driven approach to select the attack point of the digging trajectory by learning from human operator preferences.
(Son et al., 2020) use dynamic motion primitives to efficiently learn from expert data, with a modulation module to adapt the trajectory to different soil types. (Lee et al., 2021) use model predictive control (MPC) to plan digging trajectories in simulation. However, whether the system can bridge the sim-to-real gap without accurate soil modeling and a model of the machine's dynamics is uncertain. (Egli et al., 2022) successfully demonstrate the deployment of a soil-adaptive digging controller trained with reinforcement learning (RL) in simulation only.
### Contributions
This article presents a novel and comprehensive approach for autonomously excavating large sites using a single excavator. Demonstrated on a legged Menzi Muck M545 excavator, our framework offers a solution to specify the excavation area, constraints on navigation, and dirt disposal in georeferenced coordinates. It introduces a global workspace planner that calculates the required base poses, guaranteeing they are collision-free and achievable. In contrast to existing methods, we propose using cell corner adjacency, as these corners represent local subroutines' start or endpoints, to facilitate easier planning.
The local workspace planner in our approach is tasked with deciding soil redistribution around the excavator without base movement. Here, our system improves upon existing designs by creating five adaptable zones in the local workspace. These zones are dynamic in size, based on the required excavation geometry and serve as areas for digging and dumping soil.
The digging trajectory planner, another integral part of our system, determines the precise excavation point and trajectory. In a significant improvement over previous methods, we employ Bayesian optimization to identify the free parameters of a digging trajectory that greedily maximizes scooped soil volume.
Our system was put to the test by digging an approximately rectangular pit, measuring 11.6 x 15.6 x 1 m, which required planning intermediate dirt dumping sites. The successful completion of this complex task, involving hundreds of load cycles and moving about 300 tons of material within half a work day, demonstrates the system's capabilities and reliability in real-world excavation scenarios.
Notably, our system holds the distinction of being the first of its kind capable of autonomously performing diverse excavation tasks such as digging trenches, pits, and handling more complex projects with dirt disposal
and navigation constraints. Its development marks a significant step forward in excavation automation, moving beyond the limitations of previous systems that were constrained to simple 2D tasks or required extensive human intervention. It is also the first system to demonstrate reliable autonomous digging for hundreds of load cycles and the movement of approximately 300 tons of material in a half work day.
Highlighting our paper's key contributions:
* A global workspace planner that decomposes the workspace using the Boustrophedon algorithm, finds the related quotient graph that encodes the connectivity of the workspace, computes the minimum branching tree of the graph, and finds the visiting sequence of the nodes with a postorder traversal to ensure the connectivity of yet-to-be-excavated areas. The planner also takes into account dumping constraints and the presence of obstacles when using dynamic programming to choose the start and end point of each cell.
* A local workspace planner that determines how to move dirt around the excavator without moving the base, considering user input and constraints on dirt disposal. The planner selects dig areas based on the discrepancy between the current soil geometry and the desired one and dump areas based on the distance to a user-designated dumping zone.
* A digging trajectory planner that aims at reaching the target geometry in a dig area by using Bayesian optimization to find the free parameters of a digging trajectory, which greedily maximizes scooped soil volume.
* A safe and robust navigation system based on a sampling planner and pure pursuit controller that allows the excavator to navigate to the next base pose while avoiding obstacles and maintaining a safe distance from the excavation site.
* An experimental validation and quantitative results that set a new benchmark for autonomous excavation performance
* The introduction of a novel dataset comprising authentic building silhouettes and urban crop layouts, the development of a software program for generating realistic excavation shapes through procedural methods, and a benchmark to assess the effectiveness of the excavation planning system. All associated code and resources are open-sourced at the digbench repository.
## 3 Method
**Heap** Heap is an M545 Menzi Muck 12-ton legged excavator that has been adapted for autonomous forestry operations (Jelavic et al., 2022), rock wall construction (Johns et al., 2020), and digging tasks (Jud et al., 2021; Egli et al., 2022). The chassis is equipped with servo valves and pressure sensors that enable the deployment of a force-based chassis balancing controller (Hutter et al., 2017), which significantly aids navigation over challenging terrain. The system is equipped with an Ouster OS-0 128 LIDAR for mapping, localization, and online site mapping during operations. A GPS receiver with RTK correction and an IMU is mounted on the cabin. IMUs are also mounted on each arm link to estimate the kinematic position of the shovel. An encoder measures the cabin's orientation relative to the base. For more details on the hardware setup, see (Jud et al., 2021).
**System Overview** The deployed system integrates mapping, localization, and various planning components to enable autonomous excavation. It incorporates a user interface for excavation planning, global earthwork planning, local excavation planning, a digging planner, and a navigation sampling-based planner, as depicted in Figure 2. The planning modules can use different digging controllers, ranging from straightforward kinematic controllers to advanced reinforcement learning strategies.
### Mapping and State Estimation
**Mapping** To plan excavation tasks effectively, the system requires a precise construction site map. This map, often represented as an elevation map, outlines the excavation area, target depth, dump zones, and any obstacles. Additionally, it can be used to differentiate between the original terrain and newly excavated soil.
To produce an initial map for global planning, we manually traversed the excavation site and collected point clouds. These were then registered in a georeferenced frame with a voxel size of 0.05 m using Open3D SLAM (Jelavic et al., 2022). This ICP-based SLAM system identifies loop closures through local map
Figure 2: System architecture diagram. Subsystems are grouped into modules indicated in different colors: sensors (blue), mapping and state estimation (violet), excavation planning (orange), motion planning (green), controllers (red). The state machine in the middle of the diagram triggers the execution of the different modules. The arrows represent the flow of information from one subsystem to another.
matching. We registered the map by integrating GPS readings with ICP registration. We used the Earth-centered, Earth-fixed (ECEF) coordinate system for mapping since excavation plans typically define dig points using ECEF or its derivatives. Alternatively, this map can also be acquired using traditional surveying methods (Leica Geosystems, 2023) or other robotic technologies like drones (Wingtra, 2023).
We then transformed the point cloud map into a 2.5D elevation map at a resolution of 0.1 m using the grid map library (Jelavic et al., 2022), as depicted in Figure 3. This map representation is consistently used throughout the paper. We applied a hole-filling filter to the map to counteract minor occlusions from the LIDAR's field of view due to soil irregularities.
**User Input** The system requires several input layers in the form of grid maps. These layers detail the excavation area, the desired depth, the locations of obstacles, and the designated dump sites. Each layer functions as a mapping \(f(x,y)\to z\), where \((x,y)\) represents a position on the map and \(z\) is the value of the layer at that position. In the context of an elevation map, \(z\) corresponds to the terrain's elevation.
Google Earth Pro serves as the tool for defining these layers. It allows users to create georeferenced shapes, allocate attributes, and specify elevation at each vertex. It also provides a 3D and top-down view of the excavation site. The user-defined layers are merged to produce a unified grid map, which the excavation planning system uses (see Figure 4). The integrated map comprises the following layers:
* **Elevation**: This is sourced from a point cloud map, representing the terrain's height.
* **Target elevation**: Defined in Google Earth Pro, this layer marks the intended excavation depth. Users can set this as absolute values or relative to the existing ground level.
* **Excavation mask**: A layer with integer values showing both the excavation and dump sites.
* **Occupancy**: A binary layer detailing terrain traversability, the presence of obstacles, and off-limits areas.
**State Estimation** To estimate the robot's state, we use a graph-based multi-sensor fusion approach (Nubert et al., 2022). The state estimator fuses IMUs, encoders, RTK-GPS, and LIDAR measurements. The graph-based fusion of the LOAM-based LIDAR odometry, as described in CompSLAM (Khattak et al., 2020), together with the RTK-GPS measurements allows the system to be robust to connectivity and GPS outages. A robust state estimator is crucial for the machine to operate reliably for several hours.
Figure 3: **(a)**: Point cloud generated with Open3D SLAM of the excavation site. **(b)**: Elevation map (in blue) and traversability map (black non-traversable) generated from the point cloud map. The traversability has been manually modified to include the fence, which is not visible via the LIDAR.
### Global Excavation Planner
The global excavation planner determines the sequence of poses for the excavator base to ensure the entire excavation area is within arm's reach. The planner design is grounded on the local excavation geometry shown in Figure 4(a). The local digging geometry, illustrated in Figure 4(b), defines the excavator's accessible digging area based on its current base pose.
**Overview** The global planner accepts an excavation map from the user, encoding target dig and dump zones and obstacles. The system includes an outer and inner optimization loop. The inner optimization involves decomposing the dig space into coverable cells with Boustrophedon decomposition, generating a directed quotient graph over the cells, and finding a minimum branching spanning tree to minimize inter-cell movement. Post-order traversal of the tree ensures non-traversal of dug cells and maintains access to remaining cells. Dynamic programming is used to identify the cell corner sequence for minimum travel distance. The outer loop selects the coverage orientation to maximize the covered area while minimizing local workspaces and path length.
**Boustrophedon Decomposition and Quotient Graph** The first step in the process is to decompose the target space using the Boustrophedon decomposition (Choset, 2000) based on a specific coverage orientation. During this decomposition, the space is segmented into different cells as a slice passes over the workspace, altering its connectivity. Figure 6(a) and Figure 6(d) provide examples of such decomposition, demonstrating a vertical coverage orientation. Following the decomposition, the cells' connectivity is represented through a graph, where each node symbolizes one of a cell's four extreme points. The adjacency matrix is then constructed using an equivalence relationship between two vertices \(v_{1}\) and \(v_{2}\), as defined in Equation 1.
\[v_{1}\sim v_{2}\iff\exists c\in C,v_{1}\in c,v_{2}\in c \tag{1}\]
where \(C\) signifies the set of cells. Subsequently, the quotient graph is generated by collapsing all equivalent vertices, with the edges connecting cells sharing at least one common vertex (see Figure 6(b)). The graph is bidirectional in scenarios involving only convex obstacles, except when the obstacles align with the sweeping slice's direction. Conversely, the graph is directed when one vertex shares corners with another, but not vice versa (e.g., when the obstacle is concave, and the sweeping slice passes through its concave part), as illustrated in Figure 6(e).
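As an illustration only (not the actual implementation), the cell connectivity implied by Equation 1 can be encoded directly as a graph over cells that share at least one corner. The function name, cell identifiers, and corner tuples below are hypothetical, and the sketch covers only the undirected case:

```python
# Hypothetical sketch: link two Boustrophedon cells whenever they share a
# corner, which is the equivalence relation of Equation 1 collapsed to cells.
from collections import defaultdict

def build_quotient_graph(cells):
    """cells: dict cell_id -> set of (x, y) corner coordinates."""
    corner_to_cells = defaultdict(set)
    for cell_id, corners in cells.items():
        for corner in corners:
            corner_to_cells[corner].add(cell_id)

    # Two cells are adjacent iff at least one corner is shared.
    adjacency = defaultdict(set)
    for shared_cells in corner_to_cells.values():
        for a in shared_cells:
            for b in shared_cells:
                if a != b:
                    adjacency[a].add(b)
    return adjacency

# Example: cells 0 and 1 share the corners (5, 0) and (5, 4) -> edge 0 <-> 1.
cells = {0: {(0, 0), (5, 0), (0, 4), (5, 4)},
         1: {(5, 0), (9, 0), (5, 4), (9, 4)}}
print(build_quotient_graph(cells))
```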
**Spanning Tree and Topological Sorting** The excavation problem's structure presents practical challenges that prevent a direct formulation as a Travelling Salesman Problem (TSP) or its variants. In this
Figure 4: **(a)** Users use Google Earth Pro to delineate various layers. No-go zones appear in red, dirt dumping restrictions are marked in red, and excavation zones are indicated in violet. Excavation depths are assigned to these polygons. **(b)** The target elevation is color-coded according to the excavation mask layer. Red indicates no-dumping zones, green shows allowed dump sites, blue marks the excavation area boundary, and violet represents the excavation area.
context, multiple cell visits are allowed and might be necessary, but a dug cell becomes unavailable for navigation. Furthermore, starting and ending at the same cell is not a requirement.
We address this problem by constructing a spanning tree of the graph, using post-order traversal to ensure full tree traversal. This strategy enables the excavator to dig the entire area, revisiting undug cells and minimizing driving time by reducing the number of branches in the tree. A tree with no vertex with a branching degree greater than one allows continuous digging. The problem, known as the Minimum Branch Vertices Problem (MBVP), is efficiently solvable via dynamic programming for both directed and undirected graphs (Silvestri et al., 2017). After generating the spanning tree, the sequence of cells to visit is determined through post-order traversal, with the results depicted in Figures 6c and 6f.
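A minimal sketch of the resulting traversal, assuming the minimum branching spanning tree is already available as a child map rooted at the starting cell (all names are illustrative):

```python
# Illustrative only: post-order traversal of the spanning tree yields an
# excavation order in which a cell is dug only after all cells that depend on
# it for access have been dug.
def postorder_cells(tree, root):
    """tree: dict node -> list of child nodes; returns the post-order list."""
    order = []

    def visit(node):
        for child in tree.get(node, []):
            visit(child)
        order.append(node)

    visit(root)
    return order

# Example tree: cell 0 gives access to cells 1 and 3; cell 1 gives access to 2.
tree = {0: [1, 3], 1: [2]}
print(postorder_cells(tree, 0))  # [2, 1, 3, 0]
```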
**Dynamic Programming for Local Coverage** With the sequence of cells determined, the problem now focuses on selecting optimal corners to visit and the associated coverage subroutine. Three potential subroutines connect the vertices of a cell (Figure 7), creating a series of spaced base poses (Figure 8). The distance covered during each routine is the sum of distances between successive base poses, with an extra cost term accounting for the turns, approximating the additional distance needed for maneuvering. Should the robot's footprint collide with any obstacle, the path length is set to infinity; otherwise, distances in undug areas are approximated as the Euclidean distance between points.
The problem of finding the optimal sequence of corners to visit can also be efficiently solved using dynamic programming. The complexity of this approach is \(O(CV^{2})\), where \(C\) represents the number of corners and \(V\) represents the number of subroutines available.
We use the following recurrence relation shown in Equation 2 to solve the problem using dynamic programming.
\[D_{i,c_{k}}=\min_{c_{n},c_{j},l}\left(D_{i-1,c_{n}}+d_{i,l}(c_{n},c_{j})+d_{o} (c_{j},c_{k})\right) \tag{2}\]
In this equation, \(D_{i,c_{k}}\) represents the minimum distance required to reach corner \(c_{k}\) in cell \(i\). The term \(d_{i,l}(c_{n},c_{j})\) denotes the length of the coverage path between corners \(c_{n}\) and \(c_{j}\) in cell \(i\). The coverage path can have a flipped lane in the last position, controlled by the binary variable \(l\). Finally, \(d_{o}(c_{j},c_{k})\) signifies the length of the path between corners \(c_{j}\) of cell \(i\) and \(c_{k}\) in cell \(i+1\).
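The recurrence can be evaluated with a straightforward dynamic program. The sketch below is illustrative only: it assumes a common corner labelling per cell, treats the coverage-length and transfer-length terms \(d_{i,l}\) and \(d_{o}\) as supplied callables (`coverage_len` and `transfer_len` are placeholder names), and omits the backtracking needed to recover the actual corner sequence:

```python
# Minimal dynamic-programming sketch of Equation 2 (cost only, no backtracking).
import math

def optimal_corner_cost(num_cells, corners, coverage_len, transfer_len):
    # D[c] = best cost of arriving at corner c of the next cell.
    D = {c: 0.0 for c in corners}
    for i in range(num_cells):
        D_next = {c: math.inf for c in corners}
        for c_k in corners:                     # entry corner of cell i + 1
            for c_n in corners:                 # entry corner of cell i
                for c_j in corners:             # exit corner of cell i
                    for l in (0, 1):            # optionally flip the last lane
                        cost = D[c_n] + coverage_len(i, c_n, c_j, l) + transfer_len(c_j, c_k)
                        D_next[c_k] = min(D_next[c_k], cost)
        D = D_next
    return min(D.values())

# Toy example with dummy distance terms.
cost = optimal_corner_cost(
    num_cells=3, corners=[0, 1, 2, 3],
    coverage_len=lambda i, a, b, l: abs(a - b) + 0.5 * l,
    transfer_len=lambda a, b: abs(a - b))
```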
Figure 9 shows the solutions to the global coverage problem for the cases we have considered.
Figure 5: **(a)**: Local excavation geometry, showing the five planning areas. \(\theta\) is the workspace angle and has a value of 1.9 rad, \(r_{in}\) and \(r_{out}\) are the digging area inner and outer radii, equal to 7.0 and 4.5 m. The inner and outer radii of the front-left and front-right areas are 7.5 and 3.5 m. **(b)**: coverage paths generated for a rectangular workspace. The green dot represents the starting point of the excavation, and the red cross is the endpoint. The dotted lines show the quickest way to join two excavation lines.
Figure 6: Workspace decomposition and cell visitation workflow; colors have been added only for easier visualization. **Top**: Two examples of Boustrophedon decomposition. The space has been subdivided into different cells, each marked with a different color. **Center**: Corresponding quotient graphs for the two examples. For the left column, the graph is undirected since every cell shares at least one corner with another cell; for the right column, no cell shares a corner with cells 2 and 4, therefore these two cells must be excavated before cells 0 and 5, respectively. The simply connected components of the graph are indicated with different colors. **Bottom**: Resulting minimum branching spanning trees for the above graphs. A dashed edge indicates that the machine moved to the cell, while a solid edge indicates that the machine excavated the cells.
**Outer Optimization** The excavation layer is initially divided into locally connected areas. The global planning problem is solved for each area, and a global path is created by concatenating the paths of these areas based on the solution to a traveling salesman problem. The choice of orientation for each connected area significantly influences the excavation plans, including the number of cells, workspaces, and the path's feasibility.
The best orientation for a given excavation geometry is found by solving the following nonlinear optimization problem:
\[\theta^{*}=\arg\min_{\theta}J(\theta),\qquad J(\theta)=c_{\text{ax}}(\theta-\phi)+c_{p}L_{p}+c_{n}N_{w}+c_{a}A_{c} \tag{3}\]
where \(\phi\) is the main axis of the excavation object, \(L_{p}\) is the length of the coverage path, \(N_{w}\) is the number of workspaces, and \(A_{c}\) is the fraction of covered area. Depending on the excavation type, the user can tune the coefficients accordingly. For example, in the case of trenching, the only coefficient that should be non-zero is the main axis coefficient \(c_{\text{ax}}\) because this creates a coverage plan aligned with the main direction of the trench.
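For illustration, the orientation search of Equation 3 could be evaluated over a sweep of candidate angles. In the sketch below, `plan_for_orientation` is a stand-in for running the inner planning loop, using the absolute deviation from the main axis is an assumption on our part, and the weights simply follow the notation of Equation 3 (their signs are left to the user, as described above):

```python
# Hedged sketch of the outer coverage-orientation search (Equation 3).
import numpy as np

def best_coverage_orientation(plan_for_orientation, phi,
                              c_ax=1.0, c_p=1.0, c_n=1.0, c_a=1.0,
                              num_angles=36):
    best_theta, best_cost = None, np.inf
    for theta in np.linspace(0.0, np.pi, num_angles, endpoint=False):
        L_p, N_w, A_c = plan_for_orientation(theta)   # inner-loop results
        cost = c_ax * abs(theta - phi) + c_p * L_p + c_n * N_w + c_a * A_c
        if cost < best_cost:
            best_theta, best_cost = theta, cost
    return best_theta
```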
### Volumetric Simulation of Earth Movement
This section describes our method to simulate soil movement for system simulation. We represent the soil as a simplified geometric solid volume, excluding dynamics, as our primary focus is excavation planning.
To mimic digging, we track the shovel's path, removing soil without considering the forces on the shovel's edge. Mass conservation is maintained by calculating the volume of soil removed with each scoop and adding an equivalent volume to the dumping location.
Figure 8: Coverage geometry, where the excavator always moves backward, corresponding to the subroutines shown in Figure 7. The worst-case alignment of the local workspace is used to determine the lane spacing.
Figure 7: Three possible subroutines used to cover a cell. **(a)**: alternating lane directions with obstacles lying on only one side. **(b)**: two consecutive lanes having the same direction. **(c)**: coverage with obstacles present on both sides.
Our simulation models the falling soil particles and calculates their ground distribution. This is achieved by dividing the space into "bucket slices," assuming that particles in each slice fall according to a normal distribution independent of one another. The mean of this distribution is the slice's center, and the standard deviation is half the bucket's width. Additionally, we assume that the dumping location is relatively flat, preventing lateral soil sliding post-dumping.
The resulting ground height distribution is described by Equation 4 and Equation 5, where variables are defined as follows: \(\sigma_{x}\) and \(\sigma_{y}\) are half the shovel dimensions in the body frame along the x and y axes, \(\psi\) is the shovel's heading angle in the world frame, \(N\) is the number of discretized elements in the bucket, and \(V_{b}\) is the bucket's volume.
\[h(x)=\frac{V_{b}}{N}\sum_{i=0}^{N}\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp \left(-\frac{1}{2}(x-\mu_{i})^{T}\Sigma^{-1}(x-\mu_{i})\right) \tag{4}\]
\[\Sigma=\begin{bmatrix}\cos(\psi)&-\sin(\psi)\\ \sin(\psi)&\cos(\psi)\end{bmatrix}\begin{bmatrix}\sigma_{x}^{2}&0\\ 0&\sigma_{y}^{2}\end{bmatrix} \tag{5}\]
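A self-contained sketch of Equations 4 and 5 is given below; the placement of the slice centres along the bucket edge and the use of the symmetric rotated covariance \(R\,\mathrm{diag}(\sigma_x^2,\sigma_y^2)\,R^{T}\) are assumptions made for this illustration:

```python
# Illustrative sketch: added ground height after dumping a bucket of volume V_b,
# modelled as a sum of N Gaussian bumps along the bucket edge (Eqs. 4-5).
import numpy as np

def dump_height_field(grid_x, grid_y, bucket_center, psi, V_b,
                      sigma_x=0.4, sigma_y=0.2, N=10):
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    Sigma = R @ np.diag([sigma_x ** 2, sigma_y ** 2]) @ R.T   # assumed symmetric form
    Sigma_inv = np.linalg.inv(Sigma)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(Sigma)))

    # Slice centres spread along the bucket edge (illustrative placement).
    offsets = np.linspace(-sigma_y, sigma_y, N)
    h = np.zeros_like(grid_x, dtype=float)
    for off in offsets:
        mu = np.asarray(bucket_center) + R @ np.array([0.0, off])
        d = np.stack([grid_x - mu[0], grid_y - mu[1]], axis=-1)
        mahal = np.einsum('...i,ij,...j->...', d, Sigma_inv, d)
        h += (V_b / N) * norm * np.exp(-0.5 * mahal)
    return h
```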
In Figure 10, two snapshots of a pit's excavation in our simulated test field are shown. The model's main goal was to expedite the system's overall development through simulation, not to create a realistic representation of soil transport. Validating the model using real-world data is challenging since factors such as water content, soil type, and compaction are not considered.
### Local Excavation Planner
The local excavation planner arranges soil around the excavator's base to fulfill the excavation task. Figure 5(a) illustrates the local excavation geometry, divided into five areas:
* _Front_: an unexcavated digging area.
* _Front-left_ and _front-right_: areas for digging or dumping previously dug soil.
Figure 9: Coverage paths for the cases considered in Figure 6. The excavation for each cell starts at the green dot and ends at the red cross. Solid lines indicate the excavation path, dotted lines indicate the forward orientation of the base along the path navigation to the next line. The cells have been re-indexed to reflect the coverage order.
* _Back-left_ and _back-right_: areas for dumping soil excavated from other regions.
The excavator's rear is kept clear for navigation, enabling operation in constrained spaces. The front areas' sizes can vary with the task.
The planner uses the excavation mask layer in the elevation map, containing constraints and target geometry information.
**Excavation Mask** The updated excavation mask layer guides the choice of dig and dump zones based on user input. It includes:
* _Dig area_ (violet): Identified by a discrepancy between target and current elevations.
* _Permanent dump area_ (green): For permanent disposal of excavated material.
* _Neutral area_ (light blue): An area for digging and temporary material dumping, eventually moving to the permanent dump area.
* _No-go area_ (red): User-defined, no dig or dump area.
* _Boundary area_ (dark blue): Equivalent to a no-go area at the excavation zone's edge, preventing soil spillage.
The neutral area combines user-defined areas, the convex hull of potential future base footprints, and unreachable dump areas from the current machine configuration. It ensures a continuous path to future target poses by keeping material away from this region.
**Dig and Dump Zone Selection** The local planner selects the dig and dump zones by first checking the front zone and then the lateral front zones, based on the following criteria:
1. Less than 10% of elevation map cells deviate more than 0.1 m from the target elevation.
2. Remaining volume to excavate is less than 8% of the total volume.
The front zone's target elevation uses the "desired elevation" layer, while the front-left and front-right zones use the "original elevation" layer. Thresholds for these conditions were determined experimentally for a good balance between accuracy and speed.
The dumping zone is chosen from front-left, front-right, back-left, or back-right zones, considering the lowest dumping cost, computed using Equation 6.
\[C_{D}=\frac{1}{N}\sum_{i=1}^{N}SDF_{dump}(x_{i})+\alpha\|x_{dig}-x_{dump}\|^{2} \tag{6}\]
Figure 10: Elevation maps generated while excavating in simulation. **(a)**: Elevation map at the beginning of the excavation. **(b)**: Elevation map after the excavation is finished.
Here, \(C_{D}\) is the dumping cost, calculated as an average signed distance over the number of cells of the zone (\(N\)) from the closest permanent dump area (\(SDF_{dump}\)) plus a term for the distance between digging (\(x_{dig}\)) and dumping locations (\(x_{dump}\)), weighted by \(\alpha=4.0\). This formulation aims to minimize transport time and number of moves before final disposal.
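A minimal sketch of how Equation 6 could be used to rank candidate dump zones is shown below; the `sdf_dump` lookup and the zone representation are placeholders, and \(\alpha=4.0\) matches the weight quoted above:

```python
# Illustrative ranking of candidate dump zones with the cost of Equation 6.
import numpy as np

def dumping_cost(zone_cells_xy, sdf_dump, x_dig, x_dump, alpha=4.0):
    sdf_term = np.mean([sdf_dump(x, y) for x, y in zone_cells_xy])
    transport_term = alpha * np.sum((np.asarray(x_dig) - np.asarray(x_dump)) ** 2)
    return sdf_term + transport_term

def pick_dump_zone(zones, sdf_dump, x_dig):
    # zones: dict name -> (list of (x, y) cells, zone centre used as x_dump)
    costs = {name: dumping_cost(cells, sdf_dump, x_dig, centre)
             for name, (cells, centre) in zones.items()}
    return min(costs, key=costs.get)
```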
A permanent dump area is accessible if it is within the excavator's reach or further along the path and does not obstruct future targets. The local excavation geometry, shown in Figure 5(a), and the global planner's strategy contribute to this. Several dirt handling scenarios in the single-cell excavation are demonstrated in Figure 12.
The dig and dump selection continues until all zones are excavated, and then the machine transitions to the refinement phase, involving soil grading.
During refinement, the excavator sweeps from left to right in a grading motion to smooth the terrain. The arm's back-and-forth movement levels any unevenness to achieve the desired slope and surface smoothness.
### Arm Trajectory Planner
The arm trajectory planner governs all arm trajectories, such as digging, dumping, and grading motions. It remains agnostic to the specific trajectories and focuses on optimizing the trajectories' parameters or initial conditions. The digging system optimizes trajectories to scoop the maximum volume in the target zone. This work employs parameterized trajectories with inverse kinematics, but the planner is extendable to other low-level controllers.
### Parametrized Digging Trajectories
The digging trajectory is divided into penetration, dragging, and closing. Penetration starts at ground level at an initial position, \(x_{0}\), with the shovel edge moving along the shovel's x-axis, as shown in Figure 12(a) until the target depth is reached. This occurs when the target elevation or maximum depth to the surface is attained. The attitude angle, \(\gamma\), is independent of the soil profile and varies linearly across the workspace, with values summarized in Table 1.
In the dragging phase, the bucket moves radially along the boom, with its angle varying linearly between \(\gamma_{\min}\) and \(\gamma_{\max}\). The dragging motion stops when the bucket is full, a self-collision risk is detected, or it exits the diggable zone. To increase the average volume per scoop, the scooped volume is calculated by integrating the shovel depth, including areas outside the target workspace but still part of the next workspace. As excavation progresses, scoops diminish in size, ensuring maximum soil removal.
During the closing phase, the bucket begins to close, moving radially and vertically until fully closed and at a target height above the soil.
Figure 11: Elevation maps of the terrain during the digging process in simulation. **(a)**: colored based on the excavation mask following the color scheme defined above. **(b)**: colored based on the dumping cost, with red indicating the highest cost and violet indicating the lowest cost.
Figure 12: Excavation mask layer and zone dumping costs for different cases throughout the excavation of a pit in simulation. **(a)**: excavation mask layer. **(b)**: the soil is moved from the front dig area to the front-left dump area. The back-left area has a higher cost because it is further away from the point where the excavator has dug. **(c)**: the first lane is completed, and soil is moved from the front to the back-right zone. The left zones are all inactive as they overlap with already dug areas. The front-right zone has a higher dumping cost since it is further away from the reachable dump zone in the back. **(d)**: dirt is moved from the front and front-right zone to the back-right zone. **(e)**: the permanent dump zone is unreachable, and the dirt is moved from the front, and front left zone to the back-right zone. **(f)**: first two lanes are completed, and multiple permanent dump zones are reachable. The dirt is moved from the front to the front-right or back-left zone, depending on which zone is closer to the digging point.
**Loose Soil Trajectories** Scooping loose soil is challenging as it lacks support from undisturbed soil and can easily displace and accumulate near the machine. The same parametrization is used to scoop loose soil, with different initial attitude angles and radial distance during the closing phase (as shown in Table 1). The initial attitude angle is reduced, and the bucket only moves vertically up during closing, thus reducing the radial forces on the soil, allowing scooping without dragging and spillage.
#### 3.6.1 Digging Planner
The digging planner selects the parameters for the digging trajectory to maximize the expected scooped soil volume in the target digging zone. We use Bayesian optimization because the trajectory parameter space is low-dimensional, we cannot estimate gradients of the objective function, and the objective function is costly to evaluate since the whole trajectory must be simulated. This is particularly important if a different subsystem is used to generate digging trajectories that require more computation time.
In our case, the optimization problem (shown in Equation 7) is reduced to finding the 2D coordinates of the initial position for the digging trajectory.
\begin{table}
\begin{tabular}{l l l} \hline
**Parameter** & **Value** & **Description** \\ \hline \(\gamma_{\min}\) & 0.5 rad & Minimum attitude angle \\ \(\gamma_{\max}\) & 1.5 rad & Maximum attitude angle \\ \(\gamma_{\max\text{-dirt}}\) & 0.9 rad & Maximum attitude angle for loose soil \\ \(d_{\max}\) & 1.5 m & Maximum dragging distance \\ \(h_{\max}\) & 0.3 m & Maximum depth \\ \(V_{b}\) & 0.6 m\({}^{3}\) & Bucket volume \\ \(V_{\max}\) & 0.8 m\({}^{3}\) & Maximum volume \\ \(h_{c}\) & 0.5 m & Height change in the closing motion \\ \(v_{d}\) & 0.5 m/s & Digging velocity \\ \hline \end{tabular}
\end{table}
Table 1: Trajectory parameters
Figure 13: Two examples of digging trajectories on different terrain profiles. **(a)**: typical on undug flat soil. **(b)**: common while digging a pit or trench.
\[\begin{array}{rl}\max_{(r,\theta)}&V_{w}(\tau(r,\theta))\\ \text{subject to}&r_{in}<r<r_{out}\\ &\theta_{min}<\theta<\theta_{max}\end{array} \tag{7}\]
The optimization objective is to maximize the scooped soil volume \(V_{w}(\tau(r,\theta))\) in the workspace over a circular sector defined by radii \(r_{in}\), \(r_{out}\) and angular limits \(\theta_{min}\), \(\theta_{max}\).
We solve this optimization problem using Bayesian optimization with Gaussian processes, expected improvement as the acquisition function, and a custom sampler. The initial points are sampled uniformly in Cartesian space, some Gaussian noise is added, and then the points are transformed to polar coordinates. Since we sample the same space multiple times, the added noise allows the system to explore the dig area better. The Gaussian noise has a standard deviation of one-fourth of the spacing between points. We query the optimizer with 20 randomly selected values and allow ten iterations of refinement for a total of 30 calls to the objective function. The optimizer has been tuned to yield results within an average error of 0.06 m\({}^{3}\) of the actual maximum, approximately 10% of the scooped volume. The grid search optimizer, by comparison, needs 150 samples to achieve the same precision, scaling exponentially if the number of tunable parameters increases. The real and estimated optimization landscapes are compared in Figure 14.
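For illustration only, the same attack-point search can be expressed with scikit-optimize's `gp_minimize`, which combines a Gaussian process surrogate with the expected-improvement acquisition; this is not the custom sampler described above, and `expected_scoop_volume` is a placeholder for simulating the parametrized trajectory on the elevation map:

```python
# Hedged sketch of the attack-point optimization of Equation 7 via skopt.
from skopt import gp_minimize

def plan_attack_point(expected_scoop_volume, r_in, r_out, theta_min, theta_max):
    def objective(x):
        r, theta = x
        return -expected_scoop_volume(r, theta)   # maximize volume -> minimize negative

    result = gp_minimize(objective,
                         dimensions=[(r_in, r_out), (theta_min, theta_max)],
                         n_initial_points=20,      # random exploration
                         n_calls=30,               # 20 initial + 10 refinement calls
                         acq_func="EI")            # expected improvement
    return result.x, -result.fun
```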
#### 3.6.2 Refinement Trajectories
Inspired by expert operators aiming to achieve a final grade of the soil surface, we design refinement trajectories to remove minor imperfections and spillage left from digging. The front digging zone is expanded by 10% in the radial and tangential directions, and any accumulated soil closer to the machine base than the inner radius of the current digging zone is removed in the next workspace. We use only radially inward motions to minimize the risk of pushing soil out of reach.
During arm movement, the shovel edge height and attitude angle are maintained as specified, and the cabin is rotated by an angle equal to the angular dimension of the shovel edge, as shown in Figure 15.
#### 3.6.3 In-Air Motion Trajectories
The arm trajectories for reaching the digging location and dumping soil are created using Hermite splines with a step velocity profile. This approach enables circular arm movement around the base, with the plan computed in cylindrical coordinates by interpolating between the bucket's initial and final positions. The orientation is fixed for trajectories leading the bucket to the digging trajectory's starting point. In contrast,
Figure 14: **(a)**: excavator digging at the first workspace of the pit, indicated in a darker color. The workspace is both constrained radially and tangentially by the edges of the pit. **(b)**: estimated optimization landscape of the expected scooped volume function via a Gaussian process. The dots in the left plot are the points sampled by the optimizer with our custom sampler. **(c)**: optimization landscape of the expected scooped volume function created with full search over the elevation map.
the bucket is kept closed during the dumping trajectory until the final six seconds, when it begins to open. The ground is kept at a minimum distance of 0.3 meters to prevent collisions, as determined by querying the elevation map at various points along the shovel edge. The hierarchical inverse kinematic controller presented in (Jud et al., 2017) tracks the plans.
The dump point is selected by convolving the shovel filter (a 2D projection of the bucket with unit weight) with the dumping zone, followed by a full grid search over the allowed locations to choose the cell with the minimum dumping cost, as defined in Equation 8.
\[C_{dump}(x,y)=\alpha_{d}\sum_{i=1}^{n}\sum_{j=1}^{n}S_{ij}\,h_{dirt}(x+i\Delta s,\,y+j\Delta s)-\beta_{d}\,{}_{B}x_{bd}-\gamma_{d}\,{}_{B}y_{bd} \tag{8}\]
Here, \(h_{dirt}(x,y)\) is the height of the dirt at the dumping location, \(S\) is the shovel filter, and \(\Delta s\) is the grid map's resolution. The coordinates of the dump point in the base frame are given by \({}_{B}x_{bd}\) and \({}_{B}y_{bd}\). The dumping cost, a weighted sum of the dirt height and the distance from the base, is governed by the weights \(\alpha_{d}=1.0\), \(\beta_{d}=0.1\), and \(\gamma_{d}=0.05\). This cost structure promotes uniform dirt spreading and prevents accumulation near the base.
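A sketch of the dump-point selection of Equation 8 follows, assuming the dirt-height map, the base-frame coordinate grids, and the allowed-dump mask are available as arrays; the array names and the shovel footprint size are illustrative:

```python
# Illustrative dump-point selection: convolve the dirt height with a unit
# shovel footprint, add the base-frame distance terms, and take the argmin.
import numpy as np
from scipy.ndimage import convolve

def select_dump_point(h_dirt, dump_mask, x_base_frame, y_base_frame,
                      shovel_shape=(5, 3), alpha_d=1.0, beta_d=0.1, gamma_d=0.05):
    shovel_filter = np.ones(shovel_shape)                    # unit-weight 2D bucket projection
    accumulated = convolve(h_dirt, shovel_filter, mode="nearest")
    cost = alpha_d * accumulated - beta_d * x_base_frame - gamma_d * y_base_frame
    cost = np.where(dump_mask, cost, np.inf)                 # restrict to allowed cells
    return np.unravel_index(np.argmin(cost), cost.shape)
```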
## 4 Navigation Planning
The motion planning module produces safe and efficient paths for an excavator during excavation tasks, utilizing information from an occupancy map and motion constraints.
### Occupancy Map
The motion planning module employs an occupancy map, a 2D representation of the terrain's traversability, to validate sampled states and compute path costs. The map is formed by merging information from:
* An offline traversability map considering slopes and steps.
* User-defined non-accessible zones and manually identified obstacles invisible to LIDAR, like wire fences.
* Online-generated maps marking dug areas and soil piles as non-traversable.
* Dug areas marked as non-traversable to avoid soil deformation and surface quality degradation.
Figure 15(a) depicts an operational example in our test area.
Figure 15: Refinement Trajectory
### Planner
We utilize RRT* implemented in OMPL (Sucan et al., 2012) with a state space defined by Reeds-Shepp curves in SE(2) for local motion planning. This choice accommodates the changing nature of dug areas and soil piles.
The planner produces a sequence of poses, accounting for terrain traversability through an occupancy map comprising offline maps, inaccessible areas, and real-time updates. Excavator clearance is also considered, as loading soil too close to the edge may cause collapses, damaging the machine or affecting the grade. Falling risks are evaluated as well.
The validity of states is assessed through the occupancy map, and costs are calculated using the following:
\[C_{n}(s)=\alpha_{n}\,d(s_{1},s_{2})+\frac{1}{M}\sum_{j=0}^{M}\beta_{n}\exp\!\left(-\gamma_{n}\,sdf_{dug}(f_{s_{2},j})\right) \tag{9}\]
where \(d(s_{1},s_{2})\) is the distance between the current and previous base positions, \(sdf_{dug}(f_{s_{2},j})\) is the SDF value for the \(j\)-th cell of the footprint \(f_{s_{2}}\), and \(M\) is the number of elevation map cells inside the footprint of the robot. The parameters \(\alpha_{n}\), \(\beta_{n}\), and \(\gamma_{n}\) are set to 10.0, 5.0, and 0.7, respectively.
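A minimal sketch of this cost term is shown below, with `sdf_dug` standing in for the signed-distance lookup of the dug areas and the footprint given as sampled cell coordinates:

```python
# Illustrative evaluation of the motion cost of Equation 9.
import numpy as np

def motion_cost(s1, s2, footprint_cells, sdf_dug,
                alpha_n=10.0, beta_n=5.0, gamma_n=0.7):
    distance = np.linalg.norm(np.asarray(s2[:2]) - np.asarray(s1[:2]))
    penalties = [beta_n * np.exp(-gamma_n * sdf_dug(x, y)) for x, y in footprint_cells]
    return alpha_n * distance + np.mean(penalties)
```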
The full pose is obtained by constraining height relative to soil and maintaining gravity alignment (Hutter et al., 2017). Plans are tracked using a pure pursuit controller (Jelavic et al., 2022), with safety measures for deviation from target paths due to slippery or muddy terrain. Figure 16(b) provides an example of a generated plan.
The planner's variable time ranges from 0.5 to 5 seconds for 0.5 to 60-meter plans, utilizing the RRT* algorithm with up to 10 trials. Success was achieved 78.7% of the time in a simulated scenario, and the failure probability is less than \(3e^{-7}\).
### State Machine
The control state machine for the excavator robot is organized as follows:
_Initialize Workspace_: Initializes the workspace by defining the local digging geometry.
_Main Digging Loop_: While the workspace is not complete, the robot repeatedly goes through the following states:
Figure 16: The RRT* uses an occupancy map to check the feasibility of the path. Both images are generated in simulation. **(a)**: the occupancy map is generated by integrating the latest elevation map information. Note how the pit and dumping piles are marked as untraversable. **(b)**: the generated plan is shown in green (forward moving) and red (backward moving). The excavator is able to navigate around the pit and reach the last lane.
_Check Workspace_: Determines if the workspace is complete and finds the next dig and dump point using the local planner.
* _Find Dig Point_: Uses the digging planner to locate the starting point of the scoop and moves the arm to that position.
* _Dig_: Executes the digging motion.
* _Dump_: Identifies the dump point, moves to it, and discards the soil.
_Find Path Plan_: Locates a path plan to the subsequent base pose using the RRT* planner.
_Driving_: Propels the excavator to the designated base pose utilizing the pure pursuit controller.
Note: The control state machine also handles various corner cases separately to ensure robust operation.
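The loop below is a schematic pseudo-implementation of this state machine; every called planner and controller is a placeholder for the corresponding subsystem described earlier, not the actual interface:

```python
# Schematic sketch of the excavation state machine (placeholders throughout).
def run_excavation(base_poses, local_planner, dig_planner, arm, navigator):
    for i, base_pose in enumerate(base_poses):
        local_planner.initialize_workspace(base_pose)        # "Initialize Workspace"

        while not local_planner.workspace_complete():        # "Main Digging Loop"
            dig_zone, dump_zone = local_planner.next_dig_and_dump()   # "Check Workspace"
            attack = dig_planner.find_dig_point(dig_zone)             # "Find Dig Point"
            arm.move_to(attack)
            arm.dig()                                                 # "Dig"
            arm.dump(local_planner.select_dump_point(dump_zone))      # "Dump"

        if i + 1 < len(base_poses):                           # continue to the next pose
            path = navigator.plan_path(base_poses[i + 1])     # "Find Path Plan" (RRT*)
            navigator.drive(path)                             # "Driving" (pure pursuit)
```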
## 5 Experiments
In this section, we provide an overview of the experimental evaluation conducted to assess the performance of our excavation planner. We employ two distinct sets of experiments: simulation-based tests focusing on the global planner's performance in a realistic excavation scenario, and real-world deployment to evaluate the full system's capabilities during the excavation of a prototypical building foundation.
### Global Planner Experiments
We evaluate the global planner on excavation maps generated from the dataset of authentic building silhouettes and urban crop layouts introduced in our contributions.
These building maps vary in size, from 20 m to 100 m on each side. The random crops are even bigger, from 100 m to 1000 m per side. From this data, we generated five types of excavation tasks:
* Foundations: digging the shape of a building with no obstacles.
* Exterior Foundations: digging the inverse shape of a building's foundation, treating the buildings as obstacles.
* Exterior Foundations Traversable: same as above, but here, the building shapes can be crossed or passed through.
* Crops: digging that involves multiple building shapes on one map.
* Exterior Crops: digging the insides of streets and parks, with buildings acting as obstacles.
For the first three tasks, we assumed dirt could be placed anywhere outside the dig area. For the last two tasks, due to their complex nature, we didn't consider how to manage the excavated dirt. Instead, we presumed dirt could be moved off-site from any location using other methods or machines.
We assessed the planner's performance based on four criteria: successful plan rate, path efficiency, workspace efficiency, and digging coverage. A plan is "unsuccessful" if the planner can't process the digging area's shape or can't find a digging solution at all.
The _path efficiency_ indicates how straight and short the system's movement paths are. It's found by adding up the straight-line distances between all digging positions, then dividing by the digging area's size:
\[S_{p}=\sum_{i=0}^{N-1}\frac{\left\|x_{B_{i+1}}-x_{B_{i}}\right\|}{\sqrt{A_{d}}} \tag{10}\]
The _workspace efficiency_ measures how many local working areas are necessary for the planning problem. It's determined by:
\[S_{w}=\frac{N_{w}\cdot A_{w}}{A_{d}} \tag{11}\]
Here, \(A_{w}=\frac{1}{2}\pi R_{\max}^{2}\) represents the reference workspace area. The _digging coverage_ reveals what percentage of the digging area the plan covers.
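These scores are simple to compute from a solved plan; a minimal sketch, assuming Eq. (10) sums the Euclidean distances between consecutive base positions (function and variable names are ours, not from the released code):

```python
import numpy as np

def path_efficiency(base_positions, dig_area):
    """S_p (Eq. 10): total straight-line distance between consecutive base
    positions, normalised by the square root of the dig area A_d."""
    steps = np.diff(np.asarray(base_positions, dtype=float), axis=0)
    return np.linalg.norm(steps, axis=1).sum() / np.sqrt(dig_area)

def workspace_efficiency(n_workspaces, r_max, dig_area):
    """S_w (Eq. 11): number of local workspaces times the reference workspace
    area A_w = 0.5 * pi * R_max^2, divided by the dig area A_d."""
    return n_workspaces * (0.5 * np.pi * r_max**2) / dig_area

def digging_coverage(covered_area, dig_area):
    """Fraction of the requested dig area that the plan actually covers."""
    return covered_area / dig_area
```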
The solved excavation plans for the five different datasets can be seen in Figures 17, 18, 19, 20, and 21.
Tables 2 and 3 show the benchmark scores for the global planner without and with coverage orientation optimization respectively. When not optimizing the coverage angle, it was aligned with the main axis of the excavation area, which already provided a stronger baseline compared to taking the original orientation of the excavation map into account.
For the real-world experiment, the same target height was set for the entire excavation area to maintain a uniform pit floor. This task was chosen as it is typical for excavator operators and requires multiple passes to relocate the dug soil without using a dump truck or wheel loader.
The solution is depicted in Figure 22. Yellow arrows represent base poses, and the excavation mask color scheme is consistent with Figure 12. Only corners 1 and 2 of the four considered starting points yield feasible
Figure 19: Three solved samples from the ”Exterior Foundations Traversable” dataset.
Figure 20: Three solved samples from the ”Crops” dataset.
Figure 18: Three solved samples from the ”Exterior Foundations” dataset.
solutions due to spatial constraints and a manually marked fence, which prevent full pit coverage beginning from the other two corners.
Figure 4 displays the test field's map, created using Open3DSlam, and the user input layers. The machine was operational for 4 hours and 25 minutes to complete the pit, with a time breakdown in Table 4. Figure 23 illustrates the digging stages in the test field, previously shown in simulation in Figure 12. Videos of the excavation stages are available in the supplementary materials.\({}^{1,2}\)
Footnote 1: [https://youtu.be/bWw4RRqz_dM](https://youtu.be/bWw4RRqz_dM)
Footnote 2: [https://youtube.com/playlist?list=PLpldy9k5iMC9mMnRazYzi80Uj-W7N6UQ-](https://youtube.com/playlist?list=PLpldy9k5iMC9mMnRazYzi80Uj-W7N6UQ-)
Evaluating the efficiency involves two metrics: cycle time and volume of soil excavated per time unit. The recorded cycle time - the time needed to dig, dispose of soil, and reposition the arm for subsequent digging - is 32.08 (5.47) seconds, similar to the value of 30 seconds reported for human operators (Sopic et al., 2021). However, directly comparing the system's efficiency to that of a proficient human operator is difficult, because the cycle time is influenced by many variables, including the size and type of excavator used, the properties of the soil, and the employed method of unloading (Litvin and Litvin, 2020); any such comparison therefore requires a nuanced evaluation.
\begin{table}
\begin{tabular}{l c c c} \hline
**Dataset (size)** & \(S_{p}\) & \(S_{w}\) & **Coverage Fraction** \\ \hline Foundations (838) & 3.23 (2.52) & 14.20 (4.09) & 0.944 (0.087) \\ Exterior Foundations (838) & 9.96 (4.60) & 15.03 (1.78) & 0.952 (0.115) \\ Exterior Foundations Traversable (838) & 9.78 (4.24) & 15.35 (1.77) & 0.960 (0.104) \\ Crops (100) & 68.12 (22.57) & 27.95 (3.29) & 0.912 (0.163) \\ Exterior Crops (100) & 105.23 (45.16) & 26.66 (3.51) & 0.782 (0.293) \\ \hline \end{tabular}
\end{table}
Table 2: Benchmark scores for the global planner **without** coverage angle optimization.
\begin{table}
\begin{tabular}{l c c c} \hline
**Dataset (size)** & \(S_{p}\) & \(S_{w}\) & **Coverage Fraction** \\ \hline Foundations (838) & 5.08 (2.51) & 19.89 (4.38) & 0.982 (0.052) \\ Exterior Foundations (838) & 9.87 (5.40) & 13.66 (2.11) & 0.972 (0.072) \\ Exterior Foundations Traversable (838) & 9.64 (4.42) & 13.91 (2.14) & 0.978 (0.041) \\ Crops (100) & 73.79 (19.75) & 29.09 (3.20) & 0.991 (0.028) \\ Exterior Crops (100) & 130.08 (41.67) & 26.45 (3.41) & 0.982 (0.039) \\ \hline \end{tabular}
\end{table}
Table 3: Benchmark scores for the global planner **with** coverage angle optimization. The coverage direction is aligned with the major axis of the excavation area.
Figure 22: The solution to the excavation task. The yellow arrows correspond to the base poses, and the color scheme for the excavation mask is the same as used in Figure 12.
Table 5 details efficiency metrics based on excavation area and modes. The overall times include driving, digging, and refinement. The total volume is determined by comparing the initial map with the map after excavation in each workspace. The volume excavated in the front area represents the amount of undug dirt in the pit, whereas the volume moved in front-side regions corresponds to piles of dirt that need further relocation.
To better understand scooping efficiency, we analyzed the distribution of soil volume dug for each scoop. A total of 380 scoops were executed during the operation, excluding bucket motions used for refining the dig area. The distribution of scoop volume, shown in Figure 24, appears to follow a Gaussian distribution with a mean of 0.45 \(m^{3}\) and a standard deviation of 0.16 \(m^{3}\). The bucket volume is approximately 0.6 \(m^{3}\), but it can
Figure 23: Snapshots of the pit excavation, each image shows a different earthworks planning strategy. **(a)**: terrain before the excavation. **(b)**: Dumping on the left side. **(c)**: the soil is moved from the front of the dig area to the back-left area, which is still inside the excavation area but is closer to its boundary. **(d)**: Moving dirt from front-left to back-left zone. **(e)**: end of the second lane, soil is accumulated behind the first excavated lane. **(f)**: dirt is moved from the front and front-right zone to the back-right zone, which is now reachable. **(g)**: dumping on the right side. **(h)**: final result of the excavation.
be overfilled for more efficient digging. Lower volume scoops are often used to remove the last remaining soil from the dig area or result from constraints on digging geometry, such as being close to the edges of the pit with the bucket.
The excavation precision is summarized in Table 6. The mean level metric represents the average height difference to the desired elevation. Significant inconsistencies in the flatness of the pit's bottom are evident from the values presented. Before refinement, the average absolute error measures around 10 centimeters; refinement reduced it to 7.2 centimeters on average.
Several factors hinder finer precision, such as unidirectional grading, sensor noise from the lidar system, and the global positioning system's height covariance. Moreover, imprecise tracking from using a singular set of low-level PID gains to manage arm velocity causes potential errors in varied arm configurations during digging and grading.
\begin{table}
\begin{tabular}{c c c} \hline
**Condition** & **Mean Level** & **Abs. Err.** \\ \hline Before ref. & -3.5 & 9.7 \\ \hline After ref. & -1.3 & 7.2 \\ \hline \end{tabular}
\end{table}
Table 6: Digging precision statistics in centimeters.
\begin{table}
\begin{tabular}{c c c} \hline
**State** & **Mean Duration (std)** & **Total Duration** \\ \hline Initialize Workspace & 0.45 (0.18) s & 0h 0m \\ \hline Check Workspace & 0.29 (0.13) s & 0h 2m \\ \hline Dig & 14.13 (3.38) s & 1h 51m \\ \hline Arm to Dig Point & 8.04 (3.64) s & 1h 8m \\ \hline Arm to Dump Point and Dump Soil & 9.91 (2.30) s & 1h 1m \\ \hline Find Path Plan & 9.38 (38.20) s & 0h 7m \\ \hline Driving & 34.07 (37.07) s & 0h 12m \\ \hline Retract Arm & 3.96 (1.07) s & 0h 1m \\ \hline
**Total** & - & 4h 25m \\ \hline \end{tabular}
\end{table}
Table 4: Breakdown of operations’ time by state of the excavator
Figure 24: Distribution of the scoops’ volume expressed as a ratio of the bucket volume (0.6 \(m^{3}\)).
\begin{table}
\begin{tabular}{c c c c c} \hline
**Dig Area** & **Duration [h]** & **Mean Scoop Vol. [m\({}^{3}\)]** & **Vol. [m\({}^{3}\)]** & **Dig Eff. [m\({}^{3}\)/h]** \\ \hline Front & 3h 20m & 0.43 & 142.47 & 42.70 \\ \hline Front-Left & 0h 17m & 0.55 & 16.18 & 54.72 \\ \hline Front-Right & 0h 0m & - & - & - \\ \hline Refinement & 0h 24m & - & - & - \\ \hline Overall including Loose Dirt & 4h 25m & - & 158.77 & 35.95 \\ \hline Overall & 4h 25m & - & 142.47 & 32.35 \\ \end{tabular}
\end{table}
Table 5: Breakdown of efficiency metrics
### Different Excavation Geometries
The system generates plans for various excavation geometries, such as carving the letters H, T, and E on the ground. To protect the edges of the letters, a non-permanent dump area surrounds each letter. Figure 25 displays snapshots of the excavation process using the same map as in Figure 12.
### Excavation Constraints
This section provides an overview of different excavation scenarios under varied dumping constraints, shown in the video supplement. In the first scenario, the back of the first lane is designated as a no-dump area. Two feasible solutions exist, starting at either corner 1 or 2 (see Figure 12). The solution starting at corner 2 is selected due to its lower cost and efficient excavation pathway (see Figure 26). In the second scenario, permanent dump areas behind all lanes are disallowed, making the excavation paths from both corners longer. Temporary dumping in the back of the first lane precedes the transfer of soil to permanent dump areas (see Figures 27a and 27b). In the third scenario, areas behind the lanes are assigned as non-dumping zones. Without any accessible dump areas, the excavator ultimately becomes trapped, illustrating the limitations of the current planning method (see Figures 27c and 27d). Finally, in the fourth scenario, the side of the first lane is off-limits for dumping. The excavator is forced to temporarily dump the soil inside the pit, making the corner 2 solution infeasible. The corner 1 solution is preferred, and subsequent soil handling follows standard practices (see Figures 27f and 27e).
Figure 25: Snapshots of the letters’ excavation process in simulation; the target depth is 0.7 meters. Apart from forbidding dumping close to the letters’ edges, no further dumping constraints were added. **(a)**: H letter. **(b)**: T letter. **(c)**: E letter.
Figure 26: The two feasible simulated plans side by side. (a): Plan that starts at corner 1. The extra base pose necessary to complete the workspace is circled. (b): Plan starting at corner 2.
Figure 27: Excavation scenarios: (a) Soil from the central lane is dumped in the light blue area (temporary dump location). (b) Excavating the middle lane, the soil is starting to accumulate at the pit’s edge. (c) Soil has been removed to a permanent dump location. (d) The moment where the local planner failed. There is no space available to dump soil inside the pit, and the dumping area outside the red area is too far away. (e) Simulation of the excavation of a pit with the no-dump area adjacent to the left side of the pit. The soil from the first lane is dumped inside the pit. (f) Soil from the first lane is dumped in a permanent dump area.
## 6 Conclusion and Discussion
This study presents a fully autonomous system for excavation planning and execution. Using a 12-ton excavator, we demonstrate the ability of the system to dig a pit measuring 15.6 x 11.5 x 1 m in a total of 4 hours and 25 minutes. The system has an average cycle time of about 32 seconds and can move 42.7 m\({}^{3}\) of soil per hour, with a final grade error of 7.2 cm on average.
The global planner tackles the excavation problem by finding a set of base poses that allow the excavator to dig through the excavation area. Using boustrophedon decomposition, it first decomposes the excavation site into cells with simple navigation patterns. It then uses a tree search algorithm to find the optimal order of the cells and minimize the total travel cost, ensuring the feasibility of the excavation. The plan is then completed with dynamic programming to determine the start and end points of the excavator's path for each cell. The local excavation planner determines how to move the soil around the excavator for each base pose. It targets moving the dirt from a reachable digging area into an allowed dumping area. The digging planner uses a Bayesian optimizer to choose the parameters of digging trajectories that maximize the scooped volume in the target workspace. The navigation between base poses is executed by an RRT* sampling-based planner, minimizing travel costs. We utilize Google Earth Pro for target geometries, though professional architecture software is a potential future improvement.
The current global planner and simple zig-zag subroutines restrict the range of possible plans. Future work could integrate a system for navigation and local excavation planning, potentially employing reinforcement learning. Challenges include creating an abstract yet practical environment for sim to real transfer.
One limitation of the current digging planner is that it uses a greedy optimization approach, which can result in trajectories that do not minimize the number of scoops required to complete the local workspace. This limitation can be addressed by using a non-greedy optimization approach. Additionally, advances in simulation speed could enable the training of model-free reinforcement learning agents that perform excavation planning based on elevation map data (Rudin et al., 2022). Instead of going end to end, another possibility is to use a lower-level policy, such as the one described in (Egli et al., 2022), and train a higher-level policy that conditions it to dig at target excavation locations. These approaches could lead to more efficient and effective excavation planning in the future.
Overall, addressing these limitations could lead to a more effective and versatile excavation system in the future. |
2304.11975 | MRSN: Multi-Relation Support Network for Video Action Detection | Action detection is a challenging video understanding task, requiring
modeling spatio-temporal and interaction relations. Current methods usually
model actor-actor and actor-context relations separately, ignoring their
complementarity and mutual support. To solve this problem, we propose a novel
network called Multi-Relation Support Network (MRSN). In MRSN, Actor-Context
Relation Encoder (ACRE) and Actor-Actor Relation Encoder (AARE) model the
actor-context and actor-actor relation separately. Then Relation Support
Encoder (RSE) computes the supports between the two relations and performs
relation-level interactions. Finally, Relation Consensus Module (RCM) enhances
two relations with the long-term relations from the Long-term Relation Bank
(LRB) and yields a consensus. Our experiments demonstrate that modeling
relations separately and performing relation-level interactions can achieve and
outperformer state-of-the-art results on two challenging video datasets: AVA
and UCF101-24. | Yin-Dong Zheng, Guo Chen, Minglei Yuan, Tong Lu | 2023-04-24T10:15:31Z | http://arxiv.org/abs/2304.11975v1 | # MRSN: Multi-Relation Support Network for Video Action Detection
###### Abstract
Action detection is a challenging video understanding task, requiring modeling spatio-temporal and interaction relations. Current methods usually model actor-actor and actor-context relations separately, ignoring their complementarity and mutual support. To solve this problem, we propose a novel network called _Multi-Relation Support Network_ (MRSN). In MRSN, Actor-Context Relation Encoder (ACRE) and Actor-Actor Relation Encoder (AARE) model the actor-context and actor-actor relations separately. Then Relation Support Encoder (RSE) computes the supports between the two relations and performs relation-level interactions. Finally, Relation Consensus Module (RCM) enhances the two relations with the long-term relations from the Long-term Relation Bank (LRB) and yields a consensus. Our experiments demonstrate that modeling relations separately and performing relation-level interactions can achieve and outperform state-of-the-art results on two challenging video datasets: AVA and UCF101-24.
Yin-Dong Zheng\({}^{*}\), Guo Chen\({}^{*}\), Minglei Yuan, Tong Lu\({}^{\dagger}\)
National Key Laboratory for Novel Software Technology, Nanjing University
{zhengyd, cg1177, mlyuan}@smail.nju.edu.cn, [email protected]
Keywords: Video Understanding, Action Detection
Footnote †: This work is supported in part by the National Natural Science Foundation of China (Grant No. 61672273, 61832008).
## 1 Introduction
Spatio-temporal action detection is a challenging task that involves locating the spatial and temporal positions of actors, as well as classifying their actions. In order to accurately detect and classify actions, spatio-temporal action detection algorithms must take into account not only the human pose of actors but also their interactions with the surrounding context and other actors. As such, the ability to model and handle actors' interactions in the spatial and temporal dimensions is a key factor of these algorithms.
In AVA [1], actions are divided into three broad categories: pose, actor-context action, and actor-actor action. For actor-context and actor-actor actions, current methods [1; 2; 3; 4] usually model the two relations separately and then fuse them. We argue that human action semantics are complex and comprise multiple interaction relations. Separately modeling these two relations ignores complementarity and dependencies between them, making it difficult to extract higher-level interaction semantics effectively. For example, as shown in Figure 1, we need actor-actor relations for two actors in a car to distinguish who is driving and who is riding; similarly, distinguishing "sing to a person" and "talk to a person" also relies on actor-context relation. Therefore, complex relation modeling relies on adaptive complementation and support among multi-relations, which requires not only actor-context and actor-actor interactions but also relation-level interactions.
To tackle the above issues, we propose _Multi-Relation Support Network_ (MRSN), which focuses on complex relation modeling for action detection. MRSN takes context and actor features as input. First, Actor-Context Relation Encoder (ACRE) and Actor-Actor Relation Encoder (AARE) capture the two relations by tokenizing context and actor features and feeding them into the Transformer encoders. Then, the Relational Support Encoder (RSE) computes the relational support and enriches the details of the two relations. Finally, Relation Consensus Module (RCM) introduces temporal support information from the Long-term Relation Bank (LRB) and fuses multiple relations to obtain results.
Figure 1: **Left:** It is difficult to distinguish driver and passenger by separately reasoning the relations between actors and steering wheel. Actor-actor relation supports the distinction between these actions. **Right:** In both videos, A opens his mouth while B listens. Only actor-context relation between the guitar and A reveals A is singing to B.
MRSN is validated by extensive experiments conducted on AVA and UCF101-24. Our contributions are summarized as follows: 1) We propose the concept of relational support and relation-level interaction for actor-context and actor-actor to enrich the details of the two relations. 2) We propose MRSN and introduce LRB to achieve multi-relation and long-short-term relation consensus. 3) MRSN achieves state-of-the-art performance on benchmark datasets.
## 2 Related Works
**Action Recognition.** Action recognition aims to classify the actions in the video and can be mainly divided into 2D CNNs and 3D CNNs. 2D CNN [5, 6, 7] uses a 2D convolutional network to extract spatial features while modeling temporal relation by inserting temporal inference modules or extra optical flow. 3D CNN [8, 9, 10] uses 3D convolution to extract features in both temporal and spatial dimensions. To optimize the backbone, [11, 12] employ training strategies, such as reinforcement learning, as well as generative and discriminative self-supervised learning. We use SlowFast [10], which jointly extracts features through the spatial high-resolution slow and low-resolution fast pathways, as the backbone.
**Spatio-Temporal Action Localization.** Spatio-temporal action detection locates actors and classifies their actions. Some methods [13, 14] use a single backbone for both tasks. Most current state-of-the-art methods [1, 10, 15, 3, 4, 16] use a two-backbone pipeline with multi-stage or end-to-end strategy. This pipeline decouples human detection and action classification, making the model more flexible and suitable for actor-context and actor-actor interaction. MRSN adopts this two-backbone pipeline.
**Relational Reasoning and Attention Mechanism.** Relational reasoning and attention mechanisms play important roles in video understanding. Non-local [17] inserts non-local blocks into the backbone, which compute the response by relating features at different times or spatial positions. LFB [2] applies the non-local operator to long-term feature banks. [3, 4] use self- and cross-attention mechanisms to achieve relational reasoning between different instances. With the development of the Transformer [18], [19, 20] use the Transformer encoder to replace the convolution modules in traditional CNN backbones and extract relations between pixels. All modules in MRSN are implemented with Transformers.
## 3 Multi-Relation Support Network
In this section, we describe the Multi-Relation Support Network (MRSN) in detail. As shown in Figure 2, given a clip, an off-the-shelf person detector detects persons in the keyframe and generates candidate boxes, and the backbone extracts a 3D feature map from the clip. Then, the 3D feature map is temporally pooled to a 2D feature map \(F\in\mathbb{R}^{C\times w\times h}\), and RoI [21] features \(A^{1},A^{2},\ldots,A^{N}|A^{n}\in\mathbb{R}^{C\times 7\times 7}\) are extracted for each actor box using RoIAlign [22], where \(N\) is the number of actor boxes. After that, MRSE models interaction relations using the feature map and RoI features as input. Finally, the interaction relation features are fed to an action classifier for multi-label action prediction.
### Multi-Relation Support Encoder
The Multi-Relation Support Encoder (MRSE) extracts complex interaction relations. It models actor-context and actor-actor relations separately but also interacts with them at the relation level to enrich their semantics. MRSE consists of three sub-encoders: Actor-Context Relation Encoder (ACRE), Actor-Actor Relation Encoder (AARE), and Relation Support Encoder (RSE). ACRE and AARE model actor-context and actor-actor relations, respectively. RSE computes relational supports for these relations, which are then fed to subsequent modules for relation-level interaction and semantic complement. MRSE has a flexible structure, allowing it to be stacked and inserted into other detection heads.
#### 3.1.1 Actor-Context Relation Encoder
Actor-context interactions can be realized in several ways. ACRN [15] cuts the context feature into a grid, replicates and concatenates the actor feature to all spatial locations, then uses convolution to make the actor interact with context. Inspired by ACRN's [15] feature gridding and ViT's [23] image splitting, we propose Actor-Context Relation Encoder (ACRE), which splits and embeds feature maps into patch sequences to achieve actor-context interactions in the global receptive field.
**Patch Embedding.** For the context feature map \(F\in\mathbb{R}^{C\times w\times h}\), since patch embedding requires feature maps of all clips to have the same shape, we first adaptively pool the feature map to \(S\times S\), namely \(F^{\prime}\in\mathbb{R}^{C\times S\times S}\). Then, we cut \(F^{\prime}\) into \(L=\frac{S}{p}\times\frac{S}{p}\) pieces of \(p\times p\) patches \(\{P_{1},\ldots,P_{L}|P_{i}\in\mathbb{R}^{C\times p\times p}\}\) without overlapping. The width of the patch \(p\) can be any factor of \(S\), and in our implementation, we set \(S=16,p=2\). After that, we separately apply a Linear Projection on the RoI feature \(A^{n}\) and the patch sequence \(\{P_{1},\ldots,P_{L}\}\) to obtain the token sequence \(\{R_{A},R_{1},\ldots,R_{L}|R\in\mathbb{R}^{1\times d}\}\), where \(R_{A}\) indicates the actor token and \(\{R_{1},\ldots,R_{L}\}\) indicates the context token sequence. \(d\) is the embedding dimension, set to 512 in the experiments. To retain context positional information and distinguish actor and context, an additional learnable actor embedding \(E_{A}\) and positional embeddings \(\{E_{i}|i=1,\ldots,L\}\) are added to \(R_{A}\) and \(\{R_{1},\ldots,R_{L}\}\) respectively. For each actor in \(\{A^{1},\ldots,A^{N}\}\), we construct a total of \(N\) token sequences \(T=\{R_{A}^{n},R_{1},\ldots,R_{L}\}_{n=1}^{N}\). The entire embedding
procedure is shown in Eqn.1.
\[\begin{split}&\{R_{1},\ldots,R_{L}\}=\text{PatchEmbedder}(F)\\ &\{R_{A}^{1},\ldots,R_{A}^{N}\}=\text{ActorEmbedder}(\{A^{1},\ldots,A^{N}\})\\ & T^{n}=\left[R_{A}^{n},R_{1},\ldots,R_{L}\right]+\left[E_{\text{A}},E_{1},\ldots,E_{L}\right],\\ & n=1,\ldots,N.\end{split} \tag{1}\]
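A schematic PyTorch rendering of this embedding step is given below; the module names, the use of adaptive average pooling, and the flattening of the 7x7 RoI feature reflect our reading of the text rather than the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as nnf

class ActorContextEmbedder(nn.Module):
    def __init__(self, in_channels, d=512, S=16, p=2):
        super().__init__()
        self.S, self.p = S, p
        L = (S // p) ** 2
        self.patch_proj = nn.Linear(in_channels * p * p, d)    # context token projection
        self.actor_proj = nn.Linear(in_channels * 7 * 7, d)    # actor (RoI) token projection
        self.pos_embed = nn.Parameter(torch.zeros(1, L, d))    # E_1 ... E_L
        self.actor_embed = nn.Parameter(torch.zeros(1, 1, d))  # E_A

    def forward(self, fmap, rois):
        # fmap: (C, w, h) context feature map; rois: (N, C, 7, 7) actor RoI features
        fmap = nnf.adaptive_avg_pool2d(fmap, self.S)                        # (C, S, S)
        patches = fmap.unfold(1, self.p, self.p).unfold(2, self.p, self.p)  # (C, S/p, S/p, p, p)
        patches = patches.permute(1, 2, 0, 3, 4).reshape((self.S // self.p) ** 2, -1)
        ctx = self.patch_proj(patches).unsqueeze(0) + self.pos_embed        # (1, L, d)
        act = self.actor_proj(rois.flatten(1)).unsqueeze(1) + self.actor_embed  # (N, 1, d)
        n = rois.shape[0]
        return torch.cat([act, ctx.expand(n, -1, -1)], dim=1)               # N sequences T^n, length 1+L
```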
**Actor-Context Relation Encoding.** For the token sequence \(T^{n}=\{R_{A}^{n},R_{1},\ldots,R_{L}\}\) of length \(L+1\), we feed it into a Transformer encoder [18] with multi-head self-attention (MSA) to make actor token interact with context tokens. The encoding procedure can be formalized as follows:
\[\begin{split} X_{0}&=T^{n}\\ X_{k}^{\prime}&=\text{LN}(\text{MSA}(X_{k-1}))+X_{k-1}\\ X_{k}&=\text{LN}(\text{FFN}(X_{k}^{\prime}))+X_{k}^{\prime},\qquad k=1,\ldots,M_{\text{ACRE}}\end{split} \tag{2}\]
where LN is the LayerNorm layer, and FFN is the Feed Forward Layer using GELU as non-linearity. \(M_{\text{ACRE}}\) is the number of the encoder.
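Each such layer follows the residual pattern of Eq. (2), with LayerNorm applied to the attention/FFN output before the residual addition; a sketch (the FFN width is an assumption, as the paper does not state it):

```python
import torch.nn as nn

class RelationEncoderBlock(nn.Module):
    """One encoder layer: X' = LN(MSA(X)) + X ;  X = LN(FFN(X')) + X'  (Eq. 2)."""
    def __init__(self, d=512, heads=8, ffn_mult=4):
        super().__init__()
        self.msa = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d)
        self.ln2 = nn.LayerNorm(d)
        self.ffn = nn.Sequential(nn.Linear(d, ffn_mult * d), nn.GELU(),
                                 nn.Linear(ffn_mult * d, d))

    def forward(self, x):                 # x: (N, 1+L, d) token sequences T^n
        attn_out, _ = self.msa(x, x, x)
        x = self.ln1(attn_out) + x
        return self.ln2(self.ffn(x)) + x
```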
#### 3.1.2 Actor-Actor Relation Encoder
Prior work often pairs RoI features of all actors to extract actor-actor relations or uses self-attention without positional information. We argue that positional information between actors is beneficial for relation modeling, so we add positional embedding to actor tokens and use the same Transformer Encoder as in Section 3.1.1 to extract actor-actor relations.
**Actor Positional Embedding.** We use the same splitting method as in Section 3.1.1 for the space, that is, the space is split into \(L=\frac{S}{p}\times\frac{S}{p}\) patches without overlapping, and we construct \(L\) learnable positional embeddings \(\{E_{i,j},i\in\{1,\ldots,\frac{S}{p}\},j\in\{1,\ldots,\frac{S}{p}\}\}\). Then, for each actor box, we calculate its center point, search which patch its center point belongs to, and add the corresponding positional embedding to its token. Specifically, given an actor box \(B^{n}=\{x_{1}^{n},y_{1}^{n},x_{2}^{n},y_{2}^{n}|x,y\in[0,1]\}\), where \((x_{1}^{n},y_{1}^{n})\) and \((x_{2}^{n},y_{2}^{n})\) are the coordinates of the upper left and lower right corners respectively, we assign its positional embedding \(E_{A}^{n}\) by Eqn.3.
\[\begin{split} i^{n}=\left\lceil\frac{(x_{1}^{n}+x_{2}^{n})}{2}\cdot\frac{S}{p}\right\rceil\quad j^{n}=\left\lceil\frac{(y_{1}^{n}+y_{2}^{n})}{2}\cdot\frac{S}{p}\right\rceil\\ E_{A}^{n}=E_{i^{n},j^{n}}\end{split} \tag{3}\]
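In code, this assignment maps the normalised box centre onto the \((S/p)\times(S/p)\) grid of positional embeddings; a sketch (we scale by \(S/p\) so that the index spans the full grid, and clamp to guard against centres lying exactly on a boundary):

```python
import math

def actor_positional_index(box, S=16, p=2):
    """box = (x1, y1, x2, y2), coordinates normalised to [0, 1].
    Returns the (i, j) cell of the (S/p) x (S/p) grid whose embedding E_{i,j}
    is added to the actor token (1-based, as in Eq. 3)."""
    cells = S // p
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    i = min(max(math.ceil(cx * cells), 1), cells)
    j = min(max(math.ceil(cy * cells), 1), cells)
    return i, j
```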
Finally, the sequence of actor tokens with positional embeddings is fed into the Transformer encoder for self-attention computation of the actor-actor relation.
\[\begin{split} Y_{0}&=\left[R_{A}^{1},\ldots,R_{A}^{N }\right]+\left[E_{A}^{1},\ldots,E_{A}^{N}\right]\\ Y_{k}^{\prime}&=\text{LN}(\text{MSA}(Y_{k-1}))+Y_{k-1} \\ Y_{k}&=\text{LN}(\text{FFN}(Y_{k}^{\prime}))+Y_{k}^ {\prime},\qquad k=1,\ldots,M_{\text{AARE}}\end{split} \tag{4}\]
where \(M_{\text{AARE}}\) is the number of AARE in each MRSE.
#### 3.1.3 Relation Support Encoder
After modeling the actor-context and actor-actor relations, RSE computes supports between the two relations and further makes them interact at the relation level to enrich their semantics. Inspired by CrossViT [24], which performs cross-attention on cross-domain information, we use a Bidirectional Cross-attention Transformer to perform relation-level interaction.
Specifically, given the outputs \(X\) and \(Y\) from ACRE and AARE, we perform bidirectional cross-attention on each actor's \(X^{n}\) and \(Y^{n}\). The whole procedure is shown in Eqn.5.
\[\begin{split} X_{0}^{n}&=X^{n},\qquad Y_{0}^{n}=Y^{n}\\ X_{k}^{\prime}&=\text{LN}(\text{MCA}_{X}(X_{k-1},Y_{k-1}))+X_{k-1}\\ Y_{k}^{\prime}&=\text{LN}(\text{MCA}_{Y}(Y_{k-1},X_{k-1}))+Y_{k-1}\\ X_{k}&=\text{LN}(\text{FFN}_{X}(X_{k}^{\prime}))+X_{k}^{\prime}\\ Y_{k}&=\text{LN}(\text{FFN}_{Y}(Y_{k}^{\prime}))+Y_{k}^{\prime},\qquad k=1,\ldots,M_{\text{RSE}}\end{split} \tag{5}\]
where \(M_{\text{RSE}}\) is the number of RSE and MCA is the multi-head cross-attention. The implementation of MCA is shown
Figure 2: **Multi-Relation Support Network (MRSN).** The Actor-Context Relation Encoder and Actor-Actor Relation Encoder model the actor-context and actor-actor relations. Then, Relation Support Encoder computes relational support between these relations and makes them interact on the relation level. Finally, the enhanced relations are fused with the long-term relations from the Long-term Relation Bank, yielding a long-short-term relation consensus used for classification.
in Figure 3 and Eqn.6.
\[\begin{split}&\text{MCA}(X,Y)=\text{Concat}(\text{head}_{1}, \dots,\text{head}_{h})W^{O}\\ &\text{head}_{i}=\text{Softmax}(\frac{(XW_{i}^{Q})(YW_{i}^{K})^{T} }{\sqrt{d_{k}}})YW_{i}^{V}\end{split} \tag{6}\]
where \(W_{i}^{Q}\), \(W_{i}^{K}\), \(W_{i}^{V}\), and \(W^{O}\) are the weights of linear transforms, \(h\) is the number of heads, and \(d_{k}\) is the dimension of each head.
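Eqs. (5) and (6) amount to two standard multi-head cross-attention branches, one with the actor-context tokens as queries and one with the actor-actor tokens as queries; a sketch built on `nn.MultiheadAttention` (our own wrapper, not the authors' code, with an assumed FFN width):

```python
import torch.nn as nn

class RelationSupportBlock(nn.Module):
    """One RSE step (Eq. 5): bidirectional cross-attention between the
    actor-context tokens X (N, 1+L, d) and the actor-actor tokens Y (N, 1, d)."""
    def __init__(self, d=512, heads=8, ffn_mult=4):
        super().__init__()
        self.mca_x = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mca_y = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ln = nn.ModuleList([nn.LayerNorm(d) for _ in range(4)])
        self.ffn_x = nn.Sequential(nn.Linear(d, ffn_mult * d), nn.GELU(), nn.Linear(ffn_mult * d, d))
        self.ffn_y = nn.Sequential(nn.Linear(d, ffn_mult * d), nn.GELU(), nn.Linear(ffn_mult * d, d))

    def forward(self, x, y):
        x_att, _ = self.mca_x(query=x, key=y, value=y)   # X attends to Y
        y_att, _ = self.mca_y(query=y, key=x, value=x)   # Y attends to X
        x_mid = self.ln[0](x_att) + x                    # X'_k
        y_mid = self.ln[1](y_att) + y                    # Y'_k
        x_out = self.ln[2](self.ffn_x(x_mid)) + x_mid    # X_k
        y_out = self.ln[3](self.ffn_y(y_mid)) + y_mid    # Y_k
        return x_out, y_out
```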
### Relation Consensus Module
The Relation Consensus Module (RCM) fuses the actor-context relation \(X\) and the actor-actor relation \(Y\), then yields classification results. LFB [2] shows that long-term information benefits action classification, so we design two RCMs for short-term (clip) and long-term (clip window) relation consensus.
#### 3.2.1 Short-Term Relation Consensus
Short-term RCM (RCM\({}_{\text{S}}\)) only yields a consensus of two relations in a single clip. Given actor-context relation \(X^{n}=\{X_{A}^{n},X_{1}^{n},\dots,X_{L}^{n}\}\) and actor-actor relation \(Y^{n}\), where \(X_{A}^{n}\) is the actor token and \(\{X_{1}^{n},\dots,X_{L}^{n}\}\) are the context tokens, RCM\({}_{\text{S}}\) performs average pooling on context tokens to obtain context feature \(X_{C}\). Then, \(X_{A}^{n}\), \(X_{C}^{n}\), and \(Y^{n}\) are concatenated and fed to an MLP for action classification. The short-term relation consensus for actor \(n\) can be formulated as Eqn.7.
\[\begin{split} X_{C}^{n}&=\text{AvgPool}([X_{1}^{n}, \dots,X_{L}^{n}])\\ G^{n}&=\text{Concat}(X_{A}^{n},X_{C}^{n},Y^{n})\\ Z^{n}&=\text{MLP}(G^{n})\end{split} \tag{7}\]
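A compact sketch of Eq. (7) follows; the width and depth of the classification MLP, and the output dimension (e.g. the AVA label set), are placeholders since the paper does not specify them:

```python
import torch
import torch.nn as nn

class ShortTermRCM(nn.Module):
    def __init__(self, d=512, num_classes=80):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, num_classes))

    def forward(self, x, y):
        # x: (N, 1+L, d) actor-context tokens (actor token first); y: (N, d) actor-actor features
        x_a = x[:, 0]                         # actor token X_A^n
        x_c = x[:, 1:].mean(dim=1)            # averaged context tokens X_C^n
        g = torch.cat([x_a, x_c, y], dim=-1)  # consensus feature G^n
        return self.mlp(g)                    # per-actor multi-label logits Z^n
```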
#### 3.2.2 Long-Term Relation Consensus
To better make short-term information interact with long-term information, we propose Long-term RCM (RCM\({}_{L}\)) and Long-term Relation Bank (LRB). First, we train a separate MRSN with RCM\({}_{S}\). Then, given a video, we uniformly sample clips of length 2s with the stride 1s and send these clips to the trained MRSN. For the output of MRSE, we use the same operation as RCM\({}_{S}\) to generate and save feature \(\{G_{j}\}_{j=1}^{M}\) for each clip, where \(M\) is the number of actor boxes in each clip. We name the \(\{G_{j}\}_{j=1}^{M}\) of all clips in the video as Long-term Relation Bank, and to avoid confusion, we denote it as \(\{\overline{G_{j}}\}_{j=1}^{M}\).
After that, we set a temporal window of size \([t-\omega,t+\omega]\) and take out all clip features in the window from the feature bank to obtain the long-term support features \(H_{t}=\{\{\overline{G_{j}}\}_{j=1}^{M}\}_{t-\omega}^{t+\omega}\) for clip \(t\). We give each support feature \(\overline{G}\) a learnable distance embedding according to the distance between the clip it belongs to and the clip \(t\). Finally, we take \(\{G_{n}\}_{n=1}^{N}\) of clip \(t\) as Query, \(H_{t}\) as Key and Value, and input them into a Cross-attention Transformer encoder to achieve long-term relation consensus. The final output is fed into an MLP for action classification. Since RCM\({}_{L}\) is followed by an MLP, we remove the Feed Forward Layer in the Transformer Encoder to save computational overhead. The Eqn.8 formalizes this procedure.
\[Z=\text{MLP}(\text{LN}(\text{MCA}(\{G_{n}\}_{n=1}^{N},H_{t}+E_{\text{dist}})) +\{G_{n}\}_{n=1}^{N}) \tag{8}\]
where \(E_{\text{dist}}\) is the distance embeddings, and MCA is the same as in Eqn.6 and Figure 3.
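The bank lookup itself is simple; a sketch of gathering the support features \(H_{t}\) and their temporal offsets (which index the learnable distance embeddings \(E_{\text{dist}}\)) for the clip at time \(t\), assuming the bank is keyed by clip timestamp:

```python
import torch

def gather_long_term_support(bank, t, omega=10):
    """bank: dict mapping clip time (seconds) -> tensor (M, D) of saved features G.
    Returns H_t, the stacked support features in [t - omega, t + omega], together
    with each feature's temporal offset from t (used to select E_dist)."""
    feats, offsets = [], []
    for dt in range(-omega, omega + 1):
        if t + dt in bank:
            g = bank[t + dt]
            feats.append(g)
            offsets.append(torch.full((g.shape[0],), dt, dtype=torch.long))
    return torch.cat(feats, dim=0), torch.cat(offsets, dim=0)
```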
## 4 Experiments
### Datasets
We evaluate MRSN on AVA and UCF101-24. AVA [1] has 430 videos split into 235, 64, and 131 for training, validating, and testing. UCF101-24 is a subset of UCF101 with 3,207 videos of 24 sports classes. Following standard protocol, we evaluate models with frame-mAP by frame-level IoU threshold 0.5.
### Implementation Details
**Person Detector.** For person detection on keyframes, we follow previous works [1; 10] which use the person boxes obtained by the off-the-shelf person detector. For AVA, the detector is a Faster R-CNN with ResNeXt-101-FPN [25; 26] backbone, pre-trained on ImageNet [27] and the COCO [28] human keypoint images, then finetuned on the AVA dataset. For UCF101-24, we use the person detector from [16].
**Backbone and MRSM** We choose the SlowFast [10] network as the backbone and increase the spatial resolution of res5 by \(2\times\). SlowFast's Slow pathway, named SlowOnly R-50 \(16\times 4\), is used for the ablation study, and SlowFast R-50 & R-101 for state-of-the-art methods comparison. For AVA, the input of the backbone is a 64-frame clip, and the slow pathway samples \(T=8\) frames with the temporal step \(\tau=8\), while the fast pathway samples \(\alpha T(\alpha=4)=32\) frames. For UCF101-24, we use SlowFast R-50 \(8\times 4\) as the backbone, and the input is a 32 continuous frames clip. The SlowOnly R-50 and
Figure 3: **Multi-head Cross-attention (MCA) module.**
SlowFast R-50 are pre-trained on Kinetics-400 [29] and SlowFast R-101 is pre-trained on Kinetics-700. For ACRE, AARE, RSE, and \(\text{RCM}_{\text{L}}\), the channels are set to 1024, and the number of heads is set to 8. For \(\text{RCM}_{\text{L}}\), we set \(\omega=10\), which means using the relations from 10 seconds before to 10 seconds after the clip, i.e. a window size of 21 seconds.
**Training and Inference.** We train our models using the SGD optimizer with a batch size of 16, a weight decay of \(10^{-7}\), a momentum of 0.9, and a linear warm-up strategy. For AVA, the models are trained for 6 epochs with a learning rate of 0.04, which is decreased by a factor of 10 at epochs 5.6 and 5.8. During training and inference, we scale the shorter side of the input to 256 pixels. In training, we use ground-truth boxes and proposals with an IoU greater than 0.75 as training samples. In inference, person boxes with scores greater than 0.85 are used for action classification. For UCF101-24, the models are trained for 5 epochs, with the learning rate decreased at epochs 4.8 and 4.9.
### Comparison with the State-of-the-art Methods
We compare MRSN with state-of-the-art methods on the validation set of AVA v2.1 (Table 1) & v2.2 (Table 2) and the UCF101-24 split-1 (Table 3). LFB, AIA, ACAR, and MRSN all use SlowFast as the backbone. \(\text{MRSN}_{\text{S}}\) and \(\text{MRSN}_{\text{L}}\) indicate MRSN equipped with \(\text{RCM}_{\text{S}}\) and \(\text{RCM}_{\text{L}}\). Since the videos in UCF101-24 are very short, we only report the result for \(\text{MRSN}_{\text{S}}\). For a fair comparison, we do not use multi-model ensemble or multi-scale augmentation. All results are based on single models and scale 256.
**AVA.** On AVA v2.1, compared to baseline SlowFast, R-50, \(\text{MRSN}_{\text{S}}\), R-50 improves the mAP by 2.4. \(\text{MRSN}_{\text{L}}\) with Long-term Relation Bank outperforms all state-of-the-art models. Although \(\text{MRSN}_{\text{L}}\) is only 0.1 higher than ACAR, the amount of parameters and computation of MRSN is only half of that in ACAR. On AVA v2.2, \(\text{MRSN}_{\text{L}}\) with backbone SlowFast R-50 pre-trained on Kinetics-400 is higher than SlowFast R-101 pre-trained on Kinetics-600. \(\text{MRSN}_{\text{L}}\), R-101 is 1.2 higher than AIA, R-101. It is worth noting that AIA uses an object detector for object detection, which means an extra instance detector and more computational overhead.
**UCF101-24.** On UCF101-24, MRSN outperforms all methods (including two-stream methods). Since UCF101-24 has less than 1.4 actors per video on average, action classification does not rely on actor-actor interaction, mainly evaluating actor-context modeling ability. MRSN's mAP is 1.6 higher than AIA using an additional object detector, demonstrating its strong ability to capture actor-context interactions.
### Ablation Study
We conduct ablation studies on AVA v2.2 using SlowOnly as the backbone. We set the channel dimension of all encoders to 1024 and the number of heads in the Transformer to 8. We only do component analysis in the paper and put the rest of the ablation studies in the supplementary material.
To validate the effectiveness of the MRSN design, we ablated each component of the MRSN. As shown in Table 4, the
\begin{table}
\begin{tabular}{l c|c} model & flow & mAP \\ \hline ACRN [15] & ✓ & 17.4 \\ SlowFast, R-50 [10] & & 24.8 \\ SlowFast, R-101 [10] & & 26.3 \\ LFB, R-50 [2] & & 25.8 \\ LFB, R-101 [2] & & 26.8 \\ ACAR, R-50 [4] & & 28.3 \\ \hline \(\text{MRSN}_{\text{S}}\), R-50 & & 27.2 \\ \(\text{MRSN}_{\text{L}}\), R-50 & & **28.4** \\ \end{tabular}
\end{table}
Table 1: Comparison with state-of-the-art methods on AVA v2.1. ✓indicates the method uses optical flow as extra inputs.
\begin{table}
\begin{tabular}{l c c|c} model & pretrain & mAP \\ \hline SlowOnly, R-50 [10] & K400 & 20.9 \\ SlowFast, R-50 [10] & K400 & 25.6 \\ SlowFast, R-101 [10] & K600 & 29.1 \\ AIA, R-50 [3] & K700 & 29.8 \\ AIA, R-101 [3] & K700 & 32.3 \\ ACAR, R-101 [4] & K700 & 33.3 \\ \hline \(\text{MRSN}_{\text{S}}\), R-50 & K400 & 28.3 \\ \(\text{MRSN}_{\text{L}}\), R-50 & K400 & 29.2 \\ \(\text{MRSN}_{\text{S}}\), R-101 & K700 & 32.1 \\ \(\text{MRSN}_{\text{L}}\), R-101 & K700 & **33.5** \\ \end{tabular}
\end{table}
Table 2: Comparison with state-of-the-art methods on AVA v2.2.
\begin{table}
\begin{tabular}{l c|c} model & flow & mAP \\ \hline ACT [14] & & 69.5 \\ STEP, I3D [30] & ✓ & 75.0 \\ C2D [3] & & 75.5 \\ Gu _et al._[1] & ✓ & 76.3 \\ AIA, R-50 [3] & & 78.7 \\ S3D-G [9] & ✓ & 78.8 \\ \hline MRSN\({}_{\text{S}}\), R-50 & & **80.3** \\ \end{tabular}
\end{table}
Table 3: Comparison with state-of-the-art methods on UCF101-24 split1. ✓indicates extra input optical flow.
performance of AARE is worse than ACRE because AARE only takes RoI features as input, and the lack of context information will seriously impair the classification of actor-context actions. Although ACRE only performs actor-context interaction, the context features contain actors' features, which helps classify actor-actor actions. Compared with single relation modeling, the fusion of the two relations has achieved significant performance improvements. RSE brings a performance improvement of 0.32, demonstrating the effectiveness and importance of relation-level interaction. LRB can additionally improve mAP by 1.01, confirming that the integration of long-term information is beneficial to action detection.
## 5 Conclusion
In this paper, we analyzed the complexity of action relation semantics and proposed Multi-Relation Support Network (MRSN). MRSN enhances and fuses actor-context and actor-actor relations by modeling relations separately and performing relation-level interaction. We also proposed a Long-term Relation Bank (LRB) to integrate the long-term video information. Experiments and ablation studies validated the effectiveness and efficiency of MRSN design. However, relational support and relation-level interaction are currently performed at clip level instead of video level and are not used for human detection refinement. Applying these techniques to video-level action tubes remains an open and challenging problem.
|
2310.05653 | Gaia uncovers difference in B and Be star binarity at small scales:
evidence for mass transfer causing the Be phenomenon | Be stars make up almost 20% of the B star population, and are rapidly
rotating stars surrounded by a disc; however the origin of this rotation
remains unclear. Mass transfer within close binaries provides the leading
hypothesis, with previous detections of stripped companions to Be stars
supporting this. Here, we exploit the exquisite astrometric precision of Gaia
to carry out the largest to date comparative study into the binarity of matched
samples of nearby B and Be stars from the Bright Star Catalogue. By utilising
new "proper motion anomaly" values, derived from Gaia DR2 and DR3 astrometric
data alongside previous values calculated using Hipparcos and Gaia data, and
the Gaia provided RUWE, we demonstrate that we can identify unresolved binaries
down to separations of 0.02". Using these measures, we find that the binary
fractions of B and Be stars are similar between 0.04 - 10", but the Be binary
fraction is significantly lower than that of the B stars for separations below
0.04". As the separation range of these "missing" binaries is too large for
mass transfer, and stripped companions are not retrieved by these measures, we
suggest the companions migrate inwards via binary hardening within a triple
system. This confirms statistically for the first time the hypothesis that
binary interaction causes the Be phenomenon, with migration causing the dearth
of Be binaries between 0.02 - 0.04". Furthermore, we suggest that triplicity
plays a vital role in this migration, and thus in the formation of Be stars as
a whole. | Jonathan M. Dodd, René D. Oudmaijer, Isaac C. Radley, Miguel Vioque, Abigail J. Frost | 2023-10-09T12:07:04Z | http://arxiv.org/abs/2310.05653v2 | Gaia uncovers difference in B and Be star binarity at small scales: evidence for mass transfer causing the Be phenomenon
###### Abstract
Be stars make up almost 20% of the B star population, and are rapidly rotating stars surrounded by a disc; however the origin of this rotation remains unclear. Mass transfer within close binaries provides the leading hypothesis, with previous detections of stripped companions to Be stars supporting this. Here, we exploit the exquisite astrometric precision of Gaia to carry out the largest to date comparative study into the binarity of matched samples of nearby B and Be stars from the Bright Star Catalogue. By utilising new "proper motion anomaly" values, derived from Gaia DR2 and DR3 astrometric data alongside previous values calculated using Hipparcos and Gaia data, and the Gaia provided RUWE, we demonstrate that we can identify unresolved binaries down to separations of 0.02\({}^{\prime\prime}\). Using these measures, we find that the binary fractions of B and Be stars are similar between 0.04 \(-\) 10\({}^{\prime\prime}\), but the Be binary fraction is significantly lower than that of the B stars for separations below 0.04\({}^{\prime\prime}\). As the separation range of these "missing" binaries is too large for mass transfer, and stripped companions are not retrieved by these measures, we suggest the companions migrate inwards via binary hardening within a triple system. This confirms statistically for the first time the hypothesis that binary interaction causes the Be phenomenon, with migration causing the dearth of Be binaries between 0.02 - 0.04\({}^{\prime\prime}\). Furthermore, we suggest that triplicity plays a vital role in this migration, and thus in the formation of Be stars as a whole.
keywords: Astrometry and celestial mechanics: proper motions - stars: emission-line, Be - binaries: close
## 1 Introduction
Be stars make up around 20% of the total population of B-type stars (Bodensteiner et al., 2020) and are defined, in part, by the presence of Balmer emission lines. These lines are thought to originate from a slowly outflowing ionised circumstellar gas-disc (Porter and Rivinius, 2003; Rivinius et al., 2013; Jones et al., 2022). While the dynamics of these discs have been studied extensively (Wisniewski et al., 2010; Draper et al., 2014; Suffak et al., 2022), the mechanism by which they form is undetermined. Consensus suggests that the formation is related to the rapid rotation observed in practically the entire Be population (Zorec et al., 2016, 2017), which is suggested to be at or near the critical velocities of such stars (Townsend et al., 2004; Granada et al., 2013). Such rapid rotation could thus allow for processes such as turbulence (Townsend et al., 2004) or non-radial pulsations (Owocki and Cranmer, 2002; Semaan et al., 2018) to contribute significantly to the formation of the decretion discs around Be stars due to decreased effective gravity at the surface of such stars (Granada et al., 2013). This, however, only leads to a further question - what exactly causes Be stars to rapidly rotate?
Three mechanisms have been suggested as the origin of this characteristic rotation; it could be a relic from the parent molecular cloud, the rotation thus a result of inherited angular momentum (Bodenheimer, 1995); the envelope of the star could be spun up via momentum transport from a contracting core (Granada et al., 2013; Hastings et al., 2020); or, crucially for this study, it could be a result of mass and angular momentum transfer via binary interaction (Shao and Li, 2014; Bodensteiner et al., 2020).
If binary interaction is the source of this rapid rotation, then one could expect that the binary statistics of B and Be stars will differ. This line of reasoning provided the basis for the study performed in Oudmaijer and Parr (2010), which found the binary fraction of B and Be-type stars to be similar between separations of 0.1 and 8\({}^{\prime\prime}\) based on a sample of 36 B and 37 Be stars, and concluded that binarity, at least at these scales, could not be responsible for all Be stars. However, while a number of studies into the binarity of Be stars exist (Boubert and Evans, 2018; Klement et al., 2019; Hastings et al., 2021; El-Badry et al., 2022, among others) and suggest the stripping of the companion to the Be star as the cause (Han et al., 2002; Naze et al., 2022; El-Badry and Quataert, 2021; El-Badry et al., 2022), very few comparative studies into the binarity of both B and Be-type stars exist. The next most recent points of comparison are Abt and Levy (1978) and Abt & Cardona (1984), each from around 40 years ago, and which similarly conclude that binarity cannot be responsible for the Be phenomenon at the scales they probe. However, Abt and Levy (1978), which utilises radial velocity variations to detect binarity,
suffers from the effects of small sample sizes, while Abt & Cardona (1984) is hampered by inhomogeneous biases as it is, in effect, a review of the then available literature.
With the advent of high-precision astrometric all-sky surveys, we are now able to mitigate these issues and investigate the multiplicity of large samples. Recent studies have employed astrometric data from surveys such as Hipparcos (Van Leeuwen, 2007) and Gaia (Gaia Collaboration et al., 2021) in order to detect binarity in the overall stellar population, both in the case of wide (El-Badry et al., 2021) and close (Kervella et al., 2019, 2022) binaries. The time is thus ripe for such an investigation into the binarity of B and Be stars.
Here we utilise the Proper Motion Anomaly (PMa), as introduced in Kervella et al. (2019) (see also Brandt, 2018), which is a measure of the difference between long and short term proper motions of a point source, in order to determine the fractions of B and Be type stars in visually unresolved binary systems. Alongside previously determined PMa values from Kervella et al. (2022), calculated using Hipparcos and Gaia DR3 data (a similar catalogue can also be found in Brandt, 2021), we use new values, calculated for the first time in this work using a combination of Gaia DR2 and DR3 data (Gaia Collaboration et al., 2018, 2022). Using these metrics we probe binary systems with separations between 0.02 and 1.1'', and with magnitude differences up to 4 mag.
Additionally we utilise the Gaia-provided Renormalised Unit Weight Error (RUWE) to further constrain the binary fractions of these two populations (Stassun & Torres, 2021). Thus we are able to bridge the gap in binary separations between, and expand upon previous radial velocity (Abt & Levy, 1978) and adaptive optics (Oudmaijer & Parr, 2010) studies of these stars. Additionally, by querying a sample over an order of magnitude larger than present in the most recent similar study (Oudmaijer & Parr, 2010), this paper provides the largest scale comparative study of B and Be star binarity to date.
This paper is structured as follows: in Section 2 we detail the two binary detection methods used (the PMA and the RUWE), and demonstrate their sensitivities and limitations; in Section 3 we report the results of utilising these methods on samples of B and Be stars; and in Section 4 we discuss the implications of these newly reported binary fractions on models for the formation of Be stars.
## 2 Methods
### Binary Detections via Proper Motion Anomaly
For any binary system we can define two similar variables, the centre of mass, which the two component stars orbit, and the photocentre, which describes the location of the apparent point source in an unresolved system. In general, the positions of these two points differ, thus the photocentre also orbits the centre of mass - leading to a deviation from the linear proper motion expected for single sources in the case of unresolved binary systems.
This deviation from the expected linear proper motion, referred to as the Proper Motion Anomaly (PMa), \(\Delta\mu\), can be detected through a comparison of long and short term proper motion measurements (\(\mu_{It}\) and \(\mu_{st}\) respectively), with the long term effectively tracing the motion of the centre of mass and the short term tracing the sum of the centre of mass and relative photocentre motions. An illustration of this principle can be found in Figure 1. The PMa can thus be expressed as:
\[\Delta\mu=\mu_{st}-\mu_{It} \tag{1}\]
Thanks to current advances in the precision of proper motion measurements, alongside high precision positional measurements allowing for long term proper motions to be calculated, the PMa thus provides a powerful tool in the search for unresolved binary candidates within large samples of stars.
This measure is sensitive to the period of a given binary system. Periods significantly longer than the time elapsed over the long term proper motion measurement, \(\Delta t\), will have a deviation in proper motion too small to be detected. Similarly, systems wherein a whole number of complete orbits have occurred during the measurement of the long term proper motion will not be detected, again as the deviation in long and short term proper motions will be too small to detect.
Following this logic, peak sensitivity of the PMa should be achieved for systems where \(\frac{2n+1}{2}\) orbital periods have elapsed between the two observations, as the absolute change in orbital phase is at a maximum here. However, in reality, due to the time-window smearing effects present over the observation periods of the two constituent astrometric surveys (Kervella et al., 2022), there is a general decrease in sensitivity towards smaller periods, with peak sensitivity being achieved for periods of \(P\simeq\Delta t\) (Kervella et al., 2022). This also explains why, although of much higher astrometric precision, the DR2-DR3 derived PMa's probe larger periods than the Hipparcos-DR3 PMa values (as will be shown in the following sub-subsection). Below we will probe the sensitivity limits of the various methods both theoretically and empirically.
#### 2.1.1 Separation Limits of the Proper Motion Anomaly
Whereas the sensitivity of the PMa to mass as a function of orbital period (and thus linear separation) has previously been investigated (Kervella et al., 2022), the sensitivity to angular separation remains unexamined. As detailed in Appendix A, using an idealised theoretical model, consisting of a face-on, circularly orbiting binary system, we arrive at a minimum detectable separation of 0.01'' for PMa values calculated using Hipparcos and Gaia DR3 data. Additionally,
Figure 1: Cartoon illustrating the principle behind the Proper Motion Anomaly. The large and small stars are the primary and secondary companion respectively, the black dot is the centre of mass and the yellow dot the photocentre. Two sets of observations are taken at \(A\) and \(B\), with a time interval of \(\Delta t\) between them. \(\mu_{A}\) and \(\mu_{B}\) are the short term proper motions measured during their respective observations, and \(\mu_{AR}\) is the long term proper motion, determined from the change of observed position over \(\Delta t\). Arrows labelled \(\mu_{PC}\) are the instantaneous proper motions of the photocentre in the frame which the centre of mass is stationary.
when considering the PMa calculated using Gaia DR2 and DR3 data, the theoretical minimum detectable separation is \(0.0025^{\prime\prime}\). However, this computation does not take into account the effect of the time-window smearing during the PM measurement - which leads to the DR2-DR3 PMa losing out on the shorter separations.
It is thus useful to obtain an empirical value as a point of comparison. In order to evaluate this sensitivity of the PMa to varying separations, we use a sample of binary systems listed in the Washington Double Star (WDS, Mason et al., 2022) catalogue. The WDS contains a compilation of multiple star systems for which at least one measured separation is available within the literature. The sources in the WDS were cross-matched with the list provided in Kervella et al. (2022) using the xMatch service provided by CDS, Strasbourg (Pineau et al., 2020). No restrictions on mass or spectral type were placed on the stars within the binaries in order to generate the largest, general sample. Reported higher order multiple systems were then removed for the sake of simplicity, as were any systems with separations greater than \(10^{\prime\prime}\). While the PMa primarily probes the unresolved regime of separations by nature, this \(10^{\prime\prime}\) limit was selected in order to ensure that no unpredictable effects were materialising at higher separations.
For the resulting 11 116 unique binary systems, we obtained values of the PMa from the Kervella et al. (2022) catalogue (which uses data reported in the Hipparcos re-reduction (Van Leeuwen, 2007) and the Gaia DR3 catalogues, giving \(\Delta t\approx 25\) yrs) and calculated new values, using data available in the Gaia Data Release 2 and Data Release 3 (Gaia Collaboration et al., 2018, 2021; \(\Delta t\approx 0.5\) yrs). The methodology of Kervella et al. (2022) was used to calculate these new values; as a check of our implementation, we recalculated the Hipparcos-Gaia DR3 PMa's and reproduced the reported PMa values exactly, and the reported errors to within a precision of 1%.
The final sample contains 10 283 sources with reported PMa values; note that not all values surpass the binary detection threshold of S/N\(>\)3. The majority of these systems had separations reported to a precision of \(0.1^{\prime\prime}\). 316 systems have separations reported as \(0^{\prime\prime}\) in the WDS Catalogue (i.e. their separations were below \(0.1^{\prime\prime}\)). For each of these systems, higher precision separations were retrieved from supplemental WDS material (Mason et al., 2022). Where orbital solutions had previously been reported, the separation available at the Gaia DR3 epoch was used.
Figure 2 shows the median values of the signal-to-noise ratio for the Hipparcos-Gaia DR3 PMa for the WDS sample at varying separations. These systems have been placed into bins of width \(0.1^{\prime\prime}\) at separations greater than or equal to \(0.1^{\prime\prime}\). At separations below \(0.1^{\prime\prime}\) the systems have been placed into 10 bins such that each bin has an equal number of systems within it (approximately 30 per bin). For these bins the mean reported separation of the systems within it is plotted.
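The binning scheme just described can be reproduced with a few lines of array manipulation; a minimal sketch is given below, where `wds` is assumed to be a table of the cross-matched sample with placeholder columns `sep` (arcsec) and `pma_snr`.

```python
import numpy as np
import pandas as pd

def median_snr_vs_separation(wds, n_inner_bins=10, outer_width=0.1, sep_max=10.0):
    """Median PMa S/N per separation bin, mirroring the scheme of Figure 2."""
    close = wds[wds['sep'] < 0.1].sort_values('sep')
    wide = wds[(wds['sep'] >= 0.1) & (wds['sep'] <= sep_max)]

    # Below 0.1": equal-count bins, plotted at the mean separation of each bin.
    inner = [(chunk['sep'].mean(), chunk['pma_snr'].median())
             for chunk in np.array_split(close, n_inner_bins) if len(chunk)]

    # At and above 0.1": fixed-width bins of 0.1".
    edges = np.arange(0.1, sep_max + outer_width / 2, outer_width)
    centres = 0.5 * (edges[:-1] + edges[1:])
    binned = wide.groupby(pd.cut(wide['sep'], edges, labels=centres),
                          observed=True)['pma_snr'].median()
    outer = [(float(sep), snr) for sep, snr in binned.items()]

    return pd.DataFrame(inner + outer, columns=['sep', 'median_snr'])
```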
Based on the data presented in Figure 2, it can be seen that for the DR3-Hipparcos PMa, there is an increase of SNR towards lower reported separations. On average, this PMa remains above the detection threshold of 3 for all separations between 0.02 and \(0.7^{\prime\prime}\) (based on the median values). As a result, \(0.02-0.7^{\prime\prime}\) appears to be an empirical range of separations detectable via this PMa; at least half of all WDS binary systems with separations within this range are detected (note that we would not expect all binaries within this range to be detected, due to projection effects). This is in agreement with the calculated theoretical minimum detectable separation and the expectation that the upper limit occurs somewhere around the spatial resolution limit of the surveys.
It is also to be noted that high PMa systems at separations greater than the resolvable limit (i.e. with separations greater than \(\sim 1^{\prime\prime}\)) exist in non-negligible numbers. These systems have reported separations much greater than the PMa can detect (as it measures the motion of an essentially unresolved binary), and it is thus likely that they are, in fact, as of yet undetected higher order multiple systems.
Alongside the examination of the Hipparcos-Gaia DR3 PMa values reported by Kervella et al. (2022), an equivalent consideration of Gaia DR2-DR3 PMa values has been conducted. As no equivalent catalogue to that of Kervella et al. (2022) exists for this pair of epochs, these PMa's were calculated via the same process, but using data from Gaia Data Releases 2 and 3.
The median values of these PMa S/N's at a given separation can also be found in Figure 2. Again, a rise in the median PMa can be seen towards lower separations, here beginning at separations of approximately \(1.1^{\prime\prime}\), increasing towards a peak at around \(0.6^{\prime\prime}\), and remaining above the detection threshold down to \(0.2^{\prime\prime}\). While this lower limit is in disagreement with the theoretical limit of \(0.0025^{\prime\prime}\) derived in Appendix A, the theoretical considerations largely ignore the effect of smearing over the observing timeframe. As the DR2-DR3 PMa is smeared across the entirety of the long-term proper motion timeframe, the lowest separations are not probed, although the number of potentially observable objects increases significantly (as Gaia has observed orders of magnitude more objects than Hipparcos). Additionally, the increase in precision achieved since Hipparcos has both expanded the upper limit of detectable separations beyond the Gaia resolution limit and increased the total fraction of systems detected within the sensitive range.
These results remain consistent when examining B and Be type binaries within the WDS (Mason et al., 2022), with systems within
Figure 2: Median values of the Hipparcos - Gaia DR3 and Gaia DR2 - DR3 PMa S/N’s and the RUWE, of systems reported as binary in the WDS (Mason et al., 2022). Here values greater than 3 constitute a detection of binarity by either PMa, and values greater than 1.4 constitute a detection by the RUWE - see Section 2.2 for details.
the two sensitive ranges being detected at significantly higher rates than those outside those ranges.
We thus show that the Hipparcos-Gaia DR3 PMa is sensitive to binary systems with angular separations between \(0.02-0.7^{\prime\prime}\), and the Gaia DR2 - DR3 PMa is sensitive between separations of \(0.2-1.1^{\prime\prime}\).
#### 2.1.2 Magnitude Limits of the PMa
In order to evaluate the limitations of the Proper Motion Anomaly, we must also consider the sensitivity of this measure to the difference in magnitude between the two components. Here systems contained within the WDS catalogue are once again considered, now limiting the sample to those systems with reported separations between 0 and \(1^{\prime\prime}\) in line with the more favourably sensitive range determined in the previous section.
In the case of the Hipparcos-Gaia DR3 PMa, and as can be seen in Figure 3, examining the difference in the WDS reported apparent magnitude reveals that the PMa shows some preferential sensitivity within the 1.0 to 3.5mag range, peaking at around \(\Delta V_{mag}=3\)mag.
In the case of the Gaia DR2-DR3 PMa, an equivalent examination reveals a general increase in sensitivity across most magnitude differences available via the WDS sample, with the preferentially detected region expanding to systems with a difference in magnitude \(0<\Delta V_{mag}\leq 4.5\), peaking at around \(\Delta V_{mag}=2.5\), although significant numbers of detections are still made at greater differences in magnitude.
### Binary Detections via the RUWE
The Renormalised Unit Weight Error, or the RUWE, is a reduced chi-squared parameter that describes the 'goodness-of-fit' of a single body astrometric model to an observed source. As such, it would follow that such a parameter is sensitive to the presence of unresolved binary companions - with the photocentre wobble caused by such a companion causing deviations from what is expected of a single body, resulting in a worse-fitting astrometric solution. Indeed, the RUWE parameter has been suggested to act as a binarity indicator (Belokurov et al., 2020), with values significantly above the normalised value of 1 (i.e. \(\geq\) 1.4 as per Stassun & Torres, 2021) being regarded as indicative of the source being 'badly fit' by the astrometric solution and thus potentially binary.
Like the PMa, the RUWE varies in sensitivity according to the period of a binary system, with its sensitivity to binarity determined both by the period of the system in question and by the time baseline over which the RUWE is measured (approximately six months in the case of Gaia DR2 and DR3); Penoyre et al. (2022) suggest that for Gaia DR3 the RUWE is an effective measure of binarity for periods of around a month to a decade.
However, due to sensitivity to other (non-multiplicity related) factors which can inflate the RUWE (Belokurov et al., 2020), it is important to note that there may be some fraction of false positive detections - for example if the sample contains stars known to have circumstellar discs or extended regions of emission (Oudmaijer et al., 2022). This is unlikely to hamper our study of B and Be stars however, as the discs of Be stars are very small when compared to the resolution achieved by Gaia DR3 (Quirrenbach et al., 1997; Wheelwright et al., 2012).
#### 2.2.1 Separation Limits of the RUWE
The separation limits of the RUWE were determined using the same process as for the PMa above. Here, the detection threshold is given to be 1.4 (as suggested by Stassun & Torres, 2021). Based on the data presented in Figure 2, we find that the RUWE exhibits a sharp and marked increase above the detection threshold at binary separations below the resolution limit of Gaia (0.7\({}^{\prime\prime}\)). This increase peaks at approximately 0.09\({}^{\prime\prime}\), and, on average, remains above the threshold down to separations of 0.04\({}^{\prime\prime}\). Thus, we find that the RUWE is sensitive to those binary systems with separations ranging between 0.04\({}^{\prime\prime}\) and 0.7\({}^{\prime\prime}\).
#### 2.2.2 Magnitude Limits of the RUWE
As with the PMa, it is useful to quantify the sensitivity of the RUWE to the magnitude differences of the binary systems probed. Figure 3 demonstrates, by repeating the process used for the PMa, that the RUWE is primarily sensitive to systems with magnitude differences between 1.5 and 3.0mag, peaking somewhere between 2.0 and 2.5mag.
We have thus shown that, via these astrometric methods, we can consistently detect visually unresolved binaries with separations as low as 0.04\({}^{\prime\prime}\) using the RUWE, and 0.02\({}^{\prime\prime}\) using the Hipparcos-Gaia PMa and with magnitude differences up to 4.5mag, depending on the measure used. Thus, thanks to the precision achieved by modern astrometric surveys, we will be able to probe the origins of Be stars with a statistical basis heretofore unavailable.
Figure 3: Median PMa of WDS systems, as a function of the difference in magnitude of the two stars, for the Hipparcos-Gaia DR3 and Gaia DR2-DR3 PMa's and the RUWE. Both plots have been cropped to a maximum difference in magnitude of 8mag, as the number of systems with magnitude differences greater than this decreases to the point where meaningful conclusions cannot be drawn. Detections are reported out to differences of up to 12mag in all three cases.
## 3 The Binarity of B and Be stars
### Sample Selection
Here we utilise the same initial sample of B and Be stars extracted from the Bright Star Catalogue (BSC) 5th Revised Edition (Hoffleit & Warren, 1995; Hoffleit & Saladyga, 1997) as used in Radley et al. (in preparation). This sample was then cross-matched within \(5\arcsec\) against the Gaia Data Release 3 (Gaia Collaboration et al., 2021) data. Stars without proper motion measurements in the Hipparcos or Gaia Data Releases were removed. Stars with non-V luminosity classes were also removed in order to ensure that the comparisons made between our two samples are between like sources that are subject to the same selection effects. A summary of the spectral types of the stars included in this sample can be found in Table 1.
This sample provides an almost thirty-fold increase in the number of B-type stars, and a four-fold increase in the number of Be-type stars compared to similar comparative studies (Abt & Levy, 1978; Oudmaijer & Parr, 2010) that probe a well defined separation range. Additionally, by virtue of being selected from the same catalogue of the brightest stars in the sky, these stars will be subject to the same selection effects, and have similar brightnesses and parallaxes, thus ensuring their usefulness in comparing how B and Be type stars differ in their astrometric properties. These stars are also amongst the most nearby ones, thus allowing us to detect the physically closest companions whilst remaining within the range of angular separations detectable by our methods (see Sections 2.1.1 and 2.2.1), with the lower end of our detectable range, \(0.02\arcsec\) equating to 5 au at the mean distance to the stars in these samples.
### Detection of Unresolved Binaries
We first consider the complete, unmodified samples of 907 B and 123 Be stars, utilising a combination of PMa values found within the catalogue contained within Kervella et al. (2022), calculated using Hipparcos and Gaia DR3 data, and new PMa values calculated using data reported in Gaia Data Releases 2 and 3, as well as the Gaia-provided RUWE. These measures allow us to detect unresolved binary systems via differences in long and short term proper motions (for the PMa; see Section 2.1) and via deviations from a single body astrometric fit (in the case of the RUWE; see Section 2.2).
Using the Hipparcos-Gaia DR3 PMa, which probes the range 0.02 \(-\) 0.7\(\arcsec\) (see Section 2.1.1), equating to around \(5-180\) au at the average distance of 260pc, we arrive at binary fractions of \(42\pm 2\%\) and \(28\pm 4\%\) for the B and Be samples respectively1. This proves to be a greater than \(3\sigma\) difference.
Footnote 1: Uncertainties here are calculated using the binomial method. Error bars are given at \(1\sigma\).
In contrast, the Gaia DR2 - DR3 PMa values reveal similar B and Be binary fractions in the \(0.2-1.1\arcsec\) range (around \(50\,-\,290\) au at 260pc). In this case, the fraction of B stars detected as being in a binary system is found to be \(27\pm 1\%\) and the equivalent for Be stars is found to be \(29\pm 4\%\).
Finally, the fractions of B and Be stars detected via applying a detection threshold of \(RUWE>1.4\), tracing binaries with separations between \(0.04-1.1\)" (\(10-290\) au), were found to be \(27\pm 1\%\) and \(20\pm 4\%\) respectively. See Table 2 further down for a summary of these binary fractions.
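The quoted uncertainties follow from simple binomial counting statistics; the short check below reproduces the rounding of the Hipparcos-Gaia DR3 fractions, with the detected counts reconstructed approximately from the quoted percentages (the exact counts are not listed in the text).

```python
import numpy as np

def binary_fraction(n_detected, n_total):
    """Binary fraction and its 1-sigma binomial uncertainty."""
    f = n_detected / n_total
    return f, np.sqrt(f * (1.0 - f) / n_total)

# Hipparcos-Gaia DR3 PMa detections: ~42% of the 907 B stars.
f_B, sig_B = binary_fraction(round(0.42 * 907), 907)
print(f"B-star fraction:  {100*f_B:.0f} +/- {100*sig_B:.0f} %")   # ~42 +/- 2 %

# ~28% of the 123 Be stars.
f_Be, sig_Be = binary_fraction(round(0.28 * 123), 123)
print(f"Be-star fraction: {100*f_Be:.0f} +/- {100*sig_Be:.0f} %")  # ~28 +/- 4 %
```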
### Correcting Biases
Although the global properties of the B and Be stars in the BSC are very similar, the two samples exhibit slightly different properties across a number of observables. For example, by virtue of being rarer, the Be stars are on average further away than the B stars, with the average B star in our sample sitting at a parallax of 5.26mas with a standard deviation of 3.38mas, whereas the average Be star sits at a parallax of 3.81mas with a standard deviation of 2.42mas. Likewise the average apparent magnitudes of the B and Be populations are 6.06mag (standard deviation of 1.07mag) and 5.95mag (standard deviation of 1.32mag) respectively. In addition to this, the two populations differ in the distribution of their spectral types (for example, almost 30% of the B stars are B9V, while only 7% of the Be stars are B9Ve). As such, the variations in the distributions of these properties must be taken into account to ensure that the binary fractions are not affected by them.
Thus, in order to remove the effects of these biases, a variety of sub-samples of B stars were created. These included random samples, alongside "semi-random" samples wherein the B population was binned according to a given variable, mentioned above, and stars were randomly picked from each bin in order to match the distribution of said variable in the Be sample. In each case a Kolmogorov-Smirnov test was performed in order to confirm that the sub-sample was likely drawn from the same distribution as the Be population. The binary fraction of each sub-sample was then determined using the PMa and RUWE detection thresholds detailed above.
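One way of building such a distribution-matched sub-sample is sketched below; the helper name, the simple equal-width binning and the use of a two-sample Kolmogorov-Smirnov test are illustrative choices rather than the exact procedure used here.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def matched_subsample(b_par, be_par, n_bins=10):
    """Draw a sub-sample of B stars whose distribution in one variable
    (e.g. parallax) matches that of the Be sample.

    b_par, be_par: numpy arrays of the chosen variable for the two samples;
    returns indices into b_par and the KS p-value of the matched sub-sample."""
    edges = np.histogram_bin_edges(np.concatenate([b_par, be_par]), bins=n_bins)
    b_bin = np.clip(np.digitize(b_par, edges) - 1, 0, n_bins - 1)
    be_bin = np.clip(np.digitize(be_par, edges) - 1, 0, n_bins - 1)
    picked = []
    for k in range(n_bins):
        pool = np.flatnonzero(b_bin == k)          # B stars available in this bin
        n_needed = np.count_nonzero(be_bin == k)   # Be stars in this bin
        if n_needed and len(pool):
            picked.append(rng.choice(pool, size=min(n_needed, len(pool)),
                                     replace=False))
    picked = np.concatenate(picked)
    # Check that the matched sub-sample is consistent with the Be distribution.
    _, p_value = ks_2samp(b_par[picked], be_par)
    return picked, p_value
```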
A summary of these samples and their binary fractions can be found in Figure 4. Therein, the greater than \(3\sigma\) dearth of Be binaries, when compared to the number of B binaries, is still seen to be present at the tightest separations probed by our methods.
### Wider Separation Binaries
While the focus of this study is on close binaries, we now consider the presence of binary companions outside the \(0.02-0.7\arcsec\) range. Oudmaijer & Parr (2010), who probe separations of \(0.1-8\arcsec\), determined binary fractions of B and Be stars of \(29\pm 8\%\) and \(30\pm 8\%\) respectively. This finding could imply that the difference in the two binary fractions disappears towards larger separations.
Exploiting the added information provided by Gaia, we performed a basic search for spatially resolved B and Be binaries by cross-matching the sample depicted in Fig. 4, and looking for sources within a radius of \(10\arcsec\). Any 'duplicate' sources that were retrieved
| **Spectral Type** | **B** | **Be** |
| --- | --- | --- |
| B0V | 12 | 4 |
| B1V | 47 | 10 |
| B2V | 104 | 31 |
| B3V | 97 | 19 |
| B4V | 31 | 12 |
| B5V | 111 | 13 |
| B6V | 43 | 8 |
| B7V | 48 | 4 |
| B8V | 156 | 13 |
| B9V | 258 | 9 |
| Total | 907 | 123 |

Table 1: Populations of main sequence B and Be type stars within the sample, divided by spectral type.
were then deemed to be probable companions to the primary if they satisfied:
\[|\omega_{primary}-\omega_{duplicate}|\leq\sigma_{primary,\omega}+\sigma_{duplicate,\omega} \tag{2}\]
where \(\omega\) is the parallax of either star, and \(\sigma_{\omega}\) is the uncertainty corresponding to either parallax.
Based on this basic test, \(6\pm 1\%\) of B and \(10\pm 3\%\) of Be V-type stars are found to have potential resolved companions within a distance of \(10\arcsec\). These values remained consistent even when randomly sampling or when generating semi-random samples of B stars intended to match the spectral type, parallax or apparent magnitude distributions of the Be stars. A summary of these results can also be found in Figure 4. In the case of each of the distribution matching samples, a Kolmogorov-Smirnov test was performed in order to verify that the distribution of each given parameter in the B star sample is likely being drawn from the same parent distribution as the Be star sample.
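The parallax-consistency criterion of Eq. (2) is straightforward to apply source by source; a minimal sketch, with illustrative numbers rather than catalogue values, is given below.

```python
import numpy as np

def parallax_consistent(plx_primary, sig_primary, plx_dup, sig_dup):
    """Companion criterion of Eq. (2): the parallaxes of the primary and of the
    source found within 10 arcsec must agree within their combined uncertainties."""
    return np.abs(plx_primary - plx_dup) <= (sig_primary + sig_dup)

# Illustrative values (mas), not catalogue entries:
print(parallax_consistent(5.26, 0.05, 5.30, 0.04))   # True  -> probable companion
print(parallax_consistent(5.26, 0.05, 4.80, 0.04))   # False -> chance alignment
```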
Presuming that each of these additional sources is indeed a binary companion poses an interesting question: why do Be stars have binary companions at a lower rate than their B type equivalents at close separations (i.e. \(<0.7\arcsec\)), but at similar rates at greater separations?
### Further Constraints on Binary Separation
In this study we have determined, through the use of various astrometric measures, that between separations of \(0.02-0.7\arcsec\) the binary fraction of Be stars is significantly lower than that of their B star equivalents, while the fractions are similar at larger separations. Let us continue by further identifying at which separations the B binaries become more prevalent.
Table 2 reveals that by comparing the two PMa values and looking at those systems detected by the Hipparcos-Gaia DR3 PMa but not by the Gaia DR2-DR3 PMa, we can constrain the lack of Be binaries to between 0.02 and \(0.2\arcsec\). We also note that at separations greater than \(0.2\arcsec\), B and Be stars have similar fractions of their populations in binary systems - in agreement with the most recent previous comparative
Figure 4: Summary of the binary fractions of B and Be stars, as determined by Hipparcos-Gaia DR3, Gaia DR2 - DR3 PMa values, Gaia DR3 RUWE values and resolved binaries (see text). Also presented are the binary fractions of the various sub-samples of the B type stars, taken in order to account for any biasing effects. ‘Matched’ here indicates that a sub-sample of the B stars has been created such that the distribution of a given variable in the sub-sample is drawn from the same parent distribution as the Be population. Note that the ‘Wide Binaries’ plot is noisier due to smaller number statistics. The Hipparcos - Gaia DR3 values, probing the smallest separations, show a significantly lower Be binary frequency (in each case a greater than \(3\sigma\) difference), while all other approaches return similar binary fractions to within the errors. Based upon this we report a dearth of Be binaries with separations between 0.02 and \(0.7\arcsec\), and, as discussed in Section 3.5, we further constrain these missing binaries to between 0.02 and \(0.2\arcsec\).
study of these stars (Oudmaijer & Parr, 2010) 2. Additionally, we note that the RUWE detects statistically similar rates of B and Be stars as being in binary systems. Thus, by virtue of the RUWE's sensitivity stretching between 0.04 and 0.7'', we are able to confine the dearth of Be binaries even further - restricting them to between 0.02 and 0.04''.
Footnote 2: Although we do not wish to make quantitative comparisons between values reported by our different detection methods, a larger binary fraction detected by the Hipparcos-Gaia DR3 PMa (\(42\pm 2\%\)) than the Gaia DR2-DR3 PMa (\(27\pm 1\%\)) would be expected in the case of a log-flat distribution of separations (as per Öpik's law; Öpik, 1924). This is because the separation range probed by the Hipparcos-Gaia DR3 PMa (0.02 to 0.7′′) is almost twice the width in log-space as that of the Gaia DR2-DR3 PMa (0.2 to 1.1′′). Conversely, by themselves the values reported for the Be stars may indicate a distribution decreasing towards the closer separations that we are sensitive to.
## 4 Discussion
In previous sections we have established, for the first time, ranges of angular separation and magnitude difference within which the PMa and RUWE are sensitive to binarity.
We have then used these measures to reveal a greater than \(3\sigma\) dearth of Be stars in binary systems, compared to B stars, at separations unresolved by Gaia. We then go on to further constrain this absence of Be binaries to separations between 0.02 and 0.04''.
These results pose a conundrum, as one might naively assume that this implies that the Be phenomenon more favourably occurs via single star pathways, rather than the currently favoured binary pathways. Therefore, in order to continue to pursue the binary pathway for Be formation, we must consider: what sort of companion will remain undetected by these methods, whilst also remaining consistent with such models of Be formation?
In this section we will posit some explanations for this difference in binary fraction, alongside the implication for models regarding the formation of the Be phenomenon.
### Stripped Companions
One such solution invokes the presence of a companion whose envelope has been stripped by the putative Be star. Accretion from such a companion would provide a source for the initial spin-up mechanism that eventually results in the formation of the Be stars' characteristic disc (El-Badry et al., 2022; Jones et al., 2022), while also allowing the companion to become low-mass, dim or close enough to the host star (Frost et al., 2022) to be unlikely to be detected via the PMa.
Such a model has been suggested to be the route through which \(20-100\%\) of field Be stars form (El-Badry & Quataert, 2021), and a significant number of these stripped companions to Be stars have been reported in the literature (El-Badry et al., 2022; Bodensteiner et al., 2020; Frost et al., 2022; see Table 3). Additionally, no close main sequence companions to Be stars have been confirmed (Bodensteiner et al., 2020), which is consistent with what would be expected in the case of a stripped companion origin for Be stars (although it should be noted that this study only considers massive Be stars - i.e. spectral types earlier than B1.5). As such, they may be responsible for hydrogen-poor supernovae and neutron star mergers resulting in gravitational wave sources.
The periods proposed by Han et al. (2002) to be required in order for stripping to begin are of order \(0.1-10\) days for Common Envelope Ejection and \(400-1500\) days for Roche Lobe Overflow. Using the average distance to the Be stars in our sample (\(\sim 260\)pc), and assuming a circular, Keplerian orbit, these stars would require a binary companion at a separation of 0.02'' (approximately 5 au) or closer in order for stripping to initiate. While the process of mass transfer would then act to widen the orbit of this companion, the resultant stripped star would be low-mass and faint. We would thus expect that the majority of stripped companions, if they indeed exist, would be missed by both our PMa values and the RUWE, as they will be too close to the host Be star, or too low mass and dim to be detected.
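The separation estimate above follows directly from Kepler's third law; a quick numerical check, assuming a circular orbit and an illustrative total system mass of 10 \(M_{\odot}\), is sketched below.

```python
import numpy as np

def angular_sep_arcsec(period_days, m_total_msun, distance_pc):
    """Angular separation of a circular Keplerian orbit of the given period,
    seen from the given distance (Kepler's third law in au/yr/Msun units)."""
    a_au = (m_total_msun * (period_days / 365.25) ** 2) ** (1.0 / 3.0)
    return a_au / distance_pc, a_au

# Upper end of the Roche-lobe-overflow periods of Han et al. (2002), for an
# assumed total mass of 10 Msun at the mean sample distance of ~260 pc.
sep, a = angular_sep_arcsec(1500.0, 10.0, 260.0)
print(f"a = {a:.1f} au  ->  {sep*1000:.0f} mas")   # ~5.5 au -> ~0.02"
```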
In support of this, PMa and RUWE values were obtained for 16 Be stars confirmed via X-ray observations to host a stripped companion (all of those from Nazé et al., 2022). In order for the above suggestion to have any validity, these stars, which are known to host stripped companions, should not be detected as binaries using the measures detailed within this paper. A summary of these objects and their Gaia data can be found in Table 3.
Seven of these 16 systems are detected by either method; 5 by the Hipparcos-Gaia DR3 PMa, and 5 by the RUWE, with 3 objects detected by both. However, 5 of these 7 detected systems (the exceptions being HD 194335 and HD 200310) have known additional companions (Mason et al., 2022) at separations within the range the PMa's and RUWE are sensitive to. In addition, these companions are at separations significantly larger than required for the periods reported by Nazé et al. (2022), and are therefore inconsistent with being the reported stripped companion. An undetected third companion has also been suggested in the case of HD 200310 to explain the eccentric orbit of the stripped companion (Klement et al., 2022), making the system a potential triple. Thus, the true fraction of detected stripped companions would seem to be either 6 or 12% depending on how one wishes to treat the case of HD 200310. Therefore, we reach the conclusion that the stripped companions listed in Nazé et al. (2022) are by and large not detected by the PMa or RUWE, whose detections
| **Separation Range / ′′** | **B** | **Be** | **Detection Method** |
| --- | --- | --- | --- |
| \(0.02\leq x\leq 0.2\) | \(28\pm 1\%\) | \(17\pm 3\%\) | Detected by the Hipparcos-Gaia DR3 PMa, but not the Gaia DR2-DR3 PMa |
| \(0.02\leq x\leq 0.7\) | \(42\pm 2\%\) | \(28\pm 4\%\) | Detected by the Hipparcos-Gaia DR3 PMa |
| \(0.2\leq x\leq 1.1\) | \(27\pm 1\%\) | \(29\pm 4\%\) | Detected by the Gaia DR2-DR3 PMa |
| \(0.04\leq x\leq 0.7\) | \(27\pm 1\%\) | \(20\pm 4\%\) | Detected by the RUWE |
| \(0.1\leq x\leq 8\) | \(29\pm 8\%\) | \(30\pm 8\%\) | Detected by Oudmaijer & Parr (2010) |
| \(0.7\leq x\leq 10\) | \(6\pm 1\%\) | \(10\pm 3\%\) | Resolved as binary by Gaia |

Table 2: Percentage of B and Be stars detected as binary at varying ranges of separations. By utilising combinations of the previously explored astrometric detection methods we are able to constrain the greater than \(3\sigma\) dearth of Be binaries to between 0.02 and 0.2′′.
turn out to trace a third body in these systems instead. Thus, the potential for the Be phenomenon being caused by the presence of stripped companions remains consistent with our results.
### Triples
Intriguingly, the separation range of the "missing" Be binaries (0.02 - 0.04''; with 0.02'' corresponding to 5au) is typically too large for mass transfer to have occurred (typically less than 4au is required per the periods reported in Han et al., 2002). The natural next question to ask is then, why would the Be stars lack binaries in this particular separation range, while the stripping of a close companion should occur at smaller separations?
A clue can be found in the fact that the systems listed in Table 3 show a high incidence rate of higher order multiplicity, with 56\(\pm\)18% of this sample of Be stars known to host a stripped companion having additional companions (Mason et al., 2022) at separations larger than those reported in Nazé et al. (2022). This number is also somewhat elevated compared with the expected incidence rate of higher order multiplicity posited by Moe & Di Stefano (2017), which is around 40%.
The presence of such a third companion can neatly explain the absence of detected Be binaries at larger separations, while also allowing for any companion stars to become close enough for mass transfer to occur; it is well known that higher order multiplicity can result in the hardening of an inner binary. Indeed, a third body increases the occurrences of migration and eventual binary interactions significantly (Toonen et al., 2022; Kummer et al., 2023).
Recent models (Preece et al., 2022) suggest that of the systems that eventually form stripped companions - half form via binary pathways and half form via triple pathways. This can take place by either mass transfer occurring within the inner binary - with the outer tertiary star potentially becoming unbound - or a merger of the inner binary and mass transfer occurring between the two remaining stars (Preece et al., 2022).
Our results therefore imply that close binary interactions are responsible for the formation of Be stars, which constitute 20% of the B star population, with a migratory effect causing the lack of Be binaries in the 0.02 to 0.04'' range. Moreover, we suggest that triplicity must play a vital role in triggering this migration, and thus in the formation of Be stars as a whole.
### Potential Limitations of the Methods
We will now address the various potential limitations of the methods as described in Sections 2 and 3.
#### 4.3.1 Sample Selection
First, we note that the Be phenomenon is known to be transient, with the disc known to disappear and reappear on short timescales, which can lead to the potential misclassification of Be stars as B stars (Rivinius et al., 2013). Thus, contamination of the B star sample with misclassified Be stars could affect the previously detected binary fractions for the B star sample. However, as the Be sample should have low contamination (as observation of the emission line features associated with the Be phenomenon is required for classification as a Be star, even if these features later disappear), the detected Be binary fractions should be representative of the Be population as a whole - and thus representative of any misclassified Be stars. Therefore, should there be a significant number of Be contaminants in the B star sample, they will act to bring the measured B binary fraction down - thus the true B binary fraction would differ even more significantly from the Be fraction.
We also note that, as this sample is selected from the BSC (Hoffleit & Jaschek, 1982; Hoffleit & Warren, 1995; Hoffleit & Saladyga, 1997), these stars are necessarily bright. Whilst Gaia astrometric fits become systematics-dominated at \(G<13\) (Lindegren, 2020), the resultant increase in uncertainties associated with this effect ensures that the PMa is still a viable metric, as a signal-to-noise ratio is used to quantify binary detection. Additionally, in the calculation of the new Gaia DR2 - DR3 PMa's, corrections to the respective proper motions of bright stars, as prescribed by Lindegren et al. (2018) and Cantat-Gaudin & Brandt (2021), have been applied. Steps have also been taken to ensure that the two samples are subject to the same selection effects, and these are detailed in Section 3.3; thus any systematic effects should be consistent between the two samples. To check this, all stars brighter than 5mag were removed from the B and Be samples, which resulted in zero change to the reported dearth of Be binaries within the 0.02 \(-\) 0.2'' range.
#### 4.3.2 Variability
While photometric variability has not been quantitatively considered here, we anticipate that it is unlikely to affect the PMa unless the variability occurs both asymmetrically and on a scale similar to the sensitive separation range; here we look at this in more detail.
First, consider a single star exhibiting symmetrical variation; in this case the position of the photocentre does not deviate from the centre of mass - thus leading to no proper motion anomaly. When we then introduce an unresolved companion, the position of the photocentre deviates from the position of the centre of mass; however, the magnitude of the deviation varies with the variability cycle. Thus, while such variability should not produce a false positive detection of binarity, it is possible for it to induce a false negative. The likelihood of this false negative thus depends on two factors: the magnitude of the variability, and the timescale on which it occurs (as observations taken at the same point in the variability cycle will obviously be unaffected, and variability with a period shorter than either individual observing period will be smeared, diminishing the effect on the PMa).
In the case of asymmetric variability - i.e. that caused by phenomena such as mass ejection, or variations in surface brightness in stars such as red super-giants (Chiavassa et al., 2022), or asymmetry
| **Name** | **Spectral Type** | **PMa** | **RUWE** |
| --- | --- | --- | --- |
| \(\varphi\) Per | B1.5Ve | 0.23 | 1.804 |

Table 3: PMa and RUWE values of the Be stars confirmed to host a stripped companion (Nazé et al., 2022); only the first row of the original table is recoverable here.
within a disc (Meilland et al., 2007; Stee et al., 2013) - deviation of the photocentre from the centre of mass will occur. However, such asymmetry would have to be on a similar scale to the PMa's sensitive range in order for such a deviation to be construed as a false positive detection of binarity.
In the specific case of this study, we anticipate little effect of the photometric variability of Be stars (Labadie-Bartz et al., 2017, 2022) on our PMa values. This is due to the observed periods of Be variability being significantly shorter than the observing periods of Gaia DR2, DR3 and Hipparcos (668, 1038 and 1227 days respectively), with the longest period variability found by Labadie-Bartz et al. (2017) being approximately 200 days. Similarly, variability caused by mass loss onto the Be disc is unlikely to significantly affect the PMa, as the discs of Be stars, with sizes of order milli-arcseconds, are very small when compared to the resolution achieved by Gaia DR3 (i.e. \(0.7^{\prime\prime}\); Wheelwright et al. (2012); Quirrenbach et al. (1997)). Additionally, by similar logic, asymmetry of the discs themselves (Meilland et al., 2007; Stee et al., 2013) is also expected to have little effect on the PMa measurements of these stars. Likewise, as the stars are necessarily smaller than their discs, they are expected to be too small to have asymmetric surface brightness that would affect the PMa.
## 5 Conclusions
In this paper we have performed the largest scale comparative study of B and Be-star binarity to date, utilising both the Hipparcos - Gaia DR3 PMa values found in Kervella et al. (2022) and new Gaia DR2 - DR3 PMa values calculated for the first time here, alongside the Gaia provided RUWE.
We evaluate the limits in which these methods are sensitive to binarity. Particularly we determine for the first time the range of angular separations they are capable of reliably detecting, through the use of WDS (Mason et al., 2022) data. We thus find that the Hipparcos - Gaia DR3 and Gaia DR2 - DR3 PMa's are sensitive to binaries with separations in the \(0.02-0.7^{\prime\prime}\) and \(0.2-1.1^{\prime\prime}\) ranges respectively. Likewise the RUWE is found to be sensitive to separations between 0.04 and \(0.7^{\prime\prime}\).
Through the use of these astrometric methods in combination with one another, we report a greater than \(3\sigma\) dearth of Be binaries compared to B stars at separations between 0.02 and 0.2\({}^{\prime\prime}\), with the fractions of B and Be stars found to be in binary systems at these separations being \(28\pm 1\) and \(17\pm 3\%\) respectively.
We posit that this apparent lack of Be binaries provides evidence for the presence of stripped companions to the Be stars, with the mass-transfer process by which they are created having been previously suggested as an origin for the Be phenomenon. Such stars would go undetected by the measures used within this paper, as they are too close, low-mass and dim to induce a significant enough deviation of the photocentre from the centre of mass. We also find that Be stars previously reported to have a stripped companion have additional companions (i.e. triples and higher order multiples) at a rate somewhat elevated from the expected rate for B stars as a whole. Therefore, we suggest that migration of binary companions to Be stars to separations at which mass-transfer can occur, triggered by a third companion, must play an important role in the formation of Be stars.
## Acknowledgements
The STARRY project has received funding from the European Union's Horizon 2020 research and innovation programme under MSCA ITN-EID grant agreement No 676036. This project has also received funding from the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sklodowska-Curie Grant Agreement No. 823734. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
## Data Availability
The catalogues of the proper motion anomalies and RUWE values of both the sample of B and Be stars, as well as the WDS binaries, are available from the corresponding author on reasonable request, and will also be made available via VizieR.
|
2305.01683 | Timescales of Chaos in the Inner Solar System: Lyapunov Spectrum and
Quasi-integrals of Motion | Numerical integrations of the Solar System reveal a remarkable stability of
the orbits of the inner planets over billions of years, in spite of their
chaotic variations characterized by a Lyapunov time of only 5 million years and
the lack of integrals of motion able to constrain their dynamics. To open a
window on such long-term behavior, we compute the entire Lyapunov spectrum of a
forced secular model of the inner planets. We uncover a hierarchy of
characteristic exponents that spans two orders of magnitude, manifesting a
slow-fast dynamics with a broad separation of timescales. A systematic analysis
of the Fourier harmonics of the Hamiltonian, based on computer algebra, reveals
three symmetries that characterize the strongest resonances responsible for the
orbital chaos. These symmetries are broken only by weak resonances, leading to
the existence of quasi-integrals of motion that are shown to relate to the
smallest Lyapunov exponents. A principal component analysis of the orbital
solutions independently confirms that the quasi-integrals are among the slowest
degrees of freedom of the dynamics. Strong evidence emerges that they
effectively constrain the chaotic diffusion of the orbits, playing a crucial
role in the statistical stability over the Solar System lifetime. | Federico Mogavero, Nam H. Hoang, Jacques Laskar | 2023-05-02T18:00:04Z | http://arxiv.org/abs/2305.01683v1 | # Timescales of Chaos in the Inner Solar System: Lyapunov Spectrum and Quasi-integrals of Motion
###### Abstract
Numerical integrations of the Solar System reveal a remarkable stability of the orbits of the inner planets over billions of years, in spite of their chaotic variations characterized by a Lyapunov time of only 5 million years and the lack of integrals of motion able to constrain their dynamics. To open a window on such long-term behavior, we compute the entire Lyapunov spectrum of a forced secular model of the inner planets. We uncover a hierarchy of characteristic exponents that spans two orders of magnitude, manifesting a slow-fast dynamics with a broad separation of timescales. A systematic analysis of the Fourier harmonics of the Hamiltonian, based on computer algebra, reveals three symmetries that characterize the strongest resonances responsible for the orbital chaos. These symmetries are broken only by weak resonances, leading to the existence of quasi-integrals of motion that are shown to relate to the smallest Lyapunov exponents. A principal component analysis of the orbital solutions independently confirms that the quasi-integrals are among the slowest degrees of freedom of the dynamics. Strong evidence emerges that they effectively constrain the chaotic diffusion of the orbits, playing a crucial role in the statistical stability over the Solar System lifetime.
## I Introduction
The planetary orbits in the inner Solar System (ISS) are chaotic, with a Lyapunov time distributed around 5 million years (Myr) [1; 2; 3; 4]. Still, they are statistically very stable over a timescale that is a thousand times longer. The probability that the eccentricity of Mercury exceeds 0.7, leading to catastrophic events (i.e., close encounters, collisions, or ejections of planets), is only about 1% over the next 5 billion years (Gyr) [5; 6; 7]. The dynamical half-life of Mercury orbit has recently been estimated at 30-40 billion years [4; 7]. A disparity of nearly four orders of magnitude between the Lyapunov time and the timescale of dynamical instability is intriguing, since the chaotic variations of the orbits of the inner planets cannot be constrained _a priori_. While the total energy and angular momentum of the Solar System are conserved, the disproportion of masses between the outer and inner planets implies that unstable states of the ISS are in principle easily realizable through exchanges of these quantities. The surprising stability of the ISS deserves a global picture in which it can emerge more naturally.
To our knowledge, the only study addressing the timescale separation in the long-term dynamics of the ISS is based on the simplified secular dynamics of a massless Mercury [8]: All the other planets are frozen on regular quasi-periodic orbits; secular interactions are expanded to first order in masses and degree 4 in eccentricities and inclinations; an _a priori_ choice of the relevant terms of the Hamiltonian is made. The typical instability time of about 1 Gyr [8; 9] is, however, too short and in significant contrast with realistic numerical integrations of the Solar System, which show a general increase of the instability rate with the complexity of the dynamical model [7]. We have shown that truncating the secular Hamiltonian of the ISS at degree 4 in eccentricities and inclinations results in an even more stable dynamics, with an instability rate at 5 Gyr that drops by orders of magnitude when compared to the full system[10]. From the perspective of these latest findings, the small probability of 1% of an instability over the age of the Solar System may be naturally regarded as a perturbative effect of terms of degree 6 and higher. Clearly, the striking stability of the dynamics at degree 4 is even more impressive in the present context, and remains to be explained.
A strong separation in dynamical timescales is not uncommon among classical quasi-integrable systems [e.g., 11; 12]. This is notably evinced by the Fermi-Pasta-Ulam-Tsingou (FPUT) problem, which deals with a chain of coupled weakly-anharmonic oscillators [13]. Far from Kolmogorov-Arnold-Moser (KAM) and Nekhoroshev regimes (as is likely to be pertinent to the ISS, see Sect. III), one can generally state that the exponential divergence of close trajectories occurring over a Lyapunov time is mostly tangent to the invariant tori defined by the action variables of the underlying integrable problem, and hence contributes little to the diffusion in the action space [14; 15]. In other words, the Lyapunov time and the diffusion/instability time scale differently with the size of the terms that break integrability, and this can result in very different timescales [12]. However, this argument is as general as poorly satisfactory in addressing quantitatively the timescale separation in a problem as complex as the present one. Moreover, even though order-of-magnitude estimates of the chaotic diffusion in the ISS suggest that it may take hundreds of millions of years to reach the destabilizing secular resonance \(g_{1}-g_{5}\) [16], the low probability of an instability over 5 Gyr still remains unexplained [4]. Establishing more precisely why the ISS is statistically stable over a timescale comparable to its age is a valuable step in understanding the secular evolution of planetary systems through metastable states [4; 17][18]. With its 8 secular degrees of freedom (d.o.f.), this system also constitutes a peculiar bridge between the low-dimensional dynamics often addressed in celestial mechanics and the systems with a large number of bodies studied in statistical mechanics: It cannot benefit from the straightforward application of standard methods of the two fields [e.g., 19, Appendix A].
This work aims to open a window on the long-term statistical behavior of the inner planet orbits. Section II briefly recalls the dynamical model of forced secular ISS introduced in Ref. [4]. Section III presents the numerical computation of its Lyapunov spectrum. Section IV introduces the quasi-symmetries of the resonant harmonics of the Hamiltonian and the corresponding quasi-integrals (QIs) of motion. Section V establishes a geometric connection between the quasi-integrals and the slowest d.o.f. of the dynamics via a principal component analysis (PCA) of the orbital solutions. Section VI states the implications of the new findings on the long-term stability of the ISS. We finally discuss the connections with other classical quasi-integrable systems and the methods used in this work.
## II Dynamical model
The long-term dynamics of the Solar System planets consists essentially of the slow precession of their perihelia and nodes, driven by secular, orbit-averaged gravitational interactions [2; 20]. At first order in planetary masses, the secular Hamiltonian corrected for the leading contribution of general relativity reads [e.g. 4; 21]
\[\widehat{H}=-\sum_{i=1}^{8}\left[\sum_{l=1}^{i-1}\left\langle\frac{Gm_{i}m_{l }}{\left\|\mathbf{r}_{i}-\mathbf{r}_{l}\right\|}\right\rangle+\frac{3G^{2}m_{0}^{2}m _{i}}{c^{2}a_{i}^{2}\sqrt{1-e_{i}^{2}}}\right]. \tag{1}\]
The planets are indexed in order of increasing semi-major axes \((a_{i})_{i=1}^{8}\), \(m_{0}\) and \(m_{i}\) are the Sun and planet masses, respectively, \(e_{i}\) the eccentricities, \(G\) the gravitational constant and \(c\) the speed of light. The vectors \(\mathbf{r}_{i}\) are the heliocentric positions of the planets, and the bracket operator represents the averaging over the mean longitudes resulting from the elimination of the non-resonant Fourier harmonics of the \(N\)-body Hamiltonian [4; 21]. Hamiltonian (1) generates Gauss's dynamics of Keplerian rings [4; 22], whose semi-major axes \(a_{i}\) are constants of motion of the secular dynamics.
By developing the 2-body perturbing function [23; 24] in the computer algebra system TRIP [25; 26], the secular Hamiltonian can be systematically expanded in series of the Poincare rectangular coordinates in complex form,
\[\begin{split} x_{i}&=\sqrt{\Lambda_{i}}\sqrt{1-\sqrt{1-e_{i}^{2}}}\,\mathrm{E}^{j\varpi_{i}},\\ y_{i}&=\sqrt{2\Lambda_{i}}\left(1-e_{i}^{2}\right)^{\frac{1}{4}}\sin(\mathcal{I}_{i}/2)\,\mathrm{E}^{j\Omega_{i}},\end{split} \tag{2}\]
where \(\Lambda_{i}=\mu_{i}[G(m_{0}+m_{i})a_{i}]^{1/2}\), \(\mu_{i}=m_{0}m_{i}/(m_{0}+m_{i})\) being the reduced masses of the planets, \(\mathcal{I}_{i}\) the inclinations, \(\varpi_{i}\) the longitudes of the perihelia and \(\Omega_{i}\) the longitudes of the nodes[27]. Pairs \((x_{i},-j\overline{x}_{i})\) and \((y_{i},-j\overline{y}_{i})\) are canonically conjugate momentum-coordinate variables. When truncating at a given total degree \(2n\) in eccentricities and inclinations, the expansion provides Hamiltonians \(\widehat{H}_{2n}=\widehat{H}_{2n}[(x_{i},\bar{x}_{i},y_{i},\bar{y}_{i})_{i=1}^ {8}]\) that are multivariate polynomials.
Valuable insight into the dynamics of the inner planets is provided by the model of a forced ISS recently proposed [4]. It exploits the great regularity of the long-term motion of the outer planets [2; 20; 28] to predetermine their orbits to a quasi-periodic form:
\[x_{i}(t)=\sum_{l=1}^{M_{i}}\tilde{x}_{il}\,\mathrm{E}^{j\mathbf{m}_{il}\cdot\mathbf{ \omega}_{0}t},\;y_{i}(t)=\sum_{l=1}^{N_{i}}\tilde{y}_{il}\,\mathrm{E}^{j\mathbf{n} _{il}\cdot\mathbf{\omega}_{0}t}, \tag{3}\]
for \(i\in\{5,6,7,8\}\), where \(t\) denotes time, \(\tilde{x}_{il}\) and \(\tilde{y}_{il}\) are complex amplitudes, \(\mathbf{m}_{il}\) and \(\mathbf{n}_{il}\) integer vectors, and \(\mathbf{\omega}_{\mathrm{o}}=(g_{5},g_{6},g_{7},g_{8},s_{6},s_{7},s_{8})\) represents the septuple of the constant fundamental frequencies of the outer orbits. Frequencies and amplitudes of this Fourier decomposition are established numerically by frequency analysis [29; 30] of a comprehensive orbital solution of the Solar System [4; Appendix D]. Gauss's dynamics of the forced ISS is obtained by substituting the predetermined time dependence in Eq. (1),
\[\mathcal{H}=\widehat{H}[(x_{i},y_{i})_{i=1}^{4},(x_{i}=x_{i}(t),y_{i}=y_{i}(t) )_{i=5}^{8}], \tag{4}\]
so that \(\mathcal{H}=\mathcal{H}[(x_{i},y_{i})_{i=1}^{4},t]\). The resulting dynamics consists of two d.o.f. for each inner planet, corresponding to the \(x_{i}\) and \(y_{i}\) variables, respectively. Therefore, the forced secular ISS is described by 8 d.o.f. and an explicit time dependence. As a result of the forcing from the outer planets, no trivial integrals of motion exist and its orbital solutions live in a 16-dimensional phase space.
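For concreteness, evaluating the predetermined outer-planet forcing of Eq. (3) amounts to summing a finite set of complex Fourier terms. A minimal sketch with a single illustrative term is given below; the amplitudes, wave vectors and all numerical values are placeholders, not the fitted solution of Ref. [4].

```python
import numpy as np

def outer_orbit(t, amplitudes, wave_vectors, omega_o):
    """Evaluate the quasi-periodic form of Eq. (3) at times t.

    amplitudes   : complex amplitudes of the Fourier decomposition,
    wave_vectors : integer vectors (one row per term),
    omega_o      : fundamental frequencies (g5, g6, g7, g8, s6, s7, s8).
    """
    freqs = wave_vectors @ omega_o           # frequency of each term
    return np.sum(amplitudes[:, None] * np.exp(1j * np.outer(freqs, t)), axis=0)

# Toy example: a single term rotating at g5 (~4.25 arcsec/yr in the real system).
g5 = 4.25 * np.pi / (180 * 3600)             # rad/yr
omega_o = np.array([g5, 0, 0, 0, 0, 0, 0])
amps = np.array([0.01 + 0.0j])
vecs = np.array([[1, 0, 0, 0, 0, 0, 0]])
t = np.linspace(0.0, 1e6, 5)                 # years
print(np.abs(outer_orbit(t, amps, vecs, omega_o)))   # constant modulus 0.01
```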
A truncated Hamiltonian \(\mathcal{H}_{2n}\) for the forced ISS is readily obtained by substituting Eq. (3) in the truncated Hamiltonian \(\widehat{H}_{2n}\) of the entire Solar System. At the lowest degree, \(\mathcal{H}_{2}\) generates a linear, forced Laplace-Lagrange (LL) dynamics. This can be analytically integrated by introducing complex proper mode variables \((u_{i},v_{i})_{i=1}^{4}\) via a time-dependent canonical transformation \((x_{i},-j\overline{x}_{i})\rightarrow(u_{i},-j\overline{u}_{i})\), \((y_{i},-j\overline{y}_{i})\rightarrow(v_{i},-j\overline{v}_{i})\)[4]. Action-angle pairs \((\mathfrak{X}_{i},\chi_{i})\), \((\Psi_{i},\psi_{i})\) are introduced as
\[u_{i}=\sqrt{\mathfrak{X}_{i}}\,\mathrm{E}^{-j\chi_{i}},\;v_{i}=\sqrt{\Psi_{i}}\,\mathrm{E}^{-j\psi_{i}}. \tag{5}\]
When expressed in the proper modes, the truncated Hamiltonian can be expanded as a finite Fourier series:
\[\mathbb{H}_{2n}(\mathbf{I},\mathbf{\theta},t)=\sum_{\mathbf{k},\mathbf{\ell}}\widetilde{ \mathbb{H}}_{2n}^{\mathbf{k},\mathbf{\ell}}(\mathbf{I})\mathrm{E}^{j(\mathbf{k}\cdot\mathbf{\theta} +\mathbf{\ell}\cdot\mathbf{\phi}(t))}, \tag{6}\]
where \(\mathbf{I}=(\mathbf{\mathfrak{X}},\mathbf{\Psi})\) and \(\mathbf{\theta}=(\mathbf{\chi},\mathbf{\psi})\) are the 8-dimensional vectors of the action and angle variables, respectively, and we introduce the external angles \(\mathbf{\phi}(t)=-\mathbf{\omega}_{0}t\). The
wave vectors \((\mathbf{k},\mathbf{\ell})\) belong to a finite subset of \(\mathbb{Z}^{8}\times\mathbb{Z}^{7}\). At degree 2, one has \(\mathbb{H}_{2}=-\mathbf{\omega}_{\rm LL}\cdot\mathbf{I}\), where \(\mathbf{\omega}_{\rm LL}=(\mathbf{g}_{\rm LL},\mathbf{s}_{\rm LL})\in\mathbb{R}^{4}\times \mathbb{R}^{4}\) are the LL fundamental precession frequencies of the inner planet perihelia and nodes. Hamiltonian \(\mathbb{H}_{2n}\) is in quasi-integrable form.
The quasi-periodic form of the outer orbits in Eq. (3) contains harmonics of order higher than one, that is, \(\|\mathbf{m}_{il}\|_{1}>1\) and \(\|\mathbf{n}_{il}\|_{1}>1\) for some \(i\) and \(l\), where \(\|\cdot\|_{1}\) denotes the 1-norm. Therefore, the dynamics of \(\mathcal{H}_{2n}\) and \(\mathbb{H}_{2n}\) are not exactly the same [4]. Still, the difference is irrelevant for the results of this work, so we treat the two Hamiltonians as equivalent from now on. Despite the simplifications behind Eqs. (1) and (3), the forced secular ISS has been shown to constitute a realistic model that is consistent with the predictions of reference integrations of the Solar System [2; 5; 6; 20]. It correctly reproduces the finite-time maximum Lyapunov exponent (FT-MLE) and the statistics of the high eccentricities of Mercury over 5 Gyr [4]. Table 1 presents a summary of the different Hamiltonians and corresponding dynamics we consider in this work.
## III Lyapunov spectrum
Ergodic theory provides a way, through the Lyapunov characteristic exponents (LCEs), to introduce a fundamental set of timescales for any differentiable dynamical system \(\dot{\mathbf{z}}=\mathbf{F}(\mathbf{z},t)\) defined on a phase space \(\mathcal{P}\subseteq\mathbb{R}^{P}\)[31; 32; 33; 34]. If \(\mathbf{\Phi}(\mathbf{z},t)\) denotes the associated flow and \(\mathbf{z}(t)=\mathbf{\Phi}(\mathbf{z}_{0},t)\) the orbit that emanates from the initial condition \(\mathbf{z}_{0}\), the LCEs \(\lambda_{1}\geq\lambda_{2}\geq\dots\geq\lambda_{P}\) are the logarithms of the eigenvalues of the matrix \(\mathbf{\Lambda}(\mathbf{z}_{0})\) defined as
\[\lim_{t\to\infty}\big{(}\mathbf{M}(\mathbf{z}_{0},t)^{\rm T}\mathbf{M}(\mathbf{z}_{0},t)\big{)}^{1/2t}=\mathbf{\Lambda}(\mathbf{z}_{0}), \tag{7}\]
where \(\mathbf{M}(\mathbf{z}_{0},t)=\partial\mathbf{\Phi}/\partial\mathbf{z}_{0}\) is the fundamental matrix and T stands for transposition [32; 33]. Introducing the Jacobian \(\mathbf{J}=\partial\mathbf{F}/\partial\mathbf{z}\), the fundamental matrix allows us to write the solution of the variational equations \(\dot{\mathbf{\zeta}}=\mathbf{J}(\mathbf{z}(t),t)\mathbf{\zeta}\) as \(\mathbf{\zeta}(t)=\mathbf{M}(\mathbf{z}_{0},t)\mathbf{\zeta}_{0}\), where \(\mathbf{\zeta}(t)\in\mathcal{T}_{\mathbf{z}(t)}\mathcal{P}\) belongs to the tangent space of \(\mathcal{P}\) at point \(\mathbf{z}(t)\) and \(\mathbf{\zeta}_{0}=\mathbf{\zeta}(0)\). The multiplicative ergodic theorem of Oseledec [31] states that if \(\rho\) is an ergodic (i.e. invariant and indecomposable) measure for the time evolution and has compact support, then the limit in Eq. (7) exists for \(\rho\)-almost all \(\mathbf{z}_{0}\), and the LCEs are \(\rho\)-almost everywhere constant and only depend on \(\rho\)[32]. Moreover, one has
\[\lim_{t\to\infty}\frac{1}{t}\log\|\mathbf{M}(\mathbf{z}_{0},t)\mathbf{\zeta}_{0}\|= \lambda^{(i)}\;\;\text{if}\;\;\mathbf{\zeta}_{0}\in E^{(i)}_{\mathbf{z}_{0}}\setminus E ^{(i+1)}_{\mathbf{z}_{0}}, \tag{8}\]
for \(\rho\)-almost all \(\mathbf{z}_{0}\), where \(\lambda^{(1)}>\lambda^{(2)}>\ldots\) are the LCEs without repetition by multiplicity, and \(E^{(i)}_{\mathbf{z}_{0}}\) is the subspace of \(\mathbb{R}^{P}\) corresponding to the eigenvalues of \(\mathbf{\Lambda}(\mathbf{z}_{0})\) that are smaller than or equal to \(\exp\lambda^{(i)}\), with \(\mathcal{T}_{\mathbf{z}_{0}}\mathcal{P}=E^{(1)}_{\mathbf{z}_{0}}\supset E^{(2)}_{\bm {z}_{0}}\supset\cdots\). The specific choice of the \(\mathbb{R}^{P}\)-norm \(\|\cdot\|\) in Eq. (8) is irrelevant [32; 34]. Once the LCEs have been introduced, a characteristic timescale can be defined from each positive exponent as \(\lambda_{i}^{-1}\). In the case of the maximum Lyapunov exponent \(\lambda_{1}\), the corresponding timescale is commonly called the Lyapunov time.
For a Hamiltonian system with \(p\) d.o.f. (i.e., \(P=2p\)), the fundamental matrix is symplectic and the set of LCEs is symmetric with respect to zero, that is,
\[\Delta\lambda_{i}:=\lambda_{i}+\lambda_{2p-i+1}=0\;\;\text{for all}\;\;1\leq i \leq p. \tag{9}\]
If the Hamiltonian is time independent, a pair of exponents vanishes. In general, the existence of an integral of motion \(\mathfrak{C}=\mathfrak{C}(\mathbf{z})\) implies a pair of null exponents, one of them being associated with the direction of the tangent space that is normal to the surface of constant \(\mathfrak{C}\) [e.g. 33].
The ISS is a clear example of a dynamical system that is out of equilibrium. Its phase-space density diffuses ceaselessly over any meaningful timescale [5; 28]. Therefore, the infinite time limit in Eq. (7) is not physically relevant. The non-null probability of a collisional evolution of the inner planets [5; 6; 35; 36] implies that such a limit does not even exist as a general rule. Most of the orbital solutions stemming from the current knowledge of the Solar System are indeed asymptotically unstable [4; 7]. Physically relevant quantities are the finite-time LCEs (FT-LCEs), \(\lambda_{i}(\mathbf{z}_{0},t)\), defined from the eigenvalues \(\mathfrak{m}_{1}\geq\mathfrak{m}_{2}\geq\dots\geq\mathfrak{m}_{P}\) of the time-dependent symmetric positive-definite matrix \(\mathbf{M}(\mathbf{z}_{0},t)^{\rm T}\mathbf{M}(\mathbf{z}_{0},t)\) as
\[\lambda_{i}(\mathbf{z}_{0},t)=\frac{1}{2t}\log\mathfrak{m}_{i}(\mathbf{z}_{0},t). \tag{10}\]
The time dependence of the phase-space density translates into the fact that no ergodic measure is realized by the dynamics, and the FT-LCEs depend on the initial condition \(\mathbf{z}_{0}\) in a non-trivial way [37].
The FT-MLE of the forced secular ISS has been numerically computed over 5 Gyr for an ensemble of stable orbital solutions of the Hamiltonian \(\mathcal{H}\) with initial
\begin{table}
\begin{tabular}{l l c} Hamiltonian & Description & Reference \\ \hline \(\mathcal{H}\) & Gauss’s dynamics in complex Poincaré variables & Eq. (4) \\ \(\mathcal{H}_{2n}\) & Truncation of Gauss’s dynamics at total degree \(2n\) in eccentricities and inclinations & Ref. [4] \\ \(\mathbb{H}_{2n}\) & Truncated dynamics in the action-angle variables of the Laplace-Lagrange dynamics \(\mathcal{H}_{2}\) & Eq. (6) and Ref. [4] \\ \(\mathbb{H}_{2n}^{\bullet}\) & Fourier harmonics that involve outer planet modes other than \(g_{5}\) are dropped from \(\mathbb{H}_{2n}\) & Eq. (24) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the different models of forced secular ISS considered in this work. Gauss’s dynamics results from first-order averaging of the \(N\)-body Hamiltonian over the mean longitudes of the planets. The dynamics generated by \(\mathcal{H}_{2n}\) and \(\mathbb{H}_{2n}\) are practically equivalent and treated as such. The \(\mathbb{H}_{2n}^{\bullet}\) models are introduced and discussed in Sec. IV.4.
conditions very close to their nominal values [38]. Its long-term distribution is quite large and does not shrink over time [4, Fig. 3]. At 5 Gyr, the probability density function (PDF) of the Lyapunov time peaks at around 4 Myr; it decays very fast below 2 Myr, while its 99th percentile reaches 10 Myr [4, Fig. 4]. The significant width of the distribution relates to the aforementioned out-of-equilibrium dynamics of the ISS, as the FT-MLE of each orbital solution continues to vary over time. The dependence of the exponent on the initial condition is associated with the non-ergodic exploration of the phase space by the dynamics. As a remark, the fact that the lower tail of the FT-MLE distribution, estimated from more than 1000 solutions, does not extend to zero implies that invariant (KAM) tori are rare in a neighborhood of the nominal initial conditions (if they exist at all). This fact rules out a Nekhoroshev regime for the dynamics [12, 39], in agreement with the indications of a multidimensional resonance overlapping at the origin of chaos [19, 40]. In such a case, the long dynamical half-life of the ISS should not be interpreted in terms of an exponentially-slow Arnold diffusion.
Computations of the FT-MLE of the Solar System planets have been reported for more than thirty years [1, 3]. However, the retrieval of the entire spectrum of exponents still represents a challenging task. Integrating an \(N\)-body orbital solution for the Sun and the eight planets that spans 5 Gyr requires on the order of a month of wall-clock time [e.g. 41]. The computation by a standard method of the entire Lyapunov spectrum for a system with \(p\) d.o.f. also requires the simultaneous time evolution of a set of \(2p\) tangent vectors [42]. On top of that, a computation of the exponents for an ensemble of trajectories is advisable for a non-ergodic dynamics [4]. These considerations show how demanding the computation of the Lyapunov spectrum of the Solar System planets is. By contrast, a 5-Gyr integration of the forced ISS takes only a couple of hours for Gauss's dynamics (\(\mathcal{H}\)) and a few minutes at degree 4 (\(\mathcal{H}_{4}\)). This dynamical model thus provides a unique opportunity to compute all the FT-LCEs that are mainly related to the secular evolution of the inner orbits.
We compute the Lyapunov spectrum of the truncated forced ISS using the standard method of Benettin _et al._[43], based on Gram-Schmidt orthogonalization. Manipulation of the truncated Hamiltonian \(\mathcal{H}_{2n}\) in TRIP allows us to systematically derive the equations of motion and the corresponding variational equations, which we integrate through an Adams PECE method of order 12 and a timestep of 250 years. Parallelization of the time evolution of the 16 tangent vectors, between two consecutive reorthonormalization steps of the Benettin _et al._[43] algorithm, significantly reduces the computation time. Figure 1a shows the positive FT-LCEs expressed as angular frequencies over the next 10 Gyr for the Hamiltonian truncated at degree 4. The FT-LCEs are computed for 150 stable solutions, with initial conditions very close to the nominal values of Gauss's dynamics and random sets of initial tangent vectors [19, Appendix C]. The figure shows the [5th, 95th] percentile range of the marginal PDF of each exponent estimated from the ensemble of solutions. For large times, the exponents of each solution become independent of the initial tangent vectors, the renormalization time, and the norm chosen for the phase-space vectors (see Appendix A and Fig. 8a). In this asymptotic regime, the Benettin _et al._[43] algorithm purely retrieves the FT-LCEs as defined in Eq. (10), and the width of their distributions only reflects the out-of
Figure 1: Positive FT-LCEs \(\lambda_{i}\) of the forced secular ISS from Hamiltonians \(\mathcal{H}_{4}\) (a) and \(\mathbb{H}_{4}^{\bullet}\) [Eq. (24)] (b), and corresponding characteristic timescales \(\lambda_{i}^{-1}\). The bands represent the [5th, 95th] percentile range of the marginal PDFs estimated from an ensemble of 150 stable orbital solutions with very close initial conditions. The lines denote the distribution medians. The \(\mathbb{H}_{4}^{\bullet}\) model is introduced and discussed in Sec. IV.4.
equilibrium dynamics of the system. The convergence of our numerical computation is also assessed by verifying the symmetry of the spectrum stated in Eq. (9) (see Appendix A and Fig. 8b).
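As an illustration of the algorithm, the following Python sketch reproduces its structure on a toy flow; the forced pendulum, the fixed-step integrator, the step sizes and the renormalization interval are illustrative assumptions, not the secular equations or the numerical setup used above.

```python
# A minimal sketch, assuming a generic ODE dz/dt = F(z, t): evolve the state
# together with a set of tangent vectors (variational equations) and
# re-orthonormalize them periodically (QR, i.e. Gram-Schmidt), accumulating
# the logarithms of the stretching factors. The toy flow (a forced pendulum),
# the RK4 integrator and all numerical values are placeholders.
import numpy as np

def flow_rhs(t, z, eps=0.9, omega_f=0.7):
    q, p = z
    return np.array([p, -np.sin(q) + eps * np.cos(omega_f * t)])

def jacobian(t, z):
    q, _ = z
    return np.array([[0.0, 1.0], [-np.cos(q), 0.0]])

def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def benettin_spectrum(z0, t_max, dt=1e-2, renorm_every=100):
    P = z0.size
    rng = np.random.default_rng(0)
    Q = np.linalg.qr(rng.standard_normal((P, P)))[0]    # random orthonormal tangent vectors
    log_stretch = np.zeros(P)

    def rhs(t, y):                                      # state + variational equations
        z, W = y[:P], y[P:].reshape(P, P)
        return np.concatenate([flow_rhs(t, z), (jacobian(t, z) @ W).ravel()])

    y, t = np.concatenate([z0, Q.ravel()]), 0.0
    for step in range(1, int(t_max / dt) + 1):
        y = rk4_step(rhs, t, y, dt)
        t += dt
        if step % renorm_every == 0:
            Q, R = np.linalg.qr(y[P:].reshape(P, P))    # Gram-Schmidt reorthonormalization
            log_stretch += np.log(np.abs(np.diag(R)))
            y[P:] = Q.ravel()
    return log_stretch / t                              # finite-time Lyapunov exponents

print("FT-LCEs:", benettin_spectrum(np.array([0.1, 0.0]), t_max=500.0))
```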
The spectrum in Fig. 1a has distinctive features. A set of intermediate exponents follows the MLE, ranging from \(0.1\) to \(0.01^{\prime\prime}\,\mathrm{yr}^{-1}\), while the smallest ones fall below \(0.01^{\prime\prime}\,\mathrm{yr}^{-1}\). Figure 1a reveals the existence of a hierarchy of exponents and corresponding timescales that spans two orders of magnitude, down to a median value of \(\lambda_{8}^{-1}\approx 500\) Myr. The number of positive exponents confirms that no integral of motion exists, as one may expect from the forcing of the outer planets. We also compute the spectrum for the Hamiltonian truncated at degree 6. As shown in Appendix A (Fig. 9), the asymptotic distributions of the exponents are very similar to those at degree 4. This result suggests that the long-term diffusion of the phase-space density is very similar in the two cases. The different instability rates of the two truncated dynamics mainly relate to the geometry of the instability boundary, which is closer to the initial position of the system for \(\mathcal{H}_{6}\) than for \(\mathcal{H}_{4}\) [7].
The relevance of the Lyapunov spectrum in Fig. 1a emerges from the fact that the existence of an integral of motion implies a pair of vanishing exponents. This is a pivotal point: By a continuity argument, the presence of positive exponents much smaller than the leading one constitutes a compelling indication that there are dynamical quantities whose chaotic decoherence over initially very close trajectories takes place over timescales much longer than the Lyapunov time. In the long term, such quantities should diffuse much more slowly than any LL action variable. Therefore, Fig. 1a suggests that the secular orbits of the inner planets are characterized by a slow-fast dynamics that is much more pronounced than the well-known timescale separation arising from the LL integrable approximation. The existence of slow quantities, which are _a priori_ complicated functions of the phase-space variables, is crucial in the context of finite-time stability, as they can effectively constrain the long-term diffusion of the phase-space density toward the unstable states. The next section addresses the emergence of these slow quantities from the symmetries of the Fourier harmonics that compose the Hamiltonian.
## IV Quasi-integrals of motion
The emergence of a chaotic behavior of the planetary orbits can be explained in terms of the pendulum-like dynamics generated by each Fourier harmonic that composes the Hamiltonian in Eq. (6) [e.g. 44]. One can write \(\mathbb{H}_{2n}(\mathbf{I},\mathbf{\theta},t)=\widetilde{\mathbb{H}}_{2n}^{\mathbf{0}, \mathbf{0}}(\mathbf{I})+\sum_{i=1}^{\mathcal{M}_{2n}}\mathcal{F}_{i}(\mathbf{I},\mathbf{ \theta},t)\), with
\[\mathcal{F}_{i}(\mathbf{I},\mathbf{\theta},t)=\widetilde{\mathbb{H}}_{2n}^{\mathbf{k}^{i},\mathbf{\ell}^{i}}(\mathbf{I})\,\mathrm{E}^{j\left(\mathbf{k}^{i}\cdot\mathbf{\theta}+\mathbf{ \ell}^{i}\cdot\mathbf{\phi}(t)\right)}+\text{c.c.}, \tag{11}\]
where \((\mathbf{k}^{i},\mathbf{\ell}^{i})\neq(\mathbf{0},\mathbf{0})\), \(\mathcal{M}_{2n}\) is the number of harmonics in \(\mathbb{H}_{2n}\) with a non-null wave vector, and c.c. stands for complex conjugate. Chaos arises from the interaction of resonant harmonics, that is, those harmonics \(\mathcal{F}_{i}\) whose frequency combination \(\mathbf{k}^{i}\cdot\dot{\mathbf{\theta}}+\mathbf{\ell}^{i}\cdot\dot{\mathbf{\phi}}(t)\) vanishes at some point along the motion. Using the computer algebra system TRIP, the harmonics of \(\mathbb{H}_{10}\) that enter into resonance along the 5-Gyr nominal solution of Gauss's dynamics have been systematically retrieved, together with the corresponding time statistics of the resonance half-widths \(\Delta\omega\) [19]. The resonances have then been ordered by decreasing time median of their half-widths. The resulting ranking of resonances is denoted as \(\mathcal{R}_{1}\) from now on. Table 2 recalls the 30 strongest resonances that are active for more than 1% of the 5-Gyr time span of the orbital solution. The wave vector of each harmonic is identified by the corresponding combination of frequency labels \((g_{i},s_{i})_{i=1}^{8}\), that is, \(\mathbf{k}\cdot(g_{1},g_{2},g_{3},g_{4},s_{1},s_{2},s_{3},s_{4})+\mathbf{\ell}\cdot\mathbf{\omega}_{0}\). Table 2 also shows
\begin{table}
\begin{tabular}{c c c c c} \(i\) & Fourier harmonic \(\mathcal{F}_{i}\) & \(\mathcal{O}_{i}\) & \(\tau_{i}^{\text{res}}\) & \(\Delta\omega_{i}\) \\ \hline \multicolumn{5}{c}{\(\vdots\)} \\ \end{tabular}
\end{table}
Table 2: Top of ranking \(\mathcal{R}_{1}\). The 30 strongest resonances of \(\mathbb{H}_{10}\) along the 5-Gyr nominal solution of Gauss’s dynamics that are active for more than 1% of the time span.
the order of each harmonic, defined as the even integer \(\mathcal{O}=\|(\mathbf{k},\mathbf{\ell})\|_{1}\). The support of the asymptotic ensemble distribution of the FT-MLE shown in Fig. 1a overlaps in a robust way with that of the time distribution of the half-width of the strongest resonances. In other words,
\[2\pi\lambda_{1}\approx\Delta\omega^{\mathcal{R}_{1}}, \tag{12}\]
where \(\Delta\omega^{\mathcal{R}_{1}}\) stands for the half-width of the uppermost resonances of ranking \(\mathcal{R}_{1}\). Equation (12) shows the dynamical sources of chaos in the ISS by connecting the top of the Lyapunov spectrum with the head of the resonance spectrum. Computer algebra allows us to establish such a connection in an unbiased way despite the multi-dimensional nature of the dynamics. We stress that such analysis is built on the idea that the time statistics of the resonant harmonics along a 5-Gyr ordinary orbital solution should be representative of their ensemble statistics (defined by a set of stable solutions with very close initial conditions) at some large time of the order of billions of years. This assumption was inspired by the good level of stationarity that characterizes the ensemble distribution of the MLE beyond 1 Gyr [4, 19], and that extends to the entire spectrum in Fig. 1a.
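As an illustration, the following sketch flags the times at which a given harmonic \((\mathbf{k},\mathbf{\ell})\) becomes resonant along a solution, that is, when its frequency combination crosses zero; the finite-difference frequency estimate and the random placeholder data are illustrative only, and the actual analysis of Ref. [19] additionally estimates the resonance half-widths.

```python
# Hedged sketch: flag when a harmonic (k, ell) is resonant along a secular
# solution, i.e. when k · dtheta/dt + ell · dphi/dt crosses zero, with
# phi(t) = -omega0 * t. All numerical data below are placeholders.
import numpy as np

def resonance_times(theta, t, k, ell, omega0):
    """theta: (n, 8) unwrapped, low-pass-filtered inner angles (rad);
    t: (n,) times (yr); k, ell: integer wave vectors; omega0: (7,) outer
    forcing frequencies (rad/yr)."""
    dtheta_dt = np.gradient(theta, t, axis=0)           # numerical secular frequencies
    sigma_dot = dtheta_dt @ k - ell @ omega0             # k·(dtheta/dt) + ell·(dphi/dt)
    crossing = np.flatnonzero(np.sign(sigma_dot[:-1]) * np.sign(sigma_dot[1:]) < 0)
    return t[crossing], sigma_dot

# toy usage with placeholder data
rng = np.random.default_rng(1)
t = np.arange(0.0, 1.0e6, 500.0)                         # 1 Myr sampled every 500 yr
theta = np.cumsum(rng.normal(1.0e-5, 1.0e-5, (t.size, 8)), axis=0)
k = np.array([1, -1, 0, 0, 0, 0, 0, 0])                  # an illustrative combination g1 - g2
ell = np.zeros(7, dtype=int)
times, sdot = resonance_times(theta, t, k, ell, omega0=np.zeros(7))
print("number of zero crossings of the frequency combination:", times.size)
```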
We remark that, strictly speaking, ranking \(\mathcal{R}_{1}\) is established on the Fourier harmonics of the Lie-transformed Hamiltonian \(\mathbb{H}^{\prime}_{2n}\)[19, Appendix G]. New canonical variables are indeed defined to transform \(\mathbb{H}_{2n}\) in a Birkhoff normal form to degree 4. The goal is to let the interactions of the terms of degree 4 in \(\mathbb{H}_{2n}\) appear more explicitly in the amplitudes of the harmonics of higher degrees in \(\mathbb{H}^{\prime}_{2n}\), the physical motivation being that the non-linear interaction of the harmonics at degree 4 constitutes the primary source of chaos [19]. Keeping in mind the quasi-identity nature of the Lie transform, here we drop for simplicity the difference between the two Hamiltonians. Moreover, all the new analyses of this work involve the original variables of Eq. (5).
### Quasi-symmetries of the resonant harmonics
In addition to the dynamical interactions responsible for the chaotic behavior of the orbits, Table 2 provides information on the geometry of the dynamics in the action variable space. Ranking the Fourier harmonics allows us to consider partial Hamiltonians constructed from a limited number \(m\) of leading terms [7, 19], that is,
\[\mathbb{H}_{2n,m}=\widetilde{\mathbb{H}}_{2n}^{\mathbf{0},\mathbf{0}}+\sum_{i= 1}^{m}\mathcal{F}_{i}. \tag{13}\]
The dynamics of a Hamiltonian reduced to a small set of harmonics is generally characterized by several symmetries and corresponding integrals of motion. We are interested in how these symmetries are progressively destroyed when one increases the number of terms taken into account in Eq. (13).
Consider a set of \(m\) harmonics of \(\mathbb{H}_{2n}\) and a dynamical quantity that is a linear combination of the action variables, that is,
\[C_{\mathbf{\gamma}}=\mathbf{\gamma}\cdot\mathbf{I}, \tag{14}\]
\(\mathbf{\gamma}\in\mathbb{R}^{8}\) being a parameter vector. From Eq. (11), the partial contribution of the \(m\) harmonics to the time derivative of \(C_{\mathbf{\gamma}}\) along the flow of \(\mathbb{H}_{2n}\) is
\[\dot{C}_{\mathbf{\gamma},m}=2\sum_{i=1}^{m}\mathbf{\gamma}\cdot\mathbf{k}^{i}\operatorname{Im}\{\widetilde{\mathbb{H}}_{2n}^{\mathbf{k}^{i},\mathbf{\ell}^{i}}(\mathbf{I})\operatorname{E}^{j\left(\mathbf{k}^{i}\cdot\mathbf{\theta}+\mathbf{\ell}^{i}\cdot\mathbf{\phi}(t)\right)}\}, \tag{15}\]
and \(\dot{C}_{\mathbf{\gamma}}=\dot{C}_{\mathbf{\gamma},\mathcal{M}_{2n}}\), where \(\mathcal{M}_{2n}\) is the total number of harmonics with a non-null wave vector that appear in \(\mathbb{H}_{2n}\). Any quantity \(C_{\mathbf{\gamma}}\) with \(\mathbf{\gamma}\cdot\mathbf{k}^{i}=0\) is conserved by the one-d.o.f. dynamics generated by the single harmonic \(\mathcal{F}_{i}\). In other words, such a quantity would be an integral of motion if \(\mathcal{F}_{i}\) were the only harmonic to appear in the Hamiltonian. Considering now \(m\) different harmonics, these do not contribute to the change of the quantity \(C_{\mathbf{\gamma}}\) if \(\mathbf{\gamma}\perp\operatorname{span}(\mathbf{k}^{1},\mathbf{k}^{2},\dots,\mathbf{k}^{m})\), that is, if the vector \(\mathbf{\gamma}\) belongs to the orthogonal complement of the linear subspace of \(\mathbb{R}^{8}\) spanned by the wave vectors \((\mathbf{k}^{i})_{i=1}^{m}\). We also consider the quantity
\[C_{\mathbf{\gamma}}^{\prime}=\mathbb{H}_{2n}+\mathbf{\gamma}\cdot\mathbf{I}. \tag{16}\]
Because of the explicit time dependence in the Hamiltonian, the partial contribution of a set of \(m\) harmonics to the time derivative of \(C_{\mathbf{\gamma}}^{\prime}\) along the flow of \(\mathbb{H}_{2n}\) is
\[\dot{C}_{\mathbf{\gamma},m}^{\prime}=2\sum_{i=1}^{m}(\mathbf{\gamma}\cdot\mathbf{k}^{i}+\mathbf{\ell}^{i}\cdot\mathbf{\omega}_{0})\operatorname{Im}\{\widetilde{\mathbb{H}}_{2n}^{\mathbf{k}^{i},\mathbf{\ell}^{i}}(\mathbf{I})\operatorname{E}^{j\left(\mathbf{k}^{i}\cdot\mathbf{\theta}+\mathbf{\ell}^{i}\cdot\mathbf{\phi}(t)\right)}\}, \tag{17}\]
and one has \(\dot{C}_{\mathbf{\gamma}}^{\prime}=\dot{C}_{\mathbf{\gamma},\mathcal{M}_{2n}}^{\prime}\). Quantity \(C_{\mathbf{\gamma}}^{\prime}\) is unchanged by the \(m\) harmonics if \(\mathbf{\gamma}\cdot\mathbf{k}^{i}+\mathbf{\ell}^{i}\cdot\mathbf{\omega}_{0}=0\) for \(i\in\{1,2,\dots,m\}\). Dynamical quantities \(C_{\mathbf{\gamma}}\) or \(C_{\mathbf{\gamma}}^{\prime}\) that are unaffected by a given set of leading harmonics, that is, with null partial contribution in Eq. (15) or (17), are denoted as quasi-integrals of motion from now on. More specifically, we build our analysis on ranking \(\mathcal{R}_{1}\), since the resonant harmonics are those responsible for changes that accumulate stochastically over long timescales, driving chaotic diffusion.
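A minimal sketch of Eq. (15) is the following: it sums the partial contributions of a set of harmonics to \(\dot{C}_{\mathbf{\gamma}}\) and verifies that a vector \(\mathbf{\gamma}\) orthogonal to all the wave vectors yields a conserved quantity; the complex amplitudes are placeholder callables, not the actual series coefficients.

```python
# Hedged sketch of Eq. (15): partial contribution of m Fourier harmonics to the
# time derivative of a linear quantity C_gamma = gamma · I. The amplitudes
# Htilde(I) are placeholder callables standing in for the series coefficients.
import numpy as np

def Cdot_partial(gamma, I, theta, phi, harmonics):
    """harmonics: list of (k, ell, Htilde) with k in Z^8, ell in Z^7,
    Htilde a callable I -> complex amplitude."""
    total = 0.0
    for k, ell, Htilde in harmonics:
        phase = np.exp(1j * (k @ theta + ell @ phi))
        total += 2.0 * (gamma @ k) * np.imag(Htilde(I) * phase)
    return total

# a quantity with gamma orthogonal to every wave vector is untouched: Cdot = 0
harmonics = [
    (np.array([1, -1, 0, 0, 0, 0, 0, 0]), np.zeros(7, dtype=int), lambda I: 1e-9 + 0j),
    (np.array([0, 0, 1, -1, 0, 0, 0, 0]), np.zeros(7, dtype=int), lambda I: 2e-9 + 0j),
]
gamma_conserved = np.array([1, 1, 1, 1, 0, 0, 0, 0])     # gamma · k = 0 for both harmonics
I = np.full(8, 1e-3); theta = np.linspace(0.0, 1.0, 8); phi = np.zeros(7)
print(Cdot_partial(gamma_conserved, I, theta, phi, harmonics))  # prints 0.0
```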
In the framework of the aforementioned considerations, the resonances listed in Table 2 possess three different symmetries.
_First symmetry._ The rotational invariance of the entire Solar System implies the d'Alembert rule \(\sum_{i=1}^{8}k_{i}+\sum_{i=1}^{7}\ell_{i}=0\), where \(\mathbf{k}=(k_{1},k_{2},\dots,k_{8})\) and \(\mathbf{\ell}=(\ell_{1},\ell_{2},\dots,\ell_{7})\) [21, 24, 45, 46]. Moreover, the Jupiter-dominated eccentricity mode \(g_{5}\) is the only fundamental Fourier mode of the outer planet forcing to appear in Table 2. The quantity
\[\mathcal{E}_{2n}:=C_{g_{5}\mathbf{1}_{8}}^{\prime}=\mathbb{H}_{2n}+g_{5}{\sum_{i= 1}^{4}}(\mathfrak{X}_{i}+\Psi_{i}), \tag{18}\]
with \(\mathbf{1}_{8}=(1,1,\dots,1)\in\mathbb{R}^{8}\), is therefore unaffected by the resonances listed in Table 2. In an equivalent way, the
time-dependent canonical transformation \(\mathbf{\theta}\to\mathbf{\theta}+g_{5}t\,\mathbf{1}_{8}\), with unchanged action variables, allows us to remove the explicit time dependence in these harmonics. Quantity \(\mathcal{E}_{2n}\) coincides with the transformed Hamiltonian and the harmonics in Table 2 do not contribute to its time derivative.
_Second symmetry._ We write the eccentricity and inclination parts of the harmonic wave vectors explicitly, that is, \(\mathbf{k}=(\mathbf{k}^{\rm{ecc}},\mathbf{k}^{\rm{inc}})\) with \(\mathbf{k}^{\rm{ecc}},\mathbf{k}^{\rm{inc}}\in\mathbb{Z}^{4}\). One can visually check that the harmonics in Table 2 verify the relation \(\sum_{i=1}^{4}k_{i}^{\rm{inc}}=0\), where \(\mathbf{k}^{\rm{inc}}=(k_{1}^{\rm{inc}},\ldots,k_{4}^{\rm{inc}})\). Therefore, denoting \(\mathbf{\gamma}_{1}=(\mathbf{0}_{4},\mathbf{1}_{4})\), the quantity
\[\mathcal{C}_{\rm{inc}}:=C_{\mathbf{\gamma}_{1}}=\Psi_{1}+\Psi_{2}+\Psi_{3}+\Psi_{4} \tag{19}\]
is conserved by these resonances. \(\mathcal{C}_{\rm{inc}}\) is the angular momentum deficit (AMD) [47] contained in the inclination d.o.f. This symmetry can then be interpreted as a remnant of the conservation of the AMD of the entire (secular) Solar System. We remark that the AMD contained in the eccentricity d.o.f., \(\mathcal{C}_{\rm{ecc}}=\sum_{i=1}^{4}\mathfrak{X}_{i}\), is not invariant under the leading resonances because of the eccentricity forcing mainly exerted by Jupiter through the mode \(g_{5}\). The conservation of \(\mathcal{C}_{\rm{inc}}\) depends on two facts: the inclination modes \(s_{6},s_{7},s_{8}\) of the external forcing do not appear in Table 2; low-order harmonics like \(2g_{1}-s_{1}-s_{2}\), \(2g_{1}-2s_{1}\), and \(2g_{1}-2s_{2}\) are never resonant (even if they can raise large quasi-periodic contributions), so that the two AMD reservoirs \(\mathcal{C}_{\rm{ecc}}\) and \(\mathcal{C}_{\rm{inc}}\) are decoupled in Table 2. We recall that the absence of an inclination mode \(s_{5}\) in the external forcing relates to the fixed direction of the angular momentum of the entire Solar System [21, 2, 46].
_Third symmetry._ The first two symmetries could be expected to some extent on the basis of physical intuition of the interaction between outer and inner planets. However, it is not easy to even visually guess the third one from Table 2. Consider the \(30\times 8\) matrix \(\mathbf{\mathsf{K}}_{30}\) whose rows are the wave vectors \((\mathbf{k}^{i})_{i=1}^{30}\) of the listed resonances. A singular value decomposition shows that the rank of \(\mathbf{\mathsf{K}}_{30}\) is equal to 6. Therefore, the linear subspace \(V_{30}={\rm span}(\mathbf{k}^{1},\mathbf{k}^{2},\ldots,\mathbf{k}^{30})\) spanned by the wave vectors has dimension 6. A Gram-Schmidt orthogonalization allows us to determine two linearly independent vectors that span its orthogonal complement \(V_{30}^{\perp}\). One choice consists in \(V_{30}^{\perp}={\rm span}(\mathbf{\gamma}_{2},\mathbf{\gamma}_{2}^{\perp})\), with
\[\begin{split}\mathcal{C}_{2}:=C_{\mathbf{\gamma}_{2}}&=-\mathfrak{X}_{3}-\mathfrak{X}_{4}+\Psi_{1}+\Psi_{2}+2\Psi_{3}+2\Psi_{4},\\ \mathcal{C}_{2}^{\perp}:=C_{\mathbf{\gamma}_{2}^{\perp}}&=\phantom{-}\mathfrak{X}_{3}+\mathfrak{X}_{4}+\Psi_{1}+\Psi_{2}.\end{split} \tag{20}\]
Since the second symmetry clearly requires that \(\mathbf{\gamma}_{1}\in V_{30}^{\perp}\), the three quantities \(\mathcal{C}_{\rm{inc}},\mathcal{C}_{2},\mathcal{C}_{2}^{\perp}\) are not independent and one has indeed \(\mathcal{C}_{\rm{inc}}=(\mathcal{C}_{2}+\mathcal{C}_{2}^{\perp})/2\). We remark that \((\mathcal{C}_{2}-\mathcal{C}_{2}^{\perp})/2=-\mathfrak{X}_{3}-\mathfrak{X}_{ 4}+\Psi_{3}+\Psi_{4}\). The additional symmetry can thus be interpreted in terms of a certain decoupling between the d.o.f. 3,4 and 1,2, representing in the proper modes the Earth-Mars and Mercury-Venus subsystems, respectively.
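The linear algebra behind this construction can be sketched as follows: check the inclination sum rule row by row, obtain the rank of the wave-vector matrix from its SVD, and read a basis of the orthogonal complement off the right-singular vectors associated with vanishing singular values; two illustrative wave vectors stand in for the 30 rows of Table 2.

```python
# Hedged sketch: rank of the wave-vector matrix and a basis of its orthogonal
# complement. Only two illustrative wave vectors are used in place of K_30.
import numpy as np

K = np.array([
    [0, 0, 1, -1, 0, 0, 0, 0],      # placeholder wave vector (e.g. g3 - g4)
    [2, -2, 0, 0, 0, 0, 0, 0],      # placeholder wave vector (e.g. 2g1 - 2g2)
])

# second symmetry: the inclination components sum to zero for every row
print("sum of k^inc per row:", K[:, 4:].sum(axis=1))

# rank and orthogonal complement via SVD: the right-singular vectors beyond the
# rank span the null space of K, i.e. the complement of span(k^1, ..., k^m)
U, s, Vt = np.linalg.svd(K)
rank = int(np.sum(s > 1e-12 * s.max()))
V_perp = Vt[rank:]                   # rows form an orthonormal basis of the complement
print("rank =", rank, " dim of orthogonal complement =", V_perp.shape[0])

# any gamma in the complement gives C_gamma = gamma · I conserved by these
# harmonics, since gamma · k^i = 0 for every row
print("max |K @ gamma| over the basis:", np.abs(K @ V_perp.T).max())
```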
The aforementioned symmetries, that exactly characterize the resonances listed in Table 2, naturally represent quasi-symmetries when considering the entire spectrum of resonances \(\mathcal{R}_{1}\). They are indeed broken at some point by weak resonances (see Sect. IV.3). Quantities \(\mathcal{E}_{2n}\), \(\mathcal{C}_{\rm{inc}}\), and \(\mathcal{C}_{2}\) are the corresponding QIs of motion. The persistence of the three symmetries under the 30 leading resonances is somewhat surprising. Concerning \(\mathcal{C}_{\rm{inc}}\) and \(\mathcal{C}_{2}\), for example, one might reasonably expect that, since the ISS has 8 d.o.f., the subspace spanned by the wave vectors of just a dozen of harmonics should already have maximal dimension, destroying all possible symmetries.
We remark that, differently from \(\mathcal{C}_{\rm{inc}}\) and \(\mathcal{C}_{2}\), the quantity \(\mathcal{E}_{2n}\) is a non-linear function of the action-angle variables. However, as far as stable orbital evolutions are concerned, the convergence of the series expansion of the Hamiltonian is sufficiently fast that the linear LL approximation \(\mathcal{E}_{2}=\mathbb{H}_{2}+g_{5}\mathbf{1}_{8}\cdot\mathbf{I}=C_{\mathbf{\gamma}_{3}}\), with \(\mathbf{\gamma}_{3}=-\mathbf{\omega}_{\rm{LL}}+g_{5}\mathbf{1}_{8}\), reproduces reasonably well \(\mathcal{E}_{2n}\) along the flow of \(\mathbb{H}_{2n}\) for \(n>1\). The vector \(\mathbf{\gamma}_{3}\) is used in Sect. V, together with \(\mathbf{\gamma}_{1}\) and \(\mathbf{\gamma}_{2}\), to deal with the geometry of the linear action subspace spanned by the QIs. The explicit expressions of these vectors are given in Appendix B. We mention that, differently from \(\mathbf{\gamma}_{1}\) and \(\mathbf{\gamma}_{2}\), the components of \(\mathbf{\gamma}_{3}\) are not integer and they have the dimension of a frequency.
### Slow variables
The QIs of motion \(\mathcal{E}_{2n},\mathcal{C}_{\rm{inc}},\mathcal{C}_{2}\) are clearly strong candidates for slow variables once evaluated along the orbital solutions. In what follows, to assess the slowness of a dynamical quantity when compared to the typical variations of the action variables, we consider the variance of its time series along a numerical solution.
We define the dimensionless QIs
\[\widehat{\mathcal{C}}_{\rm{inc}}=\frac{\mathcal{C}_{\rm{inc}}}{\|\mathbf{\gamma}_{1 }\|\mathrm{C}_{0}},\quad\widehat{\mathcal{C}}_{2}=\frac{\mathcal{C}_{2}}{\|\mathbf{ \gamma}_{2}\|\mathrm{C}_{0}},\quad\widehat{\mathcal{E}}_{2n}=\frac{\mathcal{E} _{2n}}{\|\mathbf{\gamma}_{3}\|\mathrm{C}_{0}}, \tag{21}\]
where \(\mathrm{C}_{0}\) stands for the current total AMD of the inner planets, that is, the value of \(\mathcal{C}_{\rm{ecc}}+\mathcal{C}_{\rm{inc}}\) at time zero. We stress that, by introducing the unit vectors \(\widehat{\mathbf{\gamma}}_{i}=\mathbf{\gamma}_{i}/\|\mathbf{\gamma}_{i}\|\) for \(i\in\{1,2,3\}\), one has \(\widehat{\mathcal{C}}_{\rm{inc}}=C_{\widehat{\mathbf{\gamma}}_{1}}/\mathrm{C}_{0}\) and \(\widehat{\mathcal{C}}_{2}=C_{\widehat{\mathbf{\gamma}}_{2}}/\mathrm{C}_{0}\). At degree 2, one also has \(\widehat{\mathcal{E}}_{2}=C_{\widehat{\mathbf{\gamma}}_{3}}/\mathrm{C}_{0}\). We then consider the ensembles of numerical integrations of \(\mathcal{H}_{4}\) and \(\mathcal{H}_{6}\), with very close initial conditions and spanning 100 Gyr in the future, that have been presented in Ref. [7]. The top row of Fig. 2 shows the time evolution over 5 Gyr of the dimensionless QIs and of two components of the dimensionless action vector \(\widehat{\mathbf{I}}=\mathbf{I}/\mathrm{C}_{0}\) along the nominal orbital solutions of the two ensembles. We subtract from each time series its mean over the plotted time span. The time series are low-pass filtered by employing the Kolmogorov-Zurbenko (KZ) filter with three iterations of the moving average [4, 48]. A cutoff frequency of 1 Myr\({}^{-1}\) is chosen to highlight the long-term diffusion that can be hidden by short-time quasi-periodic oscillations. This is in line with our definition of quasi-integrals based
on the contribution from resonant harmonics only. Figure 2 clearly shows that the QIs are slowly-diffusing variables when compared to an arbitrary function of the action variables. The behavior of the QIs along the nominal orbital solutions of Fig. 2 is confirmed by a statistical analysis in Appendix C. Figure 10 shows the time evolution of the distributions of the same quantities as Fig. 2 over the stable orbital solutions of the entire ensembles of 1080 numerical integrations of Ref. [7]. Figure 11 details the growth of the QI dispersion over time.
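A minimal sketch of the KZ low-pass filtering used here, assuming a simple centered moving average iterated three times, is the following; the window length, roughly matching a 1 Myr\({}^{-1}\) cutoff, and the toy signal are illustrative choices.

```python
# Hedged sketch of the Kolmogorov-Zurbenko (KZ) low-pass filter: k iterations
# of a centered moving average. Window length and toy data are placeholders.
import numpy as np

def kz_filter(x, window, iterations=3):
    """Centered moving average applied `iterations` times along axis 0."""
    kernel = np.ones(window) / window
    y = np.asarray(x, dtype=float)
    for _ in range(iterations):
        if y.ndim == 1:
            y = np.convolve(y, kernel, mode="same")
        else:
            y = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, y)
    return y

# toy usage: a slow drift plus a fast quasi-periodic oscillation
t = np.arange(0.0, 50e6, 1e3)                        # 50 Myr sampled every kyr
x = 1e-3 * (t / t[-1]) + 1e-4 * np.sin(2 * np.pi * t / 2e5)
x_slow = kz_filter(x, window=2001)                   # ~2 Myr window, illustrative cutoff
print(x_slow[::10000][:5])
```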
We remark that \(\mathcal{C}_{2}\) and \(\mathcal{E}_{2n}\) show very similar time evolutions along stable orbital solutions, as can be seen in the top row of Fig. 2. This is explained by the interesting observation that the components of the unit vectors \(\widehat{\mathbf{\gamma}}_{2}\) and \(\widehat{\mathbf{\gamma}}_{3}\) differ from each other by only a few percent, as shown in Appendix B. However, we stress that the two vectors are in fact linearly independent: \(\mathcal{C}_{2}\) does not depend on the actions \(\mathfrak{X}_{1}\) and \(\mathfrak{X}_{2}\), while \(\mathcal{E}_{2n}\) does. The two QIs move away from each other when high eccentricities of Mercury are reached, that is, for large excursions of the Mercury-dominated action \(\mathfrak{X}_{1}\).
### Weak resonances and Lyapunov spectrum
A fundamental result from Table 2 is that the symmetries introduced in Sect. IV.1 are still preserved by resonances that have half-widths an order of magnitude smaller than those of the strongest terms. It is natural to extract from ranking \(\mathcal{R}_{1}\) the weak resonances that break the three symmetries. A new ranking of resonances \(\mathcal{R}_{2}\) is defined in this way. Table 3 reports the 10 strongest symmetry-breaking resonances that change \(\mathcal{E}_{2n},\mathcal{C}_{\text{inc}},\mathcal{C}_{2}\), respectively. As in Table 2, only harmonics that are resonant for more than 1% of the 5-Gyr time span of the nominal solution of Gauss's dynamics are shown. The leading symmetry-breaking resonances have half-widths of about \(0.01^{\prime\prime}\,\text{yr}^{-1}\). For each QI, the dominant contribution comes from harmonics involving Fourier modes of the outer planet forcing other than \(g_{5}\): the Saturn-dominated modes \(g_{6},s_{6}\) and the modes \(g_{7},s_{7}\) mainly associated to Uranus. In the case of \(\mathcal{C}_{\text{inc}}\), there is also a contribution that starts at about \(0.006^{\prime\prime}\,\text{yr}^{-1}\) with \(\mathcal{F}_{8}=4g_{1}-g_{2}-g_{3}-s_{1}-2s_{2}+s_{4}\) and comes from high-order internal resonances, that is, resonances that involve only the d.o.f. of the inner planets. We remark that the decrease of the resonance half-width with the index of the harmonic in Table 3 is steeper for \(\mathcal{C}_{\text{inc}}\) than for \(\mathcal{E}_{2n},\mathcal{C}_{2}\), and is accompanied by a greater presence of high-order resonances. This may notably explain why the secular variations of \(\mathcal{C}_{\text{inc}}\) are somewhat smaller in the top row of Fig. 2. We finally point out the important symmetry-breaking role of the modes \(g_{7},s_{7}\), representing the forcing mainly exerted by Uranus. Differently from what one might suppose, these modes cannot be completely neglected when addressing the long-term diffusion of ISS. This recalls the role of the modes \(s_{7}\) and \(s_{8}\) in the spin dynamics of Venus [49], and is basically a manifestation of the long-range nature of the gravitational interaction.
As we state in Sect. III, a pair of Lyapunov exponents would vanish if there were an exact integral of motion.
In the presence of a weakly broken symmetry, one may expect a small positive Lyapunov exponent whose value relates to the half-width of the strongest resonances driving the time variation of the corresponding QI. Such an argument is a natural extension of the correspondence between the FT-MLE and the top of the resonance spectrum given in Eq. (12). Comparison of Table 3 with the Lyapunov spectrum in Fig. 1a shows that the time statistics of the half-widths of the symmetry-breaking resonances of ranking \(\mathcal{R}_{2}\) overlaps with the ensemble distribution of the three smallest FT-LCEs, that is, \(\lambda_{6},\lambda_{7},\lambda_{8}\). One can indeed write
\[2\pi\lambda_{6}\approx\Delta\omega^{\mathcal{R}_{2}}, \tag{22}\]
where \(\Delta\omega^{\mathcal{R}_{2}}\) stands for the half-width of the uppermost resonances of ranking \(\mathcal{R}_{2}\). Table 3 and Fig. 1a suggest a relation between the QIs and the smallest Lyapunov exponents:
\[\lambda_{6},\lambda_{7},\lambda_{8}\longleftrightarrow\mathcal{E}_{2n}, \mathcal{C}_{\text{inc}},\mathcal{C}_{2}. \tag{23}\]
Equation (23) is not a one-to-one correspondence, nor should it be understood as an exact relation since, for example, \(\lambda_{6}\) is not well separated from the larger exponents. Its physical meaning is that the QIs are among the slowest d.o.f. of the ISS dynamics. Such a claim is one of the core points of this work. In Sect. V, we show its statistical validity in the geometric framework established by a principal component analysis of the orbital solutions. Moreover, Sect. IV.4 shows that Eq. (23) can be stated more precisely in the case of a simplified dynamics that underlies \(\mathbb{H}_{2n}\). We remark that \(\mathcal{E}_{2n},\mathcal{C}_{\text{inc}},\mathcal{C}_{2}\) constitute a set of three QIs that are independent and _nearly in involution_, and it is thus meaningful to associate three different Lyapunov exponents with them. On the one hand, the independence is easily checked at degree 2 as the vectors \(\mathbf{\gamma}_{1},\mathbf{\gamma}_{2},\mathbf{\gamma}_{3}\) are linearly independent. On the other hand, one has the Poisson bracket \(\{\mathcal{C}_{\text{inc}},\mathcal{C}_{2}\}=0\), since the two quantities are functions of the action variables only. One also has \(\{\mathcal{E}_{2n},\mathcal{C}_{\text{inc}}\}=\{\mathbb{H}_{2n},\mathcal{C}_{ \text{inc}}\}=\dot{\mathcal{C}}_{\text{inc}}\) and \(\{\mathcal{E}_{2n},\mathcal{C}_{2}\}=\dot{\mathcal{C}}_{2}\). Only weak resonances contribute to
\begin{table}
\begin{tabular}{c c c c c} \(i\) & Fourier harmonic \(\mathcal{F}_{i}\) & \(\mathcal{O}_{i}\) & \(\tau_{i}^{\text{res}}\) & \(\Delta\omega_{i}\) \\ \hline \multicolumn{5}{c}{\(\mathcal{E}_{2n}\)} \\ \hline \multicolumn{5}{c}{No resonances} \\ \hline \multicolumn{5}{c}{\(\mathcal{C}_{\text{inc}}\)} \\ \hline \multicolumn{5}{c}{\(\vdots\)} \\ \hline \multicolumn{5}{c}{\(\mathcal{C}_{2}\)} \\ \hline \multicolumn{5}{c}{\(\vdots\)} \\ 9 & \(\cdots\,g_{4}+2g_{5}\) & 10 & 0.02\% & \(9e-6_{6e-6}^{0e-6}\) \\ 10 & \(2g_{1}-5g_{2}+g_{3}+2g_{5}\) & 10 & 0.14\% & \(6e-6_{6e-6}^{0e-6}\) \\ \end{tabular}
\end{table}
Table 4: Top of ranking \(\mathcal{R}_{3}\). First 10 symmetry-breaking resonances of \(\mathbb{H}_{10}\) along the 5-Gyr nominal solution of Gauss’s dynamics, that only involve \(g_{5}\) among the external modes and change \(\mathcal{E}_{2n}\), \(\mathcal{C}_{\text{inc}}\), and \(\mathcal{C}_{2}\), respectively.
\begin{table}
\begin{tabular}{c c c c c} \(i\) & Fourier harmonic \(\mathcal{F}_{i}\) & \(\mathcal{O}_{i}\) & \(\tau_{i}^{\text{res}}\) & \(\Delta\omega_{i}\) \\ \hline \multicolumn{5}{c}{\(\mathcal{E}_{2n}\)} \\ \hline
1 & \(g_{1}+g_{2}-2g_{5}+s_{2}-s_{7}\) & 6 & 1\% & \(0.018_{0.016}^{0.028}\) \\
2 & \(g_{1}-2g_{2}+g_{6}-s_{2}+s_{6}\) & 6 & 4\% & \(0.017_{0.008}^{0.024}\) \\
3 & \(g_{3}-g_{6}+s_{2}-s_{4}\) & 4 & 10\% & \(0.017_{0.008}^{0.022}\) \\
4 & \(g_{5}-g_{7}+s_{3}-s_{4}\) & 4 & 5\% & \(0.017_{0.021}^{0.022}\) \\
5 & \(g_{4}-g_{6}+s_{2}-s_{4}\) & 4 & 4\% & \(0.016_{0.021}^{0.022}\) \\
6 & \(g_{2}-2g_{4}+g_{6}\) & 4 & 12\% & \(0.016_{0.009}^{0.027}\) \\
7 & \(g_{1}-g_{3}-g_{5}\cdots\) & & & \\ \multicolumn{5}{c}{\(\vdots\)} \\ \hline \multicolumn{5}{c}{\(\mathcal{C}_{\text{inc}}\)} \\ \hline \multicolumn{5}{c}{\(\vdots\)} \\ \hline \multicolumn{5}{c}{\(\mathcal{C}_{2}\)} \\ \hline \multicolumn{5}{c}{\(\vdots\)} \\ \end{tabular}
\end{table}
Table 3: Top of ranking \(\mathcal{R}_{2}\). First 10 symmetry-breaking resonances of \(\mathbb{H}_{10}\) along the 5-Gyr nominal solution of Gauss’s dynamics that change \(\mathcal{E}_{2n}\), \(\mathcal{C}_{\text{inc}}\), and \(\mathcal{C}_{2}\), respectively.
these Poisson brackets and the three QIs are therefore nearly in involution.
### New truncation of the Hamiltonian
The fundamental role of the external modes \(g_{6},g_{7},s_{6},s_{7}\) in Table 3 raises the question of which symmetry-breaking resonances persist if one excludes all the Fourier harmonics that involve external modes other than \(g_{5}\). Therefore, we define a new ranking \(\mathcal{R}_{3}\) by extracting such resonances from ranking \(\mathcal{R}_{2}\). Table 4 reports the 10 strongest resonances per each broken symmetry. The difference with respect to Table 3 is manifest. As \(g_{5}\) is the only external mode remaining, there are no resonances left that can contribute to the time evolution of \(\mathcal{E}_{2n}\). For the remaining two QIs, the only harmonics that appear in Table 4 are of order 8 or higher, and this is accompanied by a significant drop in the half-width of the leading resonances. In the case of \(\mathcal{C}_{\text{inc}}\), the half-width of the uppermost resonances is now around \(0.005^{\prime\prime}\,\text{yr}^{-1}\). One can appreciate that the activation times \(\tau^{\text{res}}\) of the resonances do not exceed a few percent, differently from Table 3. The most impressive change is, however, related to \(\mathcal{C}_{2}\): only harmonics of order 10 appear in Table 4 and the half-width of the uppermost resonances drops by two orders of magnitude. We stress that such harmonics are resonant for very short periods of time along the 5 Gyr spanned by the nominal solution of Gauss's dynamics. To retrieve the time statistics of the resonances affecting \(\mathcal{C}_{2}\), we indeed choose to repeat the computations of Ref. [19] by increasing the cutoff frequency of the low-pass filter applied to time series of the action-angle variables from \((5\text{ Myr})^{-1}\) to \(1\text{ Myr}^{-1}\)[19, Appendices F.2 and G.5]. The filtered time series have then been resampled with a timestep of 50 kyr. Many harmonics we show in Table 4 and related to \(\mathcal{C}_{2}\) are resonant for a few timesteps and their time statistics is very tentative. More precise estimations of the half-widths should be obtained over an ensemble of different orbital solutions, possibly spanning more than 5 Gyr. In any case, the fundamental point here is the drastic reduction in the size of the uppermost resonances with respect to Table 3, and this is a robust result. We remark that resonances of order 12 and higher may also carry an important contribution at these scales, but they are excluded by the truncation at degree 10 adopted in Ref. [19] to establish the resonant harmonics, so they do not appear in the tables of this work.
These one-to-one correspondences are a particular case of Eq. (23) and support the physical intuition behind it. In Sect. V, we prove the validity of Eq. (27) in the geometric framework established by a principal component analysis of the orbital solutions of \(\mathbb{H}^{\bullet}_{2n}\).
_Numerical integrations._ We compute ensembles of 1080 orbital solutions of the dynamical models \(\mathbb{H}^{\bullet}_{4}\) and \(\mathbb{H}^{\bullet}_{6}\), with initial conditions very close to the nominal ones of Gauss's dynamics and spanning 100 Gyr in the future. This closely follows what we did in Ref. [7] in the case of the models \(\mathcal{H}_{2n}\). The bottom row of Fig. 2 shows the filtered dimensionless QIs along the nominal solutions of the two models over the first 5 Gyr. The hierarchy of the QIs stated in Eq. (27) is manifest. The quantity \(\mathcal{C}_{2}\) has secular variations much slower than \(\mathcal{C}_{\mathrm{inc}}\), while the latter is itself slower with respect to its counterpart in the orbital solutions of \(\mathcal{H}_{2n}\). We remark that, as \(\mathcal{E}^{\bullet}_{2n}\) is an exact integral of motion for the model \(\mathbb{H}^{\bullet}_{2n}\), we do not plot it. From Fig. 2 it is also evident how difficult the retrieval of the short-lasting resonances affecting \(\mathcal{C}_{2}\) from a solution of \(\mathbb{H}^{\bullet}_{2n}\) spanning only a few billion years can be.
The hierarchy of the QIs is confirmed by a statistical analysis in Appendix C. Figure 10 shows the entire time evolution of the distributions of the filtered dimensionless QIs over the stable orbital solutions of the ensembles of 1080 numerical integrations. Figure 11 details the growth of the QI dispersion over time. As suggested by Table 4, the drop in the diffusion rates of the QIs when switching from \(\mathcal{H}_{2n}\) to \(\mathbb{H}^{\bullet}_{2n}\) is manifest.
## V Statistical detection of slow variables
Section IV shows how the slow-fast nature of the ISS dynamics, indicated by the Lyapunov spectrum, emerges from the quasi-symmetries of the resonant harmonics of the Hamiltonian. QIs of motion can be introduced semi-analytically and they constitute slow quantities when evaluated along stable orbital solutions. In this section, we consider the slow variables that can be systematically retrieved from a numerically integrated orbital solution by means of a statistical technique, the principal component analysis. We show that, in the case of the forced secular ISS, the slowest variables are remarkably close to the QIs, and this can be established in a precise geometric framework.
### Principal component analysis
PCA is a widely used classical technique for multivariate analysis [50; 51]. For a given dataset, PCA aims to find an orthogonal linear transformation of the variables such that the new coordinates offer a more condensed and representative view of the data. The new variables are called principal components (PCs). They are uncorrelated and ordered according to decreasing variance: the first PC and last one have, respectively, the largest and the smallest variance of any linear combination of the original variables. While one is typically interested in the PCs of largest variance, in this work we employ the variance of the time series of a dynamical quantity to assess its slowness when compared to the typical variations of the action variables (see Sect. IV.2). We thus perform a PCA of the action variables \(\mathbf{I}\) and focus on the last PCs, as they give a pertinent statistical definition of slow variables. We stress that, when coupled to a low-pass filtering of the time series, the statistical variance provides a measure of chaotic diffusion.
_Implementation._ Our procedure for the PCA is described briefly as follows [for general details see, e.g., 52; 53]. Let \(\mathbf{I}(t)=(\mathbf{\mathfrak{X}}(t),\mathbf{\Psi}(t))\) be the 8-dimensional time series of the action variables evaluated along a numerical solution of the equations of motion. As in Sect. IV.2, we apply the KZ low-pass filter with three iterations of the moving average and a cutoff frequency of 1 Myr\({}^{-1}\) to obtain the filtered time series \(\tilde{\mathbf{I}}(t)\) [4; 48]. In this way, the short-term quasi-periodic oscillations are mostly suppressed, which better reveals the chaotic diffusion over longer timescales. We finally define the mean-subtracted filtered action variables over the time interval \([t_{0},t_{0}+T]\) as \(\bar{\mathbf{I}}(t)=\tilde{\mathbf{I}}(t)-n^{-1}\sum_{i=0}^{n-1}\tilde{\mathbf{I}}(t_{0}+i\Delta t)\), where the mean is estimated by discretization of the time series with a sampling step \(\Delta t\) such that \(T=(n-1)\Delta t\). The discretized time series over the given interval is stored in an \(8\times n\) matrix:
\[\mathbf{D}=[\bar{\mathbf{I}}(t_{0}),\ \bar{\mathbf{I}}(t_{0}+\Delta t),\ \ldots,\ \bar{\mathbf{I}}(t_{0}+(n-1)\Delta t)]. \tag{28}\]
The PCA of the data matrix \(\mathbf{D}\) consists in a linear transformation \(\mathbf{P}=\mathbf{A}^{\mathrm{T}}\mathbf{D}\), where \(\mathbf{A}\) is an \(8\times 8\) orthogonal matrix (i.e. \(\mathbf{A}^{-1}=\mathbf{A}^{\mathrm{T}}\)) defined as follows. By writing \(\mathbf{A}=[\mathbf{a}_{1},\ldots,\mathbf{a}_{8}]\), the column vectors \(\mathbf{a}_{i}\in\mathbb{R}^{8}\) are chosen to be the normalized eigenvectors of the sample covariance matrix, in order of decreasing eigenvalues: \((n-1)^{-1}\mathbf{D}\mathbf{D}^{\mathrm{T}}=\mathbf{A}\mathbf{\Sigma}\mathbf{A }^{\mathrm{T}}\), where \(\mathbf{\Sigma}=\mathrm{diag}(\sigma_{1},\ldots,\sigma_{8})\) and \(\sigma_{1}\geq\cdots\geq\sigma_{8}\). The PCs are defined as the new variables after the transformation, that is, \(\mathrm{PC}_{i}=\mathbf{a}_{i}\cdot\mathbf{I}\) with \(i\in\{1,\ldots,8\}\). The uncorrelatedness and the ordering of the PCs can be easily seen from the diagonal form of their sample covariance matrix, \((n-1)^{-1}\mathbf{P}\mathbf{P}^{\mathrm{T}}=\mathbf{\Sigma}\), from which it follows that the variance of \(\mathrm{PC}_{i}\) is \(\sigma_{i}\).
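The procedure reduces to a few lines of linear algebra, as in the following sketch: mean-subtract the data matrix, eigendecompose the sample covariance, and order the PCs by decreasing variance; the toy data, in which two rows are artificially frozen, stand in for the filtered action time series.

```python
# Hedged sketch of the PCA of the filtered action variables. The toy data
# matrix below is a placeholder for the filtered, discretized time series.
import numpy as np

def pca(D):
    """D: (8, n) data matrix; returns (A, Sigma, P) with decreasing variances."""
    Dc = D - D.mean(axis=1, keepdims=True)            # mean-subtracted rows
    n = Dc.shape[1]
    C = Dc @ Dc.T / (n - 1)                           # 8 x 8 sample covariance
    eigval, eigvec = np.linalg.eigh(C)                # ascending eigenvalues
    order = np.argsort(eigval)[::-1]                  # decreasing variance
    Sigma, A = eigval[order], eigvec[:, order]        # A = [a_1, ..., a_8]
    P = A.T @ Dc                                      # principal components
    return A, Sigma, P

# toy usage: 8 action-like time series, two of them nearly frozen (slow)
rng = np.random.default_rng(2)
D = rng.normal(0.0, 1.0, (8, 5000))
D[6] *= 1e-3                                          # artificially slow d.o.f.
D[7] *= 1e-4
A, Sigma, P = pca(D)
print("variances of the PCs:", Sigma)
print("last PC direction (picks the slowest row):", np.round(A[:, -1], 3))
```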
Among all the linear combinations in the action variables \(\mathbf{I}\), the last PC, i.e., \(\mathrm{PC}_{8}\), has the smallest variance over the time interval \([t_{0},t_{0}+T]\) of a given orbital solution. The second last PC, i.e., \(\mathrm{PC}_{7}\), has the second smallest variance and is uncorrelated with \(\mathrm{PC}_{8}\), and so on. It follows that the linear subspace spanned by the last \(k\) PCs is the \(k\)-dimensional subspace of minimum variance: the variance of the sample projection onto this subspace is the minimum among all the subspaces of the same dimension. These properties indicate that the last PCs provide a pertinent statistical definition of slow variables along numerically integrated solutions of a dynamical system. The linear structure of the PCA, in particular, seems adapted to quasi-integrable systems close
to a quadratic Hamiltonian, like the ISS. In such a case, one may reasonably expect that the slow variables are, to a first approximation, linear combinations of the action variables. We remark that the mutual orthogonality allows us to associate a _linear_ d.o.f. to each PC.
_Aggregated sample._ Instead of considering a specific solution, it is also possible to take the same time interval from \(m\) different solutions, and stack them together to form an aggregated sample: \(\mathbf{D}_{\mathrm{agg}}=[\mathbf{D}_{1},\mathbf{D}_{2},\ldots,\mathbf{D}_{m}]\), where \(\mathbf{D}_{i}\) is the data matrix of Eq. (28) for the \(i\)th solution. Since this work deals with a non-stationary dynamics, as the ISS ceaselessly diffuses in the phase space [7], we always consider the same time interval for each of the \(m\) solutions. The aggregated sample is useful in capturing globally the behavior of the dynamics, because it averages out temporary and rare episodes arising along specific solutions.
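Operationally, this amounts to a column-wise stacking of the individual data matrices before the same eigendecomposition, as in the short sketch below (with placeholder matrices).

```python
# Hedged sketch of the aggregated sample: stack the same time interval from m
# solutions column-wise and apply the same PCA to the combined matrix.
import numpy as np

rng = np.random.default_rng(3)
solutions = [rng.normal(0.0, 1.0, (8, 1000)) for _ in range(5)]   # placeholder D_1..D_m
D_agg = np.hstack(solutions)                                       # 8 x (m*n) aggregated matrix
D_agg -= D_agg.mean(axis=1, keepdims=True)
cov = D_agg @ D_agg.T / (D_agg.shape[1] - 1)
variances = np.sort(np.linalg.eigvalsh(cov))[::-1]                 # PC variances, decreasing
print("aggregated PC variances:", variances)
```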
### Principal components and quasi-integrals
Both the QIs and the last PCs represent slow variables, but are established through two different methods. Equations (23) and (27) claim that the QIs found semi-analytically in Sect. IV are among the slowest d.o.f. of the ISS dynamics. This naturally suggests to compare the three QIs with the three last PCs retrieved from numerically integrated orbital solutions. In this part, we first introduce the procedure that we implement to establish a consistent and systematic correspondence between QIs and PCs. We then present both a visual and a quantitative geometric comparison between them.
#### V.2.1 Tweaking the QIs
The three last components \(\mathrm{PC}_{8}\), \(\mathrm{PC}_{7}\), \(\mathrm{PC}_{6}\) are represented by the set of vectors \(S_{\mathrm{PCs}}=\{\mathbf{a}_{8},\mathbf{a}_{7},\mathbf{a}_{6}\}\) belonging to \(\mathbb{R}^{8}\). By construction, these PCs have a linear, hierarchical, and orthogonal structure. In other words: the PCs are linear combinations of the action variables \(\mathbf{I}\); denoting by \(\preceq\) the order of statistical variance, one has \(\mathrm{PC}_{8}\preceq\mathrm{PC}_{7}\preceq\mathrm{PC}_{6}\); the unit vectors \((\mathbf{a}_{i})_{i=6}^{8}\) are orthogonal to each other. On the other hand, the QIs of motion \(\mathcal{C}_{\mathrm{inc}},\mathcal{C}_{2},\mathcal{E}_{2n}\) do not possess these properties. Therefore, we adjust them in such a way as to reproduce the same structure.
_Linearity._ While \(\mathcal{C}_{\mathrm{inc}}\) and \(\mathcal{C}_{2}\) are linear functions of the action variables, \(\mathcal{E}_{2n}\) is not when \(n>1\). Nevertheless, as we explain in Sect. IV.1, as far as one considers stable orbital solutions, the linear LL approximation \(\mathcal{E}_{2}=\mathbf{\gamma}_{3}\cdot\mathbf{I}\) reproduces \(\mathcal{E}_{2n}\) reasonably well. Therefore, we consider the three linear QIs of motion \(\mathcal{C}_{\mathrm{inc}},\mathcal{C}_{2},\mathcal{E}_{2}\), which are respectively represented by the set of \(\mathbb{R}^{8}\)-vectors \(S_{\mathrm{QIs}}=\{\mathbf{\gamma}_{1},\mathbf{\gamma}_{2},\mathbf{\gamma}_{3}\}\). In this way, the 3-dimensional linear subspaces of the action space spanned by the sets \(S_{\mathrm{QIs}}\) and \(S_{\mathrm{PCs}}\) can be compared.
_Ordering._ We define a set of QIs that are ordered by statistical variance, as is the case for the PCs. We follow two different approaches, according to whether the model is \(\mathbb{H}_{2n}^{\bullet}\) in Eq. (24) or \(\mathbb{H}_{2n}\) in Eq. (6) (clearly \(n>1\)).
\(\mathbb{H}_{2n}^{\bullet}\): A strong hierarchy of statistical variances among the QIs emerges from the size of the leading symmetry-breaking resonances in Table 4 and from the orbital solutions in Figs. 2, 10, and 11. One has \(\mathcal{E}_{2n}^{\bullet}\prec\mathcal{C}_{2}\prec\mathcal{C}_{\mathrm{inc}}\). While \(\mathcal{E}_{2n}^{\bullet}\) is an exact non-linear integral of motion, we expect that its linear truncation \(\mathcal{E}_{2}^{\bullet}=\mathcal{E}_{2}\) varies more than \(\mathcal{C}_{2}\) and \(\mathcal{C}_{\mathrm{inc}}\). Therefore, we consider the ordered set of QIs of motion \(\{\mathcal{C}_{2},\mathcal{C}_{\mathrm{inc}},\mathcal{E}_{2}\}\) represented by the ordered set of vectors \(S_{\mathrm{QIs}}^{\prime}=\{\mathbf{\gamma}_{2},\mathbf{\gamma}_{1},\mathbf{\gamma}_{3}\}\).
\(\mathbb{H}_{2n}\): Since the leading resonances affecting the QIs in Table 3 have comparable sizes, there is no clear order of statistical variances that can be inferred. We then implement a systematic approach that orders the QIs by simply inheriting the ordering of the PCs. More precisely, we define a set of ordered vectors \(S_{\mathrm{QIs}}^{\prime}\) through the projections of the three last PCs onto the linear subspace generated by the QIs: \(S_{\mathrm{QIs}}^{\prime}=\{\mathrm{proj}_{S_{\mathrm{QIs}}}(\mathbf{a}_{8}),\mathrm{proj}_{S_{\mathrm{QIs}}}(\mathbf{a}_{7}),\mathrm{proj}_{S_{\mathrm{QIs}}}(\mathbf{a}_{6})\}\) [54]. As a result, the new set of QIs mirrors the hierarchical structure of the PCs. We stress that \(S_{\mathrm{QIs}}^{\prime}\) spans the same subspace of \(\mathbb{R}^{8}\) as \(S_{\mathrm{QIs}}\), since the ordered QIs are just linear combinations of the original ones.
_Orthogonality._ We apply the Gram-Schmidt process to the ordered set \(S_{\mathrm{QIs}}^{\prime}\) to obtain the orthonormal basis \(S_{\mathrm{QIs}}^{\prime\prime}=\{\mathbf{\alpha}_{1},\mathbf{\alpha}_{2},\mathbf{\alpha}_{3}\}\). The set \(S_{\mathrm{QIs}}^{\prime\prime}\) clearly spans the same subspace as \(S_{\mathrm{QIs}}\). Moreover, the Gram-Schmidt process preserves the hierarchical structure, that is, the two \(m\)-dimensional subspaces spanned by the first \(m\leq 3\) vectors of \(S_{\mathrm{QIs}}^{\prime}\) and \(S_{\mathrm{QIs}}^{\prime\prime}\), respectively, are identical.
In the end, we obtain a linear, ordered, and orthogonal set of _modified_ QIs of motion \(\{\mathrm{QI}_{1},\mathrm{QI}_{2},\mathrm{QI}_{3}\}\), where \(\mathrm{QI}_{i}=\mathbf{\alpha}_{i}\cdot\mathbf{I}\).
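The construction can be sketched as follows for the \(\mathbb{H}_{2n}\) case: project the last three PC directions onto the QI subspace, keep the inherited ordering, and apply Gram-Schmidt; the vectors \(\mathbf{\gamma}_{1}\) and \(\mathbf{\gamma}_{2}\) below follow Eqs. (19) and (20), while \(\mathbf{\gamma}_{3}\) and the PC directions are placeholders.

```python
# Hedged sketch of the modified QIs: project the last three PC directions onto
# span(gamma_1, gamma_2, gamma_3), then orthonormalize by Gram-Schmidt.
# gamma_3 and the PC directions are placeholders, not the actual vectors.
import numpy as np

def project_onto_subspace(v, basis):
    """Orthogonal projection of v onto span(rows of basis)."""
    Q = np.linalg.qr(basis.T)[0]          # orthonormal basis of the QI subspace
    return Q @ (Q.T @ v)

def gram_schmidt(vectors):
    out = []
    for v in vectors:
        w = v - sum((u @ v) * u for u in out)
        out.append(w / np.linalg.norm(w))
    return np.array(out)

gammas = np.array([
    [0, 0, 0, 0, 1, 1, 1, 1],                      # gamma_1 (C_inc), Eq. (19)
    [0, 0, -1, -1, 1, 1, 2, 2],                    # gamma_2 (C_2), Eq. (20)
    [0.1, 0.2, 0.3, 0.1, 0.2, 0.1, 0.3, 0.2],      # placeholder for gamma_3 (E_2)
], dtype=float)

rng = np.random.default_rng(4)
a_876 = rng.normal(size=(3, 8))                    # placeholder PC directions a_8, a_7, a_6

ordered = np.array([project_onto_subspace(a, gammas) for a in a_876])
alphas = gram_schmidt(ordered)                     # alpha_1..3 -> QI_i = alpha_i · I
print(np.round(alphas @ alphas.T, 6))              # identity: orthonormal modified QIs
```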
#### V.2.2 Visual comparison
We now visually compare the vectors \(\mathbf{\alpha}_{1,2,3}\) of the modified QIs with the corresponding vectors \(\mathbf{a}_{8,7,6}\) of the last three PCs. We use the ensembles of 1080 numerically integrated orbital solutions of the models \(\mathcal{H}_{4}\) and \(\mathbb{H}_{4}^{\bullet}\) considered in Sects. IV.2 and IV.4, respectively. The nominal solution of each set is denoted as sol. #1 from now on. For the model \(\mathcal{H}_{4}\), we also consider two other solutions: sol. #2 that represents a typical evolution among the 1080 solutions, and sol. #3 representing a rarer one. The particular choice of these two solutions is detailed in Sect. V.2.3.
_Hamiltonian \(\mathbb{H}_{4}^{\bullet}\)._ The modified QIs can be explicitly derived in this case and comprise interpretable physical
quantities. One has QI\({}_{1}\) proportional to \(\mathcal{C}_{2}\) and QI\({}_{2}\) proportional to \(\mathcal{C}_{2}^{\perp}\). Moreover, QI\({}_{3}\) is the component of \(\mathcal{E}_{2}\) that is orthogonal to both \(\mathcal{C}_{2}\) and \(\mathcal{C}_{2}^{\perp}\). Figure 3 shows the comparison between the modified QIs and the corresponding PCs for three different time intervals along sol. \(\#1\) of \(\mathbb{H}_{4}^{\bullet}\) (see Fig. 2 bottom left for its time evolution). The agreement of the pairs (QI\({}_{1}\), PC\({}_{8}\)), (QI\({}_{2}\), PC\({}_{7}\)), and (QI\({}_{3}\), PC\({}_{6}\)) across different intervals is manifest and even impressive. One can appreciate that the "slower" the PC, the more similar it is to its corresponding QI. The overlap between the modified QIs and the three last PCs means that the QIs of motion span the slowest 3-dimensional linear subspace of the action space. Therefore, to a linear approximation, they represent the three slowest d.o.f. of the \(\mathbb{H}_{4}^{\bullet}\) dynamics. The quasi-integral \(\mathcal{C}_{2}\) represents the slowest linear d.o.f.: it coincides with the last principal component PC\({}_{8}\), which has the smallest variance among all the linear combinations of the action variables. \(\mathcal{C}_{\rm inc}\) and \(\mathcal{E}_{2}\) represent the second and the third slowest linear d.o.f., respectively: the component of \(\mathcal{C}_{\rm inc}\) orthogonal to \(\mathcal{C}_{2}\), i.e., \(\mathcal{C}_{2}^{\perp}\), matches the second last principal component PC\({}_{7}\); the component of \(\mathcal{E}_{2}\) orthogonal to the subspace generated by (\(\mathcal{C}_{2}\), \(\mathcal{C}_{\rm inc}\)) matches the third
Figure 3: Vectors \(\boldsymbol{\alpha}_{1,2,3}\) representing the three modified QIs (QI\({}_{1,2,3}\), black circles) compared to the corresponding vectors \(\boldsymbol{a}_{8,7,6}\) of the three last PCs (PCs\({}_{8,7,6}\), red dots), for the intervals \([0,500]\) Myr (left-hand column), \([1000,2000]\) Myr (middle column) and \([0,5000]\) Myr (right-hand column) of sol. \(\#1\) and of the aggregated sample of 1080 solutions of model \(\mathbb{H}_{4}^{\bullet}\). Here, QI\({}_{1}\) is proportional to \(\mathcal{C}_{2}\) and QI\({}_{2}\) is proportional to \(\mathcal{C}_{2}^{\perp}\); see Eq. (20).
Figure 4: Vectors \(\boldsymbol{\alpha}_{1,2,3}\) representing the three modified QIs (QI\({}_{1,2,3}\), black circles) compared to the corresponding vectors \(\boldsymbol{a}_{8,7,6}\) of the three last PCs (PCs\({}_{8,7,6}\), red dots), for the intervals \([0,500]\) Myr (left-hand column), \([1000,2000]\) Myr (middle column) and \([0,5000]\) Myr (right-hand column) of sol. \(\#1\), sol. \(\#2\), and sol. \(\#3\) and of the aggregated sample of 1080 solutions of model \(\mathcal{H}_{4}\).
last principal component \(\mathrm{PC}_{6}\). The strong hierarchical structure of the slow variables for the simplified dynamics \(\mathbb{H}_{4}^{\bullet}\) is clearly confirmed by the almost frozen basis vectors of the PCs.
_Hamiltonian \(\mathcal{H}_{4}\)._ In this case, the QIs of motion \(\mathcal{C}_{\mathrm{inc}},\mathcal{C}_{2},\mathcal{E}_{2}\) do not show a clear hierarchical structure in terms of statistical variance. Therefore, we consider the whole subspace spanned by the three QIs with respect to that spanned by the three last PCs. Since it is not easy to visually compare two 3-dimensional subspaces of \(\mathbb{R}^{8}\), we compare their basis vectors instead. The basis \(\mathbf{\alpha}_{1,2,3}\) of modified quasi-integrals \(\mathrm{QI}_{1,2,3}\) is computed according to the algorithm presented in Sect. V.2.1.
Figure 4 presents the comparison between the modified QIs and the corresponding PCs across three different time intervals of three solutions of \(\mathcal{H}_{4}\) (see Fig. 5 for their time evolution). The first two, sols. #1 and #2, show thorough agreement between the pairs of QIs and PCs across all intervals, which indicates close proximity between the two subspaces \(V_{\mathrm{QIs}}=\mathrm{span}(S_{\mathrm{QIs}})\) and \(V_{\mathrm{PCs}}=\mathrm{span}(S_{\mathrm{PCs}})\). One can appreciate that the directions of the basis vectors are quite stable. The last component \(\mathrm{PC}_{8}\), in particular, remains close to \(\mathcal{C}_{\mathrm{inc}}\). The slowest linear d.o.f. of \(\mathcal{H}_{4}\) can thus be deduced to be close to \(\mathcal{C}_{\mathrm{inc}}\), in line with the discussion in Sect. IV.3. Such a result shows how interesting physical insight can be gained through the PCA. Some changes in the basis vectors can arise, however, as for the first time interval of sol. #2. This may be expected from a dynamical point of view. Differently from \(\mathbb{H}_{4}^{\bullet}\), there is no pronounced separation between the slowest d.o.f. at the bottom of the Lyapunov spectrum in Fig. 1a: the marginal distributions of consecutive exponents can indeed touch or overlap each other. Therefore, the hierarchy of slow variables is not as frozen as in \(\mathbb{H}_{4}^{\bullet}\) and it can change along a given orbital solution.
Solutions #1 and #2 represent typical orbital evolutions. If the same time intervals of all the 1080 solutions are stacked together to form an aggregated sample on which the PCA is applied, the features mentioned above persist: the agreement between QIs and PCs, the stability of the basis vectors, and the similarity between \(\mathrm{PC}_{8}\) and \(\mathcal{C}_{\mathrm{inc}}\) (see Fig. 4). Once again, the PCA confirms that the subspace spanned by the three QIs is overall close to the slowest 3-dimensional linear subspace of the action space. Therefore, to a linear approximation, they represent the three slowest d.o.f. of the \(\mathcal{H}_{4}\) dynamics. We remark that the slowness of the 3-dimensional subspace spanned by the QIs is a much stronger constraint than the observation that each QI is a slow variable. To give an example, let \(Q=\widehat{\mathbf{q}}\cdot\mathbf{I}\) be a slow variable with unit vector \(\widehat{\mathbf{q}}\). If \(\mathbf{\epsilon}\) is an arbitrary small vector, i.e. \(\|\mathbf{\epsilon}\|\ll 1\), then \(Q^{\prime}=(\widehat{\mathbf{q}}+\mathbf{\epsilon})\cdot\mathbf{I}\) can also be considered as a slow variable, whereas the normalized difference of two quantities, \(\widehat{\mathbf{\epsilon}}\cdot\mathbf{I}\), is generally not. Therefore, the linear subspace spanned by \(Q\) and \(Q^{\prime}\), that is, by \(\widehat{\mathbf{q}}\) and \(\widehat{\mathbf{\epsilon}}\), is not a slow 2-dimensional subspace.
Solution #3 in Fig. 4 represents an edge case (see Fig. 5 for its time evolution). Typically, the variances of the QIs are at least one order of magnitude smaller than those of the action variables, which allows a clear separation. Nevertheless, the distinction between the QIs and faster d.o.f. can be more difficult in two rare possibilities. Firstly, if the change in a QI accumulates continually in one direction, its variance can inflate over a long time interval. This is the case for the interval [0, 5] Gyr of sol. #3. Secondly, the variance of a variable that is typically fast can suddenly dwindle during a certain period of time, for example, \(\Psi_{3}\) over the interval [1, 2] Gyr of sol. #3. In both cases, the slow subspace defined by the three last PCs can move away from the QI subspace due to the contamination by d.o.f. that are typically faster.
This is reflected in the mismatch of \(\text{QI}_{3}\) and \(\text{PC}_{6}\) on the last two time intervals of sol. #3 in Fig. 4. We remark that \(\text{PC}_{8,7}\) are still relatively close to \(\text{QI}_{1,2}\), which indicates that the slowest 2-dimensional subspace spanned by \(\text{PC}_{8,7}\) still resides inside the QI subspace. It should be stressed that this disagreement between QIs and PCs does not mean that the QIs are not slow variables in this case. The mismatch has a clear dynamical origin instead. The resonance tables of this work have been retrieved from a single, very long orbital solution, with the idea that its time statistics is representative of the ensemble statistics over a set of initially very close solutions [19]. Therefore, the QIs derived from these tables characterize the dynamics in a global sense. The network of resonances can temporarily change in an appreciable way along a specific solution, or be very particular along rare orbital solutions. In these cases, a mismatch between the last PCs and the present QIs may naturally arise. Moreover, the contamination of the QIs by d.o.f. that are typically faster may also be expected from the previously mentioned lack of a strong hierarchical structure of the slow variables. The Lyapunov spectrum in Fig. 1a shows that the marginal distributions of the exponents \(\lambda_{5}\) and \(\lambda_{6}\), for example, are not separate but overlap each other.
#### V.2.3 Distance between the subspaces of PCs and QIs
The closeness of the two 3-dimensional linear subspaces \(V_{\text{PCs}},V_{\text{QIs}}\subset\mathbb{R}^{8}\) spanned by the sets of vectors \(S_{\text{PCs}}\) and \(S_{\text{QIs}}\), respectively, can be quantitatively measured in terms of a geometric distance. This can be formulated using the principal (canonical) angles [55, 56, 57].
Let \(A\) and \(B\) be two sets of \(m\leq n\) independent vectors in \(\mathbb{R}^{n}\). The principal vectors \((\mathbf{p}_{k},\mathbf{q}_{k})_{k=1}^{m}\) are defined recursively as solutions to the optimization problem:
\[\begin{array}{l}\text{maximize}\quad\mathbf{p}\cdot\mathbf{q}\\ \text{subject to}\quad\left|\begin{array}{l}\mathbf{p}\in\text{span}(A),\ \mathbf{q}\in\text{ span}(B),\\ \|\mathbf{p}\|=1,\ \|\mathbf{q}\|=1,\\ \mathbf{p}\cdot\mathbf{p}_{i}=0,\ \mathbf{q}\cdot\mathbf{q}_{i}=0,\ \ i=1,\ldots,k-1,\end{array}\right.\end{array} \tag{29}\]
for \(k=1,\ldots,m\). The principal angles \(0\leq\theta_{1}\leq\cdots\leq\theta_{m}\leq\pi/2\) between the two subspaces \(\text{span}(A)\) and \(\text{span}(B)\) are then defined by
\[\cos\theta_{k}=\mathbf{p}_{k}\cdot\mathbf{q}_{k},\quad k=1,\ldots,m. \tag{30}\]
The principal angle \(\theta_{1}\) is the smallest angle between all pairs of unit vectors in \(\text{span}(A)\) and \(\text{span}(B)\); the principal angle \(\theta_{2}\) is the smallest angle between all pairs of unit vectors that are orthogonal to the first pair; and so on. Given the matrices defining the two subspaces, the principal angles can be computed from the singular value decomposition of their correlation matrix. The result is the canonical correlation matrix \(\text{diag}(\cos\theta_{1},\ldots,\cos\theta_{m})\). This cosine-based method is often ill-conditioned for small angles. In such a case, a sine-based algorithm can be employed [58]. In this work, we use the combined technique detailed in Ref. [59].
Once the principal angles have been introduced, different metrics can be defined to measure the distance between two subspaces. In this work, we choose the normalized chordal distance [57]:
\[d(A,B)=\left(\frac{1}{m}\sum_{k=1}^{m}\sin^{2}\theta_{k}\right)^{1/2}. \tag{31}\]
The distance is null if \(A\) and \(B\) are the same subspace and equal to 1 when they are orthogonal. We use this metric to show that the subspace closeness suggested by Figs. 3 and 4 is indeed statistically significant. More precisely, we provide evidence against the null hypothesis that the distribution of distances between \(V_{\text{PCs}}\) and \(V_{\text{QIs}}\), arising from the \(\mathbb{H}_{4}^{\bullet}\) and \(\mathcal{H}_{4}\) dynamics, coincides with that of randomly drawn 3-dimensional subspaces of \(\mathbb{R}^{8}\). The PDF of the distance between two random 3-dimensional
Figure 6: PDF of the distance between two random 3-dimensional linear subspaces of \(\mathbb{R}^{8}\) (blue, \(10^{5}\) draws) compared with the PDF of the distance between the two subspaces \(V_{\text{PCs}}\) (\(\text{PC}_{8,7,6}\)) and \(V_{\text{QIs}}\) (\(\text{QI}_{1,2,3}\)) arising from the time interval [0, 5] Gyr of 1080 solutions of \(\mathbb{H}_{4}^{\bullet}\) (top) and 10 800 solutions of \(\mathcal{H}_{4}\) (bottom) (green). For each model, the subspace distance from the same time interval of representative solutions (vertical red lines) and of the aggregated sample of all the solutions (vertical dark green line) are indicated. The subspace distance is given by Eq. (31).
subspaces of \(\mathbb{R}^{8}\) is shown in Fig. 6 in blue (such random subspaces can be easily generated by sampling sets of 3 vectors uniformly on the unit 7-sphere [60]). While the range of possible distances is [0,1], the distribution concentrates on the right-hand side of the interval, with a probability of approximately 99.3% that the distance is larger than 0.6. In this regard, we remark that the notion of distance in high-dimensional spaces is very different from our intuition in a 3-dimensional world. If we draw randomly two vectors in a very high-dimensional space, it is extremely likely that they will be close to mutual orthogonality.
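As a concrete illustration of Eqs. (29)-(31) and of the random-subspace baseline, the following self-contained Python sketch computes the normalized chordal distance through the cosine-based SVD method and samples random 3-dimensional subspaces of \(\mathbb{R}^{8}\); the number of draws and the random seed are arbitrary choices, and a sine-based variant [58] would be preferable for very small angles, as noted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def chordal_distance(A, B):
    """Normalized chordal distance, Eq. (31), between span(A) and span(B).
    The principal angles are obtained from the SVD of the correlation matrix
    of orthonormal bases (cosine-based method of Eqs. 29-30)."""
    Qa = np.linalg.qr(A)[0]
    Qb = np.linalg.qr(B)[0]
    cos_theta = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.sqrt(np.mean(1.0 - cos_theta**2))

# Distance between two random 3-dimensional subspaces of R^8: drawing the basis
# vectors from an isotropic Gaussian is equivalent to sampling them uniformly
# on the unit 7-sphere.
n, m, n_draws = 8, 3, 20_000
dist = np.array([chordal_distance(rng.standard_normal((n, m)),
                                  rng.standard_normal((n, m)))
                 for _ in range(n_draws)])
print("P(distance > 0.6) =", np.mean(dist > 0.6))   # close to the ~99.3% quoted above
```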
The upper panel of Fig. 6 shows in green the PDF of the distance between \(V_{\text{PCs}}\) and \(V_{\text{QIs}}\) arising from the time interval [0, 5] Gyr of the 1080 orbital solutions of model \(\mathbb{H}_{4}^{\bullet}\). In the lower panel, we consider a larger ensemble of \(10\,800\) solutions of model \(\mathcal{H}_{4}\) spanning the same time interval [7], and plot the corresponding PDF of the distance between \(V_{\text{PCs}}\) and \(V_{\text{QIs}}\). In both cases, the distance stemming from the aggregated sample of all the solutions is indicated by a vertical dark green line. We also report the distances from the specific solutions considered in Figs. 3 and 4 as vertical red lines. As the PDFs of both models peak at small distances, there is strong evidence that the distribution of distances between the subspaces spanned by the PCs and the QIs is not that of random subspaces. In this sense, the closeness of the subspaces \(V_{\text{PCs}}\) and \(V_{\text{QIs}}\) is a statistically robust result. In the case of the simplified dynamics \(\mathbb{H}_{4}^{\bullet}\), the PDF peaks around a median of roughly 0.08 and has small variance. Switching to model \(\mathcal{H}_{4}\), the median increases to about 0.26 and the PDF is more spread out, with a long tail toward larger distances. The differences between the PDFs of the two models follow quite naturally from the discussion in Sect. V.2.2: a quasi-frozen hierarchy of the slowest variables for \(\mathbb{H}_{4}^{\bullet}\); a larger variance for \(\mathcal{H}_{4}\) related to contamination by d.o.f. that are typically faster and to variations of the resonant network with respect to the nominal solution of Gauss's dynamics, which is used to infer the QIs. Solution #3 in Fig. 4 represents in this sense an edge case of the distance distribution, while sol. #2 is a typical solution close to the PDF median.
## VI Implications on long-term stability
The existence of slow variables can have crucial implications for the stability of the ISS. The QIs of motion can effectively constrain in an adiabatic way the chaotic diffusion of the planet orbits over long timescales, forbidding in general a dynamical instability over a limited time span, e.g., several billion years. Here we give compelling arguments for such a mechanism.
Figure 7 shows the cumulative distribution function (CDF) of the first time that Mercury eccentricity reaches a value of 0.7, from the ensembles of 1080 orbital solutions of \(\mathbb{H}_{4}^{\bullet}\) and \(\mathbb{H}_{6}^{\bullet}\) introduced in Sect. IV.4. We recall that such a high eccentricity is a precursor of the dynamical instability (i.e., close encounters, collisions, or ejections of planets) of the ISS [6]. We also report the same CDF for the models \(\mathcal{H}_{4}\) and \(\mathcal{H}_{6}\), which we recently computed in Ref. [7]. One can appreciate that the time corresponding to a probability of instability of 1% is greater than 100 Gyr for the \(\mathbb{H}_{4}^{\bullet}\) model, while it is about 15 Gyr for \(\mathcal{H}_{4}\). At degree 6, this time still increases from 5 Gyr for \(\mathcal{H}_{6}\) to about 20 Gyr in \(\mathbb{H}_{6}^{\bullet}\). The dynamics arising from \(\mathbb{H}_{4}^{\bullet}\) and \(\mathbb{H}_{6}^{\bullet}\) can be considered as stable in an astronomical sense. Recalling that the main difference between \(\mathbb{H}_{2n}^{\bullet}\) and \(\mathbb{H}_{2n}\) relates to the smallest Lyapunov exponents (Fig. 1), and this is accompanied by a much slower diffusion of the QIs for \(\mathbb{H}_{2n}^{\bullet}\) (Figs. 2, 10, and 11), Fig. 7 indicates that the dynamical half-life of the ISS is linked to the speed of diffusion of these slow quantities in a critical way. We stress that the slower diffusion toward the dynamical instability in the \(\mathbb{H}_{2n}^{\bullet}\) model derives from neglecting the external forcing mainly exerted by Saturn, Uranus, and Neptune.
We also observe that, to a linear approximation, the knowledge of \(\mathcal{C}_{\text{inc}}\) and \(\mathcal{E}_{2}\) allows us to bound the variations of the action variables \(\mathfrak{X},\boldsymbol{\Psi}\). Recalling that the actions are positive quantities, from Eq. (19) one sees that fixing a value of \(\mathcal{C}_{\text{inc}}\) puts an upper bound on the variations of the inclination actions \(\boldsymbol{\Psi}\). As a consequence, at degree 2 in eccentricities and inclinations, fixing a value of the QI
\[\mathcal{E}_{2}=\boldsymbol{\gamma}_{3}\cdot\boldsymbol{I}=\boldsymbol{\gamma }_{3}^{\text{ecc}}\cdot\mathfrak{X}+\boldsymbol{\gamma}_{3}^{\text{inc}}\cdot \boldsymbol{\Psi}, \tag{32}\]
with \(\boldsymbol{\gamma}_{3}=(\boldsymbol{\gamma}_{3}^{\text{ecc}},\boldsymbol{\gamma}_{3}^{\text{inc}})\), also bounds the variations of the eccentricity actions \(\mathfrak{X}\) from above, since the components of \(\boldsymbol{\gamma}_{3}^{\text{ecc}}\) all have the same sign, as do those of \(\boldsymbol{\gamma}_{3}^{\text{inc}}\) (see Appendix B). This is an important point, as we state in Sect. I that the lack of any bound on the chaotic variations of the planet orbits is one of the reasons that complicate the understanding of their long-term stability. We
Figure 7: Cumulative distribution function of the first time that Mercury eccentricity reaches a value of 0.7, from 1080 orbital solutions of different models over 100 Gyr. The shaded regions represent the 90% piecewise confidence intervals from bootstrap.
remark that the secular planetary phase space can be bounded by fixing the value of the total AMD, that is, \(\mathcal{C}_{\rm ecc}+\mathcal{C}_{\rm inc}\) [47]. A statistical study of the density of states that are _a priori_ accessible can then be realized [61]. It is not, however, fully satisfying to consider a fixed value of the total AMD of the ISS, as we show that \(\mathcal{C}_{\rm ecc}\) is changed by some of the leading resonances of the Hamiltonian, as a result of the eccentricity forcing mainly exerted by Jupiter through the mode \(g_{5}\). Moreover, the destabilization of the ISS indeed consists of a large transfer of eccentricity AMD, \(\mathcal{C}_{\rm ecc}\), from the outer system to the inner planets through the resonance \(g_{1}-g_{5}\) [62, 6, 5, 36]. It should be noted that \(\mathcal{C}_{\rm ecc}\) can still be considered a slow quantity with respect to an arbitrary function of the action variables, as it is only changed by the subset of the leading resonances involving the external mode \(g_{5}\). This slowness has indeed been observed in stable orbital solutions of the Solar System [47] and supports the statistical hypothesis in Ref. [61] that allows one to obtain a very reasonable first guess of the long-term PDFs of the eccentricities and inclinations of the inner planets.
The emerging picture explains the statistical stability of the ISS over billions of years in a physically intuitive way. The chaotic behavior of the planet orbits arises from the interaction of a number of leading resonant harmonics of the Hamiltonian, which determine the Lyapunov time. The strongest resonances are characterized by some exact symmetries, which are only broken by weak resonant interactions. These quasi-symmetries naturally give birth to QIs of motion, quantities that diffuse much more slowly than the LL action variables, constraining the variations of the orbits. The long dynamical half-life of the ISS is connected to the speed of this diffusion, which eventually drives the system to the instability. It should be stressed that, besides the speed of diffusion, the lifetime of the inner orbits also depends on the initial distance of the system from the instability boundary defined by the resonance \(g_{1}-g_{5}\). This geometric aspect includes the stabilizing role of general relativity [6, 5], which moves the system away from the instability boundary by \(0.43^{\prime\prime}\,{\rm yr}^{-1}\), and the destabilizing effect of terms of degree 6 in eccentricities and inclinations of the planets [7].
## VII Discussion
This work introduces a framework that naturally justifies the statistical stability shown by the ISS over a timescale comparable to its age. Considering a forced secular model of the inner planet orbits, the computation of the Lyapunov spectrum indicates the existence of very different dynamical timescales. Using the computer algebra system TRIP, we systematically analyze the Fourier harmonics of the Hamiltonian that become resonant along a numerically integrated orbital solution spanning 5 Gyr. We uncover three symmetries that characterize the strongest resonances and that are broken by weak resonant interactions. These quasi-symmetries generate three QIs of motion that represent slow variables of the secular dynamics. The size of the leading symmetry-breaking resonances suggests that the QIs are related to the smallest Lyapunov exponents. The claim that the QIs are among the slowest d.o.f. of the dynamics constitutes the central point of this work. On the one hand, it is supported by the analysis of the underlying Hamiltonian \(\mathbb{H}_{2n}^{\bullet}\), in which one neglects the forcing mainly exerted by Saturn, Uranus, and Neptune and, as a consequence, the diffusion of the QIs is greatly reduced. On the other hand, the geometric framework established by the PCA of the orbital solutions independently confirms that the QIs are statistically the slowest linear variables of the dynamics. We give strong evidence that the QIs of motion play a critical role in the statistical stability of the ISS over the Solar System lifetime, by adiabatically constraining the long-term chaotic diffusion of the orbits.
### Inner Solar System among classical quasi-integrable systems
It is valuable to contextualize the dynamics of the ISS in the class of classical quasi-integrable systems. A comparison with the Fermi-Pasta-Ulam-Tsingou (FPUT) problem, in particular, deserves to be made. This concerns the dynamics of a one-dimensional chain of identical masses coupled by nonlinear springs. For weak nonlinearity, the normal modes of oscillation remain far from the energy equipartition expected from statistical mechanics for a very long time [13]. One way to explain the lack of energy equipartition reported by Fermi and collaborators is through the closeness of the FPUT problem to the integrable Toda dynamics [63, 64, 65]. This translates into a very slow thermalization of the action variables of the Toda problem and of the corresponding integrals of motion along the FPUT flow [66, 67, 68, 69, 15]. In the framework of the present study, the very long dynamical half-life of the ISS is also likely to be the result of the slow diffusion of some dynamical quantities, the QIs of motion. We find, in particular, an underlying Hamiltonian \(\mathbb{H}_{2n}^{\bullet}\) for which this diffusion is greatly reduced, as a consequence of neglecting the forcing mainly exerted by Saturn, Uranus, and Neptune. This results in a dynamics that can be considered as stable in an astronomical sense. We stress that, differently from the FPUT problem, \(\mathbb{H}_{2n}^{\bullet}\) is not integrable, unlike the Toda Hamiltonian. It is indeed chaotic and shares with the original Hamiltonian \(\mathbb{H}_{2n}\) the leading Lyapunov exponents. The QIs that we find in this work are only a small number of functions of the action-angle variables of the integrable LL dynamics, and are related to the smallest Lyapunov exponents of the dynamics. Our study suggests that in the FPUT problem the very slow thermalization occurring beyond the Lyapunov time might be understood in terms of combinations of the Toda integrals of motion diffusing over very different timescales.
The long-term diffusion in chaotic quasi-integrable systems should be generally characterized by a broad range of timescales that results from the progressive, hierarchical breaking of the symmetries of the underlying integrable problem by resonant interactions [70; 71; 72]. A hierarchy of Lyapunov exponents spanning several orders of magnitude, in particular, should be common among this class of systems [e.g., 73].
### Methods
The long-term dynamics of the ISS is described by a moderate but not small number of d.o.f., which places it far from the typical application fields of celestial mechanics and statistical physics. The first discipline often studies dynamical models with very few degrees of freedom, while the second one deals with the limit of a very large number of bodies. Chaos also requires a statistical description of the inner planet orbits. But the lack of a statistical equilibrium, resulting from a slow but ceaseless diffusion of the system, places the ISS outside the standard framework of ergodic theory. The kind of approach we develop in this work is heavily based on computer algebra, in terms of systematic series expansion of the Hamiltonian, manipulation of the truncated equations of motion, extraction of given Fourier harmonics, retrieval of polynomial roots, etc. [4; 19]. This allows us to introduce QIs of motion in a 16-dimensional dynamics by analyzing how action-space symmetries are progressively broken by resonant interactions. Our effective method, based on the time statistics of resonances arising along a single, very long numerical integration, is an alternative to formal approaches that define QIs via series expansions [e.g. 74; 75]. The practical usefulness of these formal expansions for a dynamics that covers an intricate, high-dimensional network of resonances seems indeed doubtful. Through the retrieval of the half-widths of the symmetry-breaking resonances, computer algebra also permits us to extend the correspondence between the Lyapunov spectrum and the spectrum of resonances well beyond the standard relation linking the Lyapunov time to the strongest resonances [76].
In the context of dynamical systems with a number of d.o.f. that is not small, this work also considers an approach based on PCA. The role of this statistical technique can be twofold. We use PCA as an independent test to systematically validate the slowness of the QIs. While being introduced semi-analytically as dynamical quantities that are not affected by the leading resonances, they can indeed be related to the last PCs. By extension, the first PCs should probe the directions of the main resonances. This leads to a second potential application of the PCA, which should offer a way to retrieve the principal resonant structure of a dynamical system. In this sense, PCA represents a tool to systematically probe numerical integrations of a complex dynamics and distill important hidden insights. We emphasize that PCA is the most basic linear technique of dimensionality reduction and belongs to the more general class of the unsupervised learning algorithms. There are more sophisticated methods of feature extraction that can be more robust [e.g. 77; 78] and can incorporate nonlinearity [79]. These methods are often less intuitive to understand, less straightforward to apply, and harder to interpret than PCA. Yet, they might be more effective and worth pursuing for future works.
With long-term numerical integration and a computer algebra system at one's disposal, the entire strategy we develop in this work can in principle be applied to other planetary systems and quasi-integrable Hamiltonian dynamics with a moderate number of d.o.f.
###### Acknowledgements.
The authors thank M. Gastineau for his assistance with TRIP. F.M. is supported by a grant of the French Agence Nationale de la Recherche (AstroMeso ANR-19-CE31-0002-01). N.H.H. is supported by a Ph.D. scholarship of the Capital Fund Management (CFM) Foundation for Research. This project has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Advanced Grant AstroGeo-885250). This work was granted access to the HPC resources of MesoPSL financed by the Region Ile-de-France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the program Investissements d'Avenir supervised by the Agence Nationale pour la Recherche.
## Appendix A Lyapunov spectrum
_Convergence._ We perform two tests to address the convergence of our implementation of the Benettin _et al._[43] method. We first compute the FT-LCEs for a single initial condition of \(\mathcal{H}_{4}\) and an ensemble of 150 different random sets of initial tangent vectors. Figure 8a shows the [5th, 95th] percentile range of the resulting marginal distributions of the positive FT-LCEs over a time span of 10 Gyr. The distributions shrink with increasing time, eventually collapsing onto single time-dependent values. In this asymptotic regime, the Benettin _et al._[43] algorithm loses memory of the initial tangent vectors and purely retrieves the FT-LCEs as defined in Eq. (10). Therefore, Fig. 1a shows asymptotically the dependence of the FT-LCEs on the initial condition \(\mathbf{z}_{0}\) and represents their statistical distribution over the phase-space domain explored by the dynamics in a non-ergodic way. The convergence of the computation is clearly slower for smaller exponents, but a comparison with Fig. 1a indicates that, even in the case of \(\lambda_{8}\), the numerical uncertainty on the FT-LCEs of each orbital solution at 10 Gyr is negligible with respect to the width of their ensemble distributions.
To quantitatively estimate the numerical precision on the computed FT-LCEs, we exploit the symmetry of the spectrum stated in Eq. (9). For a single orbital solution, the relative numerical error on each exponent \(\lambda_{i}\) can be estimated as
\[\epsilon_{i}=\left|\frac{\Delta\lambda_{i}}{\lambda_{i}}\right|. \tag{10}\]
We plot in Fig. 8b the medians of \(\epsilon_{i}\) for the ensemble of 150 orbital solutions of Fig. 1a. The relative errors decrease asymptotically with time, as expected. Even in the case of the smallest exponent, \(\lambda_{8}\), the median error is less than 10% at 10 Gyr.
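For reference, the scheme of Benettin _et al._ can be summarized by the following self-contained Python sketch, which propagates a set of tangent vectors with the tangent map and re-orthonormalizes them by QR factorization at each step; it is written for a simple two-dimensional toy map (the Hénon map) purely for illustration, and is not the integrator used for the secular equations.

```python
import numpy as np

def henon(z, a=1.4, b=0.3):
    x, y = z
    return np.array([1.0 - a * x**2 + y, b * x])

def henon_jacobian(z, a=1.4, b=0.3):
    x, _ = z
    return np.array([[-2.0 * a * x, 1.0],
                     [b, 0.0]])

def ft_lces(z0, tangent0, n_steps):
    """Finite-time Lyapunov characteristic exponents (per map iteration):
    propagate the tangent basis, re-orthonormalize it by QR at every step,
    and accumulate the logarithms of the diagonal of R."""
    z = np.array(z0, dtype=float)
    Q = np.linalg.qr(np.asarray(tangent0, dtype=float))[0]
    log_r = np.zeros(Q.shape[1])
    for _ in range(n_steps):
        Q = henon_jacobian(z) @ Q        # tangent map at the current point
        z = henon(z)                      # advance the trajectory
        Q, R = np.linalg.qr(Q)
        log_r += np.log(np.abs(np.diag(R)))
    return log_r / n_steps

# Spread of the FT-LCEs over an ensemble of random initial tangent bases for a
# single initial condition, mimicking the first convergence test described above.
rng = np.random.default_rng(2)
samples = np.array([ft_lces([0.1, 0.1], rng.standard_normal((2, 2)), 10_000)
                    for _ in range(50)])
print("median FT-LCEs         :", np.median(samples, axis=0))
print("[5th, 95th] percentiles:", np.percentile(samples, [5, 95], axis=0))
```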
_Hamiltonian \(\mathcal{H}_{6}\)._ We compute for comparison the FT-LCEs of the forced ISS truncated at degree 6 in eccentricities and inclinations, that is, \(\mathcal{H}_{6}\). We consider 150 stable orbital solutions with initial conditions very close to the nominal values of Gauss's dynamics and random sets of initial tangent vectors, as we do for the truncation at degree 4. Figure 9 shows the [5th, 95th] percentile range of the marginal PDF of each FT-LCE estimated from the ensemble of solutions. Apart from being somewhat larger, the asymptotic distributions of the exponents are very similar to those of \(\mathcal{H}_{4}\) shown in Fig. 1a.
## Appendix B Vectors \(\boldsymbol{\gamma_{1}}\), \(\boldsymbol{\gamma_{2}}\), \(\boldsymbol{\gamma_{3}}\)
We report here the explicit expressions of the vectors \((\boldsymbol{\gamma}_{i})_{i=1}^{3}\). We first give the components of the vector \(\boldsymbol{\omega}_{\text{LL}}\) of the fundamental precession frequencies of the inner orbits in the forced Laplace-Lagrange dynamics (including the leading correction of general relativity) [4]:
\[\boldsymbol{\omega}_{\text{LL}}=(\boldsymbol{g}_{\text{LL}},\boldsymbol{s}_{\text{LL}})\approx(5.87,7.46,17.4,18.1,-5.21,-6.59,-18.8,-17.7), \tag{11}\]
in units of arcsec yr\({}^{-1}\) (see [80, 81, 82] for comparison with the frequencies of the Laplace-Lagrange dynamics of the entire Solar System). One then has
\[\boldsymbol{\gamma}_{1}=(\boldsymbol{0}_{4},\boldsymbol{1}_{4})=(0,0,0,0,1,1,1,1), \tag{12}\] \[\boldsymbol{\gamma}_{2}=(0,0,-1,-1,1,1,2,2),\] \[\boldsymbol{\gamma}_{3}=-\boldsymbol{\omega}_{\text{LL}}+g_{5} \boldsymbol{1}_{8}\approx\] \[(-1.61,-3.20,-13.2,-13.9,9.47,10.8,23.0,22.0),\]
with the components of \(\boldsymbol{\gamma}_{3}\) in units of arcsec yr\({}^{-1}\). We recall that \(g_{5}\approx 4.257^{\prime\prime}\,\text{yr}^{-1}\) is a constant in the forced model of the ISS. The corresponding unit vectors \((\widehat{\boldsymbol{\gamma}}_{i})_{i=1}^{3}\) are given by
\[\widehat{\boldsymbol{\gamma}}_{1}=(0,0,0,0,1,1,1,1)/2, \tag{13}\] \[\widehat{\boldsymbol{\gamma}}_{2}=(0,0,-1,-1,1,1,2,2)/2\sqrt{3},\] \[\widehat{\boldsymbol{\gamma}}_{3}\approx(-0.04,-0.08,-0.33,-0.35,0.24,0.27,0.58,0.55).\]
Since \(1/2\sqrt{3}\approx 0.289\), the components of \(\widehat{\boldsymbol{\gamma}}_{3}\) are only a few percent away from those of \(\widehat{\boldsymbol{\gamma}}_{2}\). Therefore, along stable orbital solutions with typical bounded variations of the Mercury-dominated action variable \(\mathfrak{X}_{1}\), the two quantities \(\mathcal{C}_{2}\) and \(\mathcal{E}_{2n}\) exhibit very similar time evolutions. This is not the case anymore when Mercury orbit reaches high eccentricities.
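The closeness of \(\widehat{\boldsymbol{\gamma}}_{2}\) and \(\widehat{\boldsymbol{\gamma}}_{3}\) can be checked directly from the components listed above, for instance with the following short Python snippet, which is purely a numerical verification of Eqs. (12)-(13).

```python
import numpy as np

gamma2_hat = np.array([0, 0, -1, -1, 1, 1, 2, 2]) / (2 * np.sqrt(3))
gamma3 = np.array([-1.61, -3.20, -13.2, -13.9, 9.47, 10.8, 23.0, 22.0])
gamma3_hat = gamma3 / np.linalg.norm(gamma3)

print("gamma3_hat           :", np.round(gamma3_hat, 2))
print("max |difference|     :", np.max(np.abs(gamma3_hat - gamma2_hat)))
print("alignment cos(angle) :", gamma2_hat @ gamma3_hat)
```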
Figure 8: (a) Positive FT-LCEs of Hamiltonian \(\mathcal{H}_{4}\) and corresponding characteristic timescales for a single initial condition and an ensemble of 150 random sets of initial tangent vectors. The bands represent the [5th, 95th] percentile range of the marginal PDFs. The lines denote the distribution medians. (b) Medians of the relative numerical errors \(\epsilon_{i}\) on the FT-LCEs \(\lambda_{i}\), as defined in Eq. (10), for the ensemble of 150 orbital solutions of Fig. 1a.
## Appendix C Ensemble distributions of the quasi-integrals over time
To retrieve the long-term statistical behavior of the QIs, we consider the ensembles of 1080 numerical integrations of the dynamical models \(\mathcal{H}_{4}\) and \(\mathcal{H}_{6}\), with very close initial conditions and spanning 100 Gyr in the future, that have been presented in Ref. [7]. We also consider the similar ensembles of solutions for the simplified Hamiltonians \(\mathbb{H}_{4}^{\bullet}\) and \(\mathbb{H}_{6}^{\bullet}\) that we introduce in Sect. IV.4. We report in Fig. 10 the time evolution of the ensemble PDFs of the low-pass filtered dimensionless QIs and dimensionless actions \(\mathfrak{X}_{1},\Psi_{3}\) for the different models (the cutoff frequency of the time filter is set to 1 Myr\({}^{-1}\), as in Sec. IV.2). More precisely, to highlight the growth of the statistical dispersion, we consider at each time the PDF of the signed deviation from the ensemble mean, so that all the plotted distributions have a null mean. At each time, the PDF estimation takes into account only the stable orbital solutions, that is, those solutions whose running maximum of Mercury eccentricity is smaller than 0.7 [7]. Figure 10 shows that the QIs are indeed slow quantities when compared to the LL action variables. The growth of the QI dispersion is detailed in Fig. 11, where we report the time evolution of the interquartile range (IQR) of their distributions. After a transient phase lasting about 100 Myr and characterized by the exponential separation of close trajectories, the time growth of the IQR follows a power law typical of diffusion processes. Figures 10 and 11 clearly show the slower diffusion of \(\mathcal{C}_{\rm inc}\) and \(\mathcal{C}_{2}\) in the model \(\mathbb{H}_{2n}^{\bullet}\) when compared to \(\mathcal{H}_{2n}\). We recall that \(\mathcal{E}_{2n}^{\bullet}\) is an exact integral of motion for the model \(\mathbb{H}_{2n}^{\bullet}\) (see Sect. IV.4) and its PDF has null dispersion.
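As an illustration of the diffusion diagnostics used in this appendix, the short Python sketch below tracks the IQR of the deviation from the ensemble mean for a synthetic, randomly diffusing quantity standing in for a filtered QI, and estimates the power-law exponent of its growth; the ensemble size, time grid, and step amplitude are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ensemble: 1080 realizations of a slowly diffusing quantity,
# used here as a stand-in for a low-pass filtered QI along orbital solutions.
n_sol, n_times = 1080, 2000
times = np.linspace(0.05, 100.0, n_times)                       # Gyr
qi = np.cumsum(1e-3 * rng.standard_normal((n_sol, n_times)), axis=1)

# Signed deviation from the ensemble mean (null-mean PDFs at each time)
# and its interquartile range as a function of time.
dev = qi - qi.mean(axis=0)
iqr = np.percentile(dev, 75, axis=0) - np.percentile(dev, 25, axis=0)

# Power-law exponent of the IQR growth after the transient, from a log-log fit
# (about 0.5 for a normal diffusion process such as this random walk).
mask = times > 1.0
slope = np.polyfit(np.log(times[mask]), np.log(iqr[mask]), 1)[0]
print("IQR growth exponent :", round(slope, 2))
```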
Figure 9: Positive FT-LCEs \(\lambda_{i}\) of Hamiltonian \(\mathcal{H}_{6}\) and corresponding characteristic timescales \(\lambda_{i}^{-1}\). The bands represent the [5th, 95th] percentile range of the marginal PDFs estimated from an ensemble of 150 stable orbital solutions with very close initial conditions. The lines denote the distribution medians.
Figure 10: Time evolution over 100 Gyr of the PDF of the signed deviation from the mean of the low-pass filtered dimensionless QIs and dimensionless actions \(\mathfrak{X}_{1},\Psi_{3}\). Estimation from an ensemble of 1080 numerical orbital solutions for different models (\(\mathcal{H}_{4}\), \(\mathcal{H}_{6}\), \(\mathbb{H}_{4}^{\bullet}\), and \(\mathbb{H}_{6}^{\bullet}\)). _First row_: \(\widetilde{\mathcal{C}}_{\text{inc}}\). _Second row_: \(\widetilde{\mathcal{C}}_{2}\). _Third row_: \(\widetilde{\mathcal{E}}_{4}\) (\(\mathcal{H}_{4}\)) and \(\widetilde{\mathcal{E}}_{6}\) (\(\mathcal{H}_{6}\)). _Fourth row_: \(\widetilde{\Psi}_{3}\). The time of each curve is color coded. At each time, the estimation only takes into account stable solutions, that is, those with a running maximum of Mercury eccentricity smaller than 0.7. The quantity \(\widetilde{\mathcal{E}}_{2n}^{\bullet}\) is an exact integral of motion for the model \(\mathbb{H}_{2n}^{\bullet}\) and its PDF has null dispersion.
2306.09957 | Stable nodal projection method on octree grids | We propose a novel collocated projection method for solving the
incompressible Navier-Stokes equations with arbitrary boundaries. Our approach
employs non-graded octree grids, where all variables are stored at the nodes.
To discretize the viscosity and projection steps, we utilize supra-convergent
finite difference approximations with sharp boundary treatments. We demonstrate
the stability of our projection on uniform grids, identify a sufficient
stability condition on adaptive grids, and validate these findings numerically.
We further demonstrate the accuracy and capabilities of our solver with several
canonical two- and three-dimensional simulations of incompressible fluid flows.
Overall, our method is second-order accurate, allows for dynamic grid
adaptivity with arbitrary geometries, and reduces the overhead in code
development through data collocation. | Matthew Blomquist, Scott R. West, Adam L. Binswanger, Maxime Theillard | 2023-06-16T16:42:31Z | http://arxiv.org/abs/2306.09957v1 | # Stable nodal projection method on octree grids
###### Abstract
We propose a novel collocated projection method for solving the incompressible Navier-Stokes equations with arbitrary boundaries. Our approach employs non-graded octree grids, where all variables are stored at the nodes. To discretize the viscosity and projection steps, we utilize supra-convergent finite difference approximations with sharp boundary treatments. We demonstrate the stability of our projection on uniform grids, identify a sufficient stability condition on adaptive grids, and validate these findings numerically. We further demonstrate the accuracy and capabilities of our solver with several canonical two- and three-dimensional simulations of incompressible fluid flows. Overall, our method is second-order accurate, allows for dynamic grid adaptivity with arbitrary geometries, and reduces the overhead in code development through data collocation.
keywords: incompressible Navier-Stokes, collocated, node-based, Projection, Stability, Octree grids, sharp interface +
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
Incompressible fluid flows are ubiquitous in science and engineering applications and lie at the heart of numerous research questions. Developing control strategies to minimize drag [24], understanding arterial wall deformation in the human heart [31], and even optimizing the energy consumption of automotive spray painting operations [58] all require a detailed understanding of incompressible flows. For the majority of these phenomena, analytical solutions do not exist, and experimental approaches can be difficult and costly to create. Numerical simulations are the natural choice for studying these problems. Still, despite decades of computational advancement, their development remains challenging, especially when irregular geometries, adaptive grids, or complex boundary conditions are involved. For this reason, it is essential to develop computational fluid dynamics tools that are accessible and straightforward to implement, which can be achieved, for example, by minimizing the number of unique data structures and simplifying data access patterns.
The first step in developing a numerical simulation for incompressible flows is typically to discretize the fluid domain using a numerical mesh. This mesh can be represented as a structured set of elements or nodes, as in the Cartesian [30] and curvilinear [38] styles, or by an unstructured mesh [54]. Structured meshes are usually easy to generate and can sometimes yield additional accuracy (_e.g._ supra-convergence for nodal finite differences), but special care is needed to handle irregular geometries. Unstructured meshes, conversely, can tessellate irregular geometries, but their generation can be computationally expensive [50]. With problems that involve moving geometries or in the context of adaptive grids, unstructured grids often require that computational costs be paid at each time step. For that reason, it is typically preferable to use structured meshes, such as non-graded quad/octrees, and develop the tools to handle irregular geometries. This is the approach we take herein.
The primary challenge with numerically solving the incompressible Navier-Stokes equations is related to the coupling between the mass and momentum equations, which manifests as an incompressibility constraint on the velocity field that must be satisfied at each time step. One method for enforcing this constraint is to
solve a monolithic system of coupled equations for the velocity and pressure unknowns [56]. This approach can often be beneficial when additional physical constraints are coupled with an incompressible flow solver, such as in two-phase flows [1; 13] and fluid-structure interaction problems [26; 32; 14; 61]. However, the monolithic systems may not be diagonally dominant and thus require the use of expensive solvers (_e.g._ GMRES). For this reason, most computational frameworks for solving the incompressible Navier-Stokes equations use splitting methods.
The projection method, pioneered by Chorin [15] and later proposed as a fractional step method by Temam [62; 63], decouples the momentum equation and the incompressibility condition by leveraging the Helmholtz-Hodge decomposition. This creates a two-step time-stepping procedure for solving the Navier-Stokes equations, where an intermediate velocity field is first computed with the pressure omitted. Then, the intermediate velocity field is projected onto the divergence-free subspace to recover the incompressible velocity at the new time step. In this procedure, the pressure is never directly computed but can be reconstructed from the curl-free component of the intermediate velocity field (see _e.g._ Section 2.5). The decoupling of the momentum equation and the incompressibility condition via this projection method introduces temporal splitting errors that manifest in the boundary conditions and can limit the order of accuracy of numerical methods in both space and time. However, the development of high-order projection methods has been an active area of research for the last three decades, and a number of standard approaches exist (see [53; 35; 11; 28; 67; 43]).
The overall stability of the projection method relies on the stability of each step. In the first step, the computation of the intermediate velocity field requires the solution of an advection-diffusion problem. The stability of this step is dictated by the properties of the temporal integration scheme and is thus easily assessed. The stability of the projection step, however, is linked to the spatial discretization of the gradient, divergence, and Laplacian operators. The natural way to ensure that the projection step is stable is to preserve the analytical properties of these operators and impose the divergence-free constraint at the discrete level. Unfortunately, this creates complications when using a collocated arrangement. For example, the discrete Laplacian is no longer a composition of the discrete gradient and divergence when using collocated variables with standard central difference formulas on Cartesian grids. These complications can be overcome by using interpolation procedures (_e.g._ see [48]), but these procedures can themselves become challenging in the presence of irregular boundaries or adaptive grids.
An alternate approach is to use a staggered layout, such as the Marker and Cell (MAC) representation introduced by Harlow and Welch [30], and illustrated in Figure 1. In this staggered arrangement, pressure data is located at the cell centers, and the velocities are located at the cell faces. In this context, the analytical properties of the operators are preserved, and the projection step enforces the divergence-free constraint with an orthogonal projection of the intermediate velocity field onto the divergence-free space. For this reason, the MAC layout has long been recognized as an ideal choice for solving the incompressible
Figure 1: Data layouts - (a) Within the Marker and Cell representation the pressure components (\(\blacksquare\)) are stored at the cell centers, the x-velocity components (\(\blacktriangleright\)) are stored at the vertical faces, and the y-velocity components (\(\blacktriangle\)) are stored at the horizontal faces. (b) In [29], Gomez _et al._ proposed a different staggered storage, where the pressure components are now stored at the nodes, and the velocity components are stored along the edges. (c) In our approach, all quantities (\(\blacktriangleright\)) are collocated at the nodes.
Navier-Stokes equations [27; 36; 70].
A different approach is to relax the requirement that the operators are preserved exactly at the discrete level and approximate the projection step. This notion of using an approximate projection was first introduced by Almgren _et al._[5] as a means of circumventing the numerical difficulties associated with exact discrete projections. In their approach, the discrete Laplacian operator is consistent, but not exactly the composition of the divergence and gradient, and the divergence of the resulting velocity field is only approximately zero up to the second order in the mesh spacing. This approximate projection uses velocities collocated at the cell centers with pressure data stored at the nodes. This design choice provides a symmetric discretization and generates a well-behaved linear system suitable for a multigrid solver. Additionally, the collocation of the velocities greatly simplifies the application of high-order upwind techniques and enables the overall time-stepping algorithm to achieve second-order convergence in both space and time. For a detailed analysis of approximate projection methods, we refer the reader to [4] and the references therein, which cover the use of approximate projections for a variety of incompressible flow problems and the combination of approximate projection methods with adaptive spatial and temporal meshes.
The inherent multiscale nature of the incompressible Navier-Stokes equations calls for adaptive mesh refinement (AMR) to optimize computation and memory overhead. Typically, only small localized regions of the domain need high grid resolution, for example, where high vorticity is present, such as in boundary layers or vortex cores. One of the earliest examples of using AMR is from Berger and Oliger [9], where finer grids were adaptively placed over a coarse grid covering the domain. In [3], Almgren _et al._ combined an approximate projection method with an adaptive refinement strategy to solve the three-dimensional variable density incompressible Navier-Stokes equations. This block-structured approach consisted of a nested hierarchy of logically-rectangular grids with refinement in both space and time. This AMR approach has been extended to numerous applications, and we refer the interested reader to the survey of [18] and to [59] for more details. For a modern block-structured AMR framework, we refer the reader to [72].
Tree-based approaches to AMR [57] are an alternative to block-structured AMR and the strategy used herein. They combine efficiency with simplicity by using recursive splitting schemes with non-overlapping regions. One of the earliest uses of graded octrees (a tree in which the size ratio between adjacent cells is at most two) for solving the incompressible Euler equations can be seen from Popinet in [55], where finite volume discretizations are used on collocated cell-based quantities. This work uses the same approach as Almgren _et al._ in [5] to approximate the Laplacian but also requires an approximation for the gradient operator. This leads to a nonuniform stencil and a non-symmetric system of linear equations for the pressure. In [42], Losasso _et al._ proposed a symmetric discretization of the Poisson equation on octrees for free surface flows. This discretization resulted in a symmetric linear system, solved using a standard preconditioned conjugate gradient method. Their approach was only first-order accurate in the case of a non-graded mesh but was later extended to second-order accuracy [41], and later employed in the context of single phase [8; 28; 20], multiphase [67], and free surface [22; 6] applications.
In [45], Min _et al._ developed a collocated projection method for the incompressible Navier-Stokes equations on non-graded adaptive grids. Their method utilized the Poisson solver of [47], which produced a non-symmetric but diagonally dominant linear system where the gradients of the solution were also second-order accurate. To ensure stability, instead of using an approximate projection, the authors manually enforced the orthogonality property between the divergence-free velocity field and the gradient of the Hodge variable when computing the updated velocity field. While this orthogonalization procedure results in an exact projection, this method is only stable if the normal velocity is null along the boundary.
For a more versatile approach, Guittet _et al._[28] developed a stable projection method for the incompressible Navier-Stokes equations on non-graded octree grids using a MAC layout. While this layout is a natural framework for a stable projection, the poor alignment between the pressure and velocity variables necessitated the use of complicated discretizations for the momentum equation and the projection step. The authors had to develop a Voronoi-based finite volume approach to treat the viscous terms implicitly and an expensive least squares interpolation was required to implement the semi-Lagrangian scheme for the advective terms. Though complicated discretizations were required, this framework was later extended to simulate active [65; 69] and interfacial [16; 67] flows.
An alternative staggered layout was proposed by Gomez _et al._[29], in which the pressure was stored at the nodes and velocity stored along the edges (see Figure 1). This configuration of the staggered layout
offers several advantages over the traditional MAC layout. The pressure gradient and velocity are naturally aligned, simplifying the construction of finite difference operators and granting higher accuracy. Unfortunately, this staggered layout, like the MAC configuration, requires multiple data structures and solvers.
The method we present here is entirely collocated at the nodes of an arbitrary octree. It is carefully designed to achieve stability while reproducing solid interfaces and accompanying boundary layers with high fidelity. The overall stability of our method relies on the properties of our projection operator, which we analyze by relating the staggered and collocated approaches. We use a second-order semi-Lagrangian Backward Difference Formula scheme to update the momentum equation, treating the advective term explicitly and the viscous one implicitly. Arbitrary interface and boundary conditions are treated in a sharp manner using a level-set representation and the hybrid Finite-Volume/Finite-Difference discretizations from [69].
This manuscript is organized as follows. In Section 2, we begin by recalling the incompressible Navier-Stokes equations and the general projection method used to solve them numerically. In Section 3, we present the principle result of this work: our collocated projection operator on Cartesian grids. We prove its stability on uniform grids, construct a sufficient stability condition for adaptive grids, and provide numerical evidence of its stability for various boundary conditions and grid configurations. In Section 4, we integrate our projection operator into a complete solver for the incompressible Navier-Stokes equations and observe overall second-order convergence. In Section 5, we illustrate the robustness and efficiency of our solver by using it to simulate several common validation problems of incompressible fluid flows in two and three spatial dimensions. We conclude in Section 6.
## 2 Mathematical model
### Incompressible Navier-Stokes equations
We consider a fluid set in a computational domain \(\Omega=\Omega^{+}\cup\Omega^{-}\subset\mathbb{R}^{2,3}\), where \(\Omega^{-}\) represents the fluid phase and \(\Omega^{+}\) represents all solid objects present in the fluid (see Figure 2), whose dynamics are modeled by the incompressible Navier-Stokes equations
\[\rho\left(\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot \nabla\mathbf{u}\right) =-\nabla p+\mu\Delta\mathbf{u}+\mathbf{f}\qquad\forall\mathbf{x} \in\Omega^{-}, \tag{1}\] \[\nabla\cdot\mathbf{u} =0\qquad\forall\mathbf{x}\in\Omega^{-}, \tag{2}\]
where \(\mathbf{u}\) is the fluid velocity, \(p\) is the pressure, \(\rho\) is the constant density, \(\mu\) is the constant viscosity, and \(\mathbf{f}\) are external forces such as the gravitational force. We denote the boundary of \(\Omega^{-}\) by \(\Gamma\) and the boundary of the computational domain \(\Omega\) by \(\partial\Omega\).
Figure 2: Computational domain shown in two dimensions. The fluid domain, \(\Omega^{-}\), is enclosed by the domain boundary, \(\partial\Omega\), and the interface, \(\Gamma\). Fluid properties, \(\rho\) and \(\mu\), are constant throughout the fluid domain. An arbitrary solid domain, \(\Omega^{+}\) is shown as a shaded region.
### General projection method
The classical projection method is a fractional-step scheme for solving the incompressible Navier-Stokes equations. In the first step, often referred to as the viscosity or advection step, we advance the velocity field \(\mathbf{u}^{n}\) at time step \(t_{n}\) to an intermediate velocity field \(\mathbf{u}^{*}\) by solving the momentum equation (1) where the pressure term is omitted. For example, using a standard first-order semi-explicit scheme, \(\mathbf{u}^{*}\) is constructed as the solution of
\[\rho\left(\frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}+\mathbf{u}^{n}\cdot \nabla\mathbf{u}^{n}\right)=\mu\Delta\mathbf{u}^{*}+\mathbf{f}. \tag{3}\]
Next, we project \(\mathbf{u}^{*}\) onto the divergence-free space to enforce the incompressibility condition (2). To do so, we use the Helmholtz-Hodge decomposition to separate the intermediate velocity into curl-free and divergence-free components as follows:
\[\mathbf{u}^{*}=\mathbf{u}^{n+1}+\nabla\phi, \tag{4}\]
where \(\mathbf{u}^{n+1}\) is the divergence-free velocity field at time \(t_{n+1}\) and \(\phi\) is the Hodge variable. Taking the divergence of the above equation (4), we obtain a Poisson equation for the Hodge variable,
\[\Delta\phi=\nabla\cdot\mathbf{u}^{*}. \tag{5}\]
Using the appropriate boundary conditions, the above equation is solved to compute the Hodge variable, and the divergence-free velocity is recovered as
\[\mathbf{u}^{n+1}=\mathbf{u}^{*}-\nabla\phi. \tag{6}\]
Note that equation (6) can be rewritten as an operator applied to \(\mathbf{u}^{*}\),
\[\mathbf{u}^{n+1}=\left(\mathcal{I}-\nabla\Delta^{-1}\nabla\cdot\right)\mathbf{ u}^{*}. \tag{7}\]
Thus, we define the generic projection operator \(P\) as,
\[P=\mathcal{I}-\nabla\Delta^{-1}\nabla\cdot. \tag{8}\]
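Before discussing the properties of \(P\), it may help to see the projection (4)-(8) in action in the simplest possible setting. The following Python sketch applies it spectrally on a uniform, doubly periodic grid; this is only an illustration of the Helmholtz-Hodge step, and the grid size and test fields are arbitrary choices unrelated to the nodal finite-difference discretization developed later in this paper.

```python
import numpy as np

n, L = 64, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                   # avoid 0/0 for the mean mode

def divergence(u, v):
    return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)))

def project(u_star, v_star):
    """Helmholtz-Hodge projection: solve Laplacian(phi) = div(u*) in Fourier
    space, then subtract grad(phi) from the intermediate velocity (Eqs. 5-6)."""
    uh, vh = np.fft.fft2(u_star), np.fft.fft2(v_star)
    div_h = 1j * KX * uh + 1j * KY * vh
    phi_h = -div_h / K2                          # hat(phi) = -hat(div)/|k|^2
    phi_h[0, 0] = 0.0
    u = np.real(np.fft.ifft2(uh - 1j * KX * phi_h))
    v = np.real(np.fft.ifft2(vh - 1j * KY * phi_h))
    return u, v

# Intermediate velocity field with a nonzero divergence.
u_star = np.cos(X) * np.sin(Y) + 0.3 * np.sin(2.0 * X)
v_star = -np.sin(X) * np.cos(Y) + 0.1 * np.cos(Y)
u, v = project(u_star, v_star)

print("max |div u*|      :", np.abs(divergence(u_star, v_star)).max())
print("max |div u^(n+1)| :", np.abs(divergence(u, v)).max())
```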
### Properties of the projection operator
The analytic operator \(P\), defined by equation (8), is indeed a projection (_i.e._\(P^{2}=P\)) as
\[P^{2}=\left(\mathcal{I}-\nabla\Delta^{-1}\nabla\cdot\right)^{2} =\mathcal{I}-2\nabla\Delta^{-1}\nabla\cdot+\nabla\Delta^{-1}\nabla \cdot\nabla\left(\nabla\cdot\nabla\right)^{-1}\nabla\cdot, \tag{9}\] \[=\mathcal{I}-\nabla\Delta^{-1}\nabla\cdot. \tag{10}\]
The projection property ensures that the projected field is exactly divergence-free and, thus, that the incompressibility condition is exactly satisfied. Moreover, since the gradient and divergence are the negative transpose of each other (\(\nabla^{T}=-\nabla\cdot\)), the projection is symmetric (\(P^{T}=P\)) and orthogonal, _i.e._
\[\left\|\mathbf{u}^{n+1}\right\|^{2}=\left\|P\mathbf{u}^{*}\right\| ^{2} =\left\|\mathbf{u}^{*}\right\|^{2}-2<\mathbf{u}^{*}|\nabla\Phi>+ \left\|\nabla\Phi\right\|^{2}, \tag{11}\] \[=\left\|\mathbf{u}^{*}\right\|^{2}+2<\nabla\cdot\mathbf{u}^{*}| \Phi>+\left\|\nabla\Phi\right\|^{2},\] (12) \[=\left\|\mathbf{u}^{*}\right\|^{2}+2<\nabla\cdot\nabla\Phi|\Phi> +\left\|\nabla\Phi\right\|^{2},\] (13) \[=\left\|\mathbf{u}^{*}\right\|^{2}-\left\|\nabla\Phi\right\|^{2}, \tag{14}\]
and therefore contracting,
\[\left\|P\mathbf{u}^{*}\right\|\leq\left\|\mathbf{u}^{*}\right\|. \tag{15}\]
In the discrete case, if the composition and negative transpose properties are preserved, \(P\) remains contracting, and, provided that the temporal integrator is stable, the entire update from \(t_{n}\) to \(t_{n+1}\) is stable.
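The statement above can be checked numerically with a few lines of Python: take any matrix standing in for a discrete divergence, define the discrete gradient as its negative transpose and the Laplacian as their composition, and the resulting operator is an orthogonal, contracting projection. The random matrix below is only a stand-in for a sanity check of the algebra, not a discretization of the Navier-Stokes equations.

```python
import numpy as np

rng = np.random.default_rng(4)

# A generic "discrete divergence" mapping n velocity unknowns to m scalars.
m, n = 20, 50
D = rng.standard_normal((m, n))
G = -D.T                                   # negative transpose property
Lap = D @ G                                # composition property
P = np.eye(n) - G @ np.linalg.pinv(Lap) @ D

u = rng.standard_normal(n)
print("||P^2 - P||     :", np.linalg.norm(P @ P - P))                      # idempotent
print("||P - P^T||     :", np.linalg.norm(P - P.T))                        # symmetric
print("||P u|| / ||u|| :", np.linalg.norm(P @ u) / np.linalg.norm(u))      # <= 1
print("||D P u||       :", np.linalg.norm(D @ P @ u))                      # divergence-free
```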
### Boundary conditions
We focus on single-phase flows around solid objects; hence, the possible boundary conditions are no-slip conditions at the walls and the interface, and inflow/outflow conditions at the walls of the computational domain. The no-slip boundary condition is a Dirichlet boundary condition on the velocity, \(\mathbf{u}|_{\Gamma}\), where \(\mathbf{u}|_{\Gamma}\) is zero if the interface is static and equal to the velocity of the interface if it is moving. This, along with equation (4), implies that the boundary condition of the intermediate velocity field \(\mathbf{u}^{*}\) is
\[\mathbf{u}^{*}|_{\Gamma}=\mathbf{u}|_{\Gamma}+\nabla\phi^{n+1}|_{\Gamma}. \tag{16}\]
When solving for the Hodge variable in the projection step, we prescribe homogeneous Neumann boundary conditions, _i.e._
\[(\nabla\phi\cdot\mathbf{n})|_{\Gamma}=0, \tag{17}\]
where \(\mathbf{n}\) is the outward-facing normal vector to the boundary. Note that the value of \(\phi\) is not computed when finding the intermediate velocity field \(\mathbf{u}^{*}.\) To keep the viscosity and projection decoupled, we can use the Hodge variable at the previous time step, \(\phi^{n}\), to serve as an initial guess to the correct boundary condition of \(\mathbf{u}^{*}\) and iterate until we have convergence in both \(\mathbf{u}^{*}\) and \(\phi\). In [28; 20; 43], it was observed that using this initial guess led to very few iterations being needed to reach convergence. Other iterative corrections can be designed, (see for example [69]). The \(\nabla\phi^{n+1}\) term in the above (16) should be seen as a splitting error. It can be shown to be first-order in time. Even in the absence of iterative correction, the presence of this term will only introduce converging errors on the boundary conditions [11].
Potential flux boundary conditions on the walls of the computational domain \(\partial\Omega\) correspond to Neumann boundary conditions. This means that the boundary condition for the intermediate velocity field \(\mathbf{u}^{*}\) is
\[(\nabla\mathbf{u}^{*}\cdot\mathbf{n})|_{\partial\Omega}=(\nabla\mathbf{u} \cdot\mathbf{n})|_{\partial\Omega}+(\nabla\nabla\Phi\cdot\mathbf{n})_{\partial \Omega}. \tag{18}\]
The boundary condition for the Hodge variable is a Dirichlet boundary condition. Again the last term in the above equation should be seen as a splitting and converging (\(\mathcal{O}(\Delta t)\)) error. In this case, an iterative correction for this term would require differentiating the Hodge variable twice, which may produce noisy and inaccurate results. Therefore, this term is often disregarded. If boundary conditions are prescribed on the pressure directly, the corresponding boundary conditions on the Hodge variable are obtained from the pressure reconstruction formula.
### Pressure reconstruction
In our formulation of the projection method, the pressure is never directly computed. If desired, we can compute it using (3) and (4) by requiring that the discrete momentum equation be satisfied for \(\mathbf{u}^{n+1}\) and \(p\), obtaining
\[\rho\left(\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{\Delta t}+\mathbf{u}^{n} \cdot\nabla\mathbf{u}^{n}\right)=-\nabla p+\mu\Delta\mathbf{u}^{n+1}. \tag{19}\]
Since \(\mathbf{u}^{n+1}=\mathbf{u}^{*}-\nabla\phi\) from the Helmholtz-Hodge decomposition
\[\rho\left(\frac{\mathbf{u}^{*}-\nabla\phi-\mathbf{u}^{n}}{\Delta t}+\mathbf{u }^{n}\cdot\nabla\mathbf{u}^{n}\right)=-\nabla p+\mu\Delta(\mathbf{u}^{*}- \nabla\phi), \tag{20}\]
and because \(\mathbf{u}^{*}\) is defined as the solution of (3), this simplifies into
\[-\rho\frac{\nabla\phi}{\Delta t}=-\nabla p-\mu\Delta\nabla\phi, \tag{21}\]
or equivalently,
\[\nabla p=\frac{\rho}{\Delta t}\nabla\phi-\mu\nabla\Delta\phi, \tag{22}\]
and so, up to an additive constant, we arrive at our pressure reconstruction formula
\[p=\frac{\rho}{\Delta t}\phi-\mu\Delta\phi. \tag{23}\]
We point out that using a different time integrator may alter the above formula (see _e.g._[28; 67]).
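As an illustration only, the sketch below applies this reconstruction on a uniform grid, assuming the Hodge variable is stored as a two-dimensional array; the five-point Laplacian with periodic wrap via `np.roll` is a stand-in for the nodal discretization actually used by the solver, and all names are illustrative.

```python
import numpy as np

def reconstruct_pressure(phi, rho, mu, dt, dx):
    """Recover the pressure from the Hodge variable via Eq. (23):
    p = (rho / dt) * phi - mu * Lap(phi).
    Uniform-grid sketch: 5-point Laplacian with periodic wrap."""
    lap = (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0) +
           np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1) - 4.0 * phi) / dx**2
    return (rho / dt) * phi - mu * lap
```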
## 3 Node-based projection operator on Cartesian grids
In this section, we introduce our collocated projection method, analyze its properties on periodic uniform and adaptive Cartesian grids, and verify the stability of the method numerically. We find that, due to the collocation, the nodal Laplacian is no longer the composition of the nodal divergence and gradient, and thus our operator is not a true projection. However, we see that our operator can be iterated to recover the canonical projection onto the divergence-free space. We derive a technical sufficient condition for these iterations to converge and, in the periodic uniform case, are able to prove that our operator is, in fact, contracting.
The convergence analysis relies on the connection between the staggered and collocated frameworks. To this effect, we show that the collocated projection can be related to the staggered one through interpolation procedures and leverage important facts about the discrete projection operator on staggered grids. The interpolations are solely introduced for the purpose of proving the stability of the collocated projection; they do not appear in the numerical implementation.
### Definitions and key observation
Our nodal projection operator \(\mathcal{P}_{N}\) is defined from the nodal gradient \(\mathcal{G}_{N}\), divergence \(\mathcal{D}_{N}\) and Laplacian \(\mathcal{L}_{N}\) as
\[\mathcal{P}_{N}=\mathcal{I}-\mathcal{G}_{N}\mathcal{L}_{N}^{-1}\mathcal{D}_{N}, \tag{24}\]
where \(\mathcal{I}\) is the appropriate identity operator and \(\mathcal{L}_{N}^{-1}\) is the operator that for any vector \(X\), returns \(Y\) the solution of \(\mathcal{L}_{N}Y=X\) with problem-dependent homogeneous boundary conditions 1. Again, the nodal Laplacian is not the composition of the nodal divergence and nodal gradient (_i.e._\(\mathcal{L}_{N}\neq\mathcal{G}_{N}\mathcal{D}_{N}\)) and our nodal projection operator is not a true projection (_i.e._\(\mathcal{P}_{N}^{2}\neq\mathcal{P}_{N}\)). This means that the projected velocity field is not guaranteed to be divergence-free (_i.e._\(\mathcal{D}_{N}\mathcal{P}_{N}X\neq 0\)). However, we can show that our projection operator only preserves the incompressible modes of the velocity field. Therefore, if we iterate our projection operator and if the iterated operator converges, then it must converge to the canonical orthogonal projection.
Footnote 1: If the boundary conditions are chosen so that the solution is not uniquely defined, we will enforce that the solution is zero at one specific location.
To demonstrate this, we assume that \(\mathcal{P}_{N}^{k}\to\mathcal{P}_{N}^{\infty}\) as \(k\to\infty\), and remark that if \(\lambda_{i}\) are the eigenvalues of \(\mathcal{P}_{N}\), the eigenvalues of \(\mathcal{P}_{N}^{\infty}\) are \(\lim_{k\to\infty}\lambda_{i}^{k}\), and since they are finite, they can only be \(1\) or \(0\). In addition, the eigenvectors of \(\mathcal{P}_{N}^{\infty}\) corresponding to the eigenvalue \(1\) are the eigenvectors of \(\mathcal{P}_{N}\) for the same eigenvalue. Therefore \(\mathcal{P}_{N}^{\infty}\) is the projection on the eigenspace of \(\mathcal{P}_{N}\) corresponding to the eigenvalue \(1\). Clearly, any divergence-free modes \(X\) belong to this eigenspace since
\[\mathcal{P}_{N}^{\infty}X=\lim_{k\to\infty}\mathcal{P}_{N}^{k}X=X. \tag{25}\]
Figure 3: Connection between the discrete divergence on collocated (\(\mathcal{D}_{N}\)) and staggered grids (\(\mathcal{D}\)). The arrows represent the finite difference coefficients used in each contributing term.
Now, if \(X\) is an eigenvector associated to the eigenvalue 1, \(\mathcal{P}_{N}X=X\), and so
\[\mathcal{G}_{N}\mathcal{L}_{N}^{-1}\mathcal{D}_{N}X=0, \tag{26}\]
meaning that \(\mathcal{L}_{N}^{-1}\mathcal{D}_{N}X\) must be constant. Assuming that a homogeneous Dirichlet boundary condition is enforced somewhere, this constant must be zero
\[\mathcal{L}_{N}^{-1}\mathcal{D}_{N}X=0, \tag{27}\]
and thus \(\mathcal{D}_{N}X=0\), indicating that \(X\) is incompressible. The eigenspace for the eigenvalue 1 is the incompressible space, and therefore \(\mathcal{P}_{N}^{\infty}\), if it exists, is the canonical projection on the incompressible space.
In the subsequent sections, we construct a sufficient condition for this limit to exist. However, we note that in practice, this limit may not be reached as only a small number of iterations will be performed, and the advanced velocity field may not be exactly divergence-free. These deviations can be kept arbitrarily small by adjusting the stopping criteria, and in practice, a few iterations are needed to achieve a reasonably small error tolerance. In fact, a single iteration may be enough to obtain an overall second-order solution (see Section 4.2.2).
### Uniform Cartesian grids
#### 3.2.1 Periodic domains
In this case, we define \(\mathcal{G}_{N}\), \(\mathcal{L}_{N}\), \(\mathcal{D}_{N}\) to be the standard periodic second-order finite difference gradient, Laplacian, and divergence. As Figure 3 illustrates, the nodal divergence operator \(\mathcal{D}_{N}\) can be seen as the composition of the staggered discrete divergence \(\mathcal{D}\) with the linear interpolation operator from the set of nodes \(N\) to the set of edges \(E\), \(\mathcal{I}_{N}^{E}:N\to E\):
\[\mathcal{D}_{N}=\mathcal{D}\mathcal{I}_{N}^{E}. \tag{28}\]
Similarly, the discrete nodal gradient \(\mathcal{G}_{N}\) is related to the staggered operator and the interpolation from the edges to the nodes \(\mathcal{I}_{E}^{N}\) as
\[\mathcal{G}_{N}=\mathcal{I}_{E}^{N}\mathcal{G}. \tag{29}\]
The staggered operators satisfy the negative transpose property
\[\mathcal{D}^{T}=-\mathcal{G}, \tag{30}\]
Figure 4: Spectrum of the projection operator with periodic boundary conditions (left) and no-slip boundary conditions (right) on a uniform grid. We observe that the eigenvalues are in the interval [0,1], which suggests that the operator is stable. The density of the eigenvalues for each grid resolution is shown.
and the two interpolations are transpose of each other
\[(\mathcal{I}_{E}^{N})^{T}=\mathcal{I}_{N}^{E}. \tag{31}\]
Finally, we observe that the nodal Laplacian \(\mathcal{L}_{N}\) is identical to the staggered one \(\mathcal{L}\)
\[\mathcal{L}_{N}=\mathcal{L}. \tag{32}\]
Using these observations, we can rewrite the projection operator (24) as
\[\mathcal{P}_{N}=\mathcal{I}-I_{E}^{N}\mathcal{G}\mathcal{L}^{-1}\mathcal{D}I_ {N}^{E}. \tag{33}\]
All these ingredients allow us to follow the contraction's proof (see Section 2.3): the norm of the projected velocity is
\[\left\|\mathcal{P}_{N}X\right\|^{2}=\left\|X\right\|^{2}-2<X|I_{E}^{N} \mathcal{G}\mathcal{L}^{-1}\mathcal{D}I_{N}^{E}X>+\left\|I_{E}^{N}\mathcal{G} \mathcal{L}^{-1}\mathcal{D}I_{N}^{E}X\right\|^{2}, \tag{34}\]
setting \(\mathcal{L}^{-1}\mathcal{D}I_{N}^{E}X=\Phi\), and using the transpose properties of the divergence and interpolation we get
\[\left\|\mathcal{P}_{N}X\right\|^{2}=\left\|X\right\|^{2}+2<\mathcal{D}I_{N}^{ E}X|\Phi>+\left\|I_{E}^{N}\mathcal{G}\Phi\right\|^{2}, \tag{35}\]
which is identical to
\[\left\|\mathcal{P}_{N}X\right\|^{2}=\left\|X\right\|^{2}+2<\mathcal{L} \mathcal{L}^{-1}\mathcal{D}I_{N}^{E}X|\Phi>+\left\|I_{E}^{N}\mathcal{G}\Phi \right\|^{2}, \tag{36}\]
giving us
\[\left\|\mathcal{P}_{N}X\right\|^{2}=\left\|X\right\|^{2}+2<\mathcal{L}\Phi| \Phi>+\left\|I_{E}^{N}\mathcal{G}\Phi\right\|^{2}, \tag{37}\]
and since \(\mathcal{L}=\mathcal{D}\mathcal{G}\) and \(\mathcal{G}=-\mathcal{D}^{T}\)
\[\left\|\mathcal{P}_{N}X\right\|^{2}=\left\|X\right\|^{2}-2<\mathcal{G}\Phi| \mathcal{G}\Phi>+\left\|I_{E}^{N}\mathcal{G}\Phi\right\|^{2}, \tag{38}\]
we have
\[\left\|\mathcal{P}_{N}X\right\|^{2}=\left\|X\right\|^{2}-2\left\|\mathcal{G} \Phi\right\|^{2}+\left\|I_{E}^{N}\mathcal{G}\Phi\right\|^{2}, \tag{39}\]
and so
\[\left\|\mathcal{P}_{N}X\right\|^{2}\leq\left\|X\right\|^{2}-\left(2-\left\|I_ {E}^{N}\right\|^{2}\right)\left\|\mathcal{G}\Phi\right\|^{2}. \tag{40}\]
Thus, if \(\left\|I_{E}^{N}\right\|<\sqrt{2}\), we have \(\left\|\mathcal{P}_{N}X\right\|\leq\left\|X\right\|\) and \(\mathcal{P}_{N}\) is contracting, and therefore converging. Here, the interpolations are averaging operators, so their \(L^{\infty}\)-norms are less than or equal to one, and the projection is contracting.
In Figure 4, we display the spectrum of the projection operator, computed directly from the discrete numerical operator and for increasing grid resolution. Since the operator is contracting, we expect its spectrum to lie in the interval \([0,1]\), and indeed it does. We color the eigenvalues by the density of the eigenvalue for that grid resolution. Note that for all grid resolutions, the eigenvalue with the highest density is the \(\lambda=1\) eigenvalue, which corresponds to an eigenvector with an incompressible mode.
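To make the iteration of Section 3.1 concrete in this periodic uniform setting, the following minimal sketch builds the nodal operators with central differences, solves the Poisson problem with an FFT (which diagonalizes the periodic five-point Laplacian exactly), and applies the operator repeatedly to a smooth compressible field. The grid size, the test field, and the number of iterations are illustrative choices, and this is not the adaptive implementation discussed later.

```python
import numpy as np

def ddx(f, h, axis):
    """Second-order central difference on a periodic grid."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * h)

def solve_nodal_poisson(rhs, h):
    """FFT solve of the 5-point Laplacian L_N psi = rhs on a periodic grid,
    fixing the (undetermined) constant mode to zero."""
    n = rhs.shape[0]
    theta = 2.0 * np.pi * np.fft.fftfreq(n)
    sym = (2.0 * np.cos(theta)[:, None] + 2.0 * np.cos(theta)[None, :] - 4.0) / h**2
    rhs_hat = np.fft.fft2(rhs)
    psi_hat = np.zeros_like(rhs_hat)
    mask = np.abs(sym) > 1e-12
    psi_hat[mask] = rhs_hat[mask] / sym[mask]
    return np.real(np.fft.ifft2(psi_hat))

def nodal_projection(u, v, h):
    """One application of P_N = I - G_N L_N^{-1} D_N, all fields collocated at nodes."""
    div = ddx(u, h, 0) + ddx(v, h, 1)
    psi = solve_nodal_poisson(div, h)
    return u - ddx(psi, h, 0), v - ddx(psi, h, 1)

# Iterate on a smooth, compressible field: the nodal divergence tends to zero.
n = 64
h = 2.0 * np.pi / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y) + 0.5 * np.sin(X)
v = -np.cos(X) * np.sin(Y) + 0.5 * np.sin(Y)
for k in range(4):
    u, v = nodal_projection(u, v, h)
    print(k, np.max(np.abs(ddx(u, h, 0) + ddx(v, h, 1))))
```

For this smooth test field the nodal divergence decreases rapidly with each application, consistent with the contraction argument above.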
#### 3.2.2 No-slip boundary conditions
For a more realistic context, we turn our attention to no-slip boundary conditions. On rectangular domains, this means that we set the \(x\)-velocity \(u=0\) along the horizontal walls, the \(y\)-velocity \(v=0\) along the vertical walls, and the normal gradient of the Hodge variable to be zero on all walls. Currently, we do not have proof of the stability of this operator, as a general formulation of this operator with these boundary conditions, unlike in the periodic case, is not easily known. Instead, we reconstruct the numerical operator by projecting the canonical basis vectors of \(\mathbb{R}^{n}\), where \(n\) is the dimension of the square matrix operator, with the boundary condition set beforehand. The resulting spectrum for increasing grid resolution is depicted in Figure 4, and is contained in \([0,1]\). The addition of the boundary condition introduces an extra constraint on the velocity field, causing the projection to be even more restrictive and thus the admissible incompressible modes (_i.e._ eigenvector associated to \(\lambda=1\)) to be rarer. In addition, the no-slip boundary condition introduces a \(0\) eigenvalue to the operator, which corresponds to the canonical basis vectors that are non-zero only along the walls with the prescribed boundary condition.
### Extension to Adaptive Grids
The negative transpose property is lost on adaptive grids, so the projected velocity norm cannot easily be bounded. Instead, we focus on the spectrum of the projection, and since we know that the incompressible space is the eigenspace associated with the eigenvalue 1 (see Section 3.1), we only need to find a sufficient condition for all eigenvalues associated to compressible eigenvectors to be strictly less than 1 in absolute value. In other words, a condition under which our projection is critically stable.
We start our study by formally defining all involved operators and proving the stability of the staggered projection. We then leverage this property to find a sufficient convergence condition. For the sake of clarity, this entire theoretical study is done on two-dimensional periodic grids. The extension to three dimensions is tedious but straightforward.
#### 3.3.1 Construction of the discrete differential operators
To discretize our differential operators, we follow the work of Min and Gibou [47]. The central idea is to define ghost values at T-junctions (often called hanging nodes) to circumvent the lack of direct neighbors. This is done using standard Taylor analysis and derivative approximation. For example, referring to Figure 5, node \(n_{0}\) does not have a direct neighbor to the right, so we introduce a ghost node \(n_{r}\) on the face delimited by nodes \(n_{rt}\) and \(n_{rb}\). For any nodal quantity \(\phi\), sampled at the existing nodes, we can calculate a third-order accurate ghost value \(\phi_{r}\) using the information at \(n_{0}\), at its direct neighbors in all three other directions (_i.e._\(n_{l},n_{t},n_{b}\)), and at the neighboring nodes \(n_{rt}\) and \(n_{rb}\) as
\[\phi_{r}=\frac{r_{b}\phi_{r_{t}}+r_{t}\phi_{r_{b}}}{r_{t}+r_{b}}-\frac{r_{t}r_{b}}{t+b}\left(\frac{\phi_{t}-\phi_{0}}{t}-\frac{\phi_{0}-\phi_{b}}{b}\right). \tag{41}\]
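For concreteness, a direct transcription of Eq. (41) is sketched below; the argument names mirror the distances and neighbor values of Figure 5 and are purely illustrative of how the ghost value would be assembled from the quadtree neighborhood.

```python
def ghost_value_right(phi_0, phi_t, phi_b, phi_rt, phi_rb, t, b, r_t, r_b):
    """Third-order ghost value phi_r at a T-junction, following Eq. (41).
    phi_0: value at the node missing its right neighbor; phi_t, phi_b: its top and
    bottom neighbors at distances t and b; phi_rt, phi_rb: values at the nodes
    bracketing the ghost point, at distances r_t and r_b from it."""
    linear = (r_b * phi_rt + r_t * phi_rb) / (r_t + r_b)
    correction = (r_t * r_b / (t + b)) * ((phi_t - phi_0) / t - (phi_0 - phi_b) / b)
    return linear - correction
```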
Now, we should see each node as having four direct neighbors, one in each direction, with at most one ghost neighbor. Using these new neighborhoods, we can construct discretizations for the standard differential operators, such as the nodal Laplacian \(\mathcal{L}_{N}\), divergence \(\mathcal{D}_{N}\) and gradient \(\mathcal{G}_{N}\). As Figure 6 illustrates, these operators evaluated at any node \(n_{0}\) for any given sampled Hodge variable \(\phi\) and any given velocity field
Figure 5: Finite difference discretization on quadtree grids. Here, node \(n_{0}\) has no direct neighbor to the right, and thus a ghost node \(n_{r}\) (\(\bigcirc\)) must be constructed using the existing neighboring nodes (\(\blacktriangledown\)). Standard central discretizations can then be constructed using this ghosted neighborhood.
\(\mathbf{u}=(u,v)\) are expressed in terms of their neighborhood information as
\[\mathcal{L}_{N}\phi\big{|}_{0} =\frac{2}{r+\ell}\left(\frac{\phi_{r}-\phi_{0}}{r}-\frac{\phi_{0}- \phi_{\ell}}{\ell}\right)+\frac{2}{t+b}\left(\frac{\phi_{t}-\phi_{0}}{t}-\frac{ \phi_{0}-\phi_{b}}{b}\right), \tag{42}\] \[\mathcal{D}_{N}(u,v)\big{|}_{0} =\frac{u_{r}-u_{\ell}}{r+\ell}+\frac{v_{t}-v_{b}}{t+b},\] (43) \[\mathcal{G}_{N}\phi\big{|}_{0} =\left(\frac{\ell}{r+\ell}\frac{\phi_{r}-\phi_{0}}{r}+\frac{r}{r+ \ell}\frac{\phi_{0}-\phi_{\ell}}{\ell},\quad\frac{b}{t+b}\frac{\phi_{t}-\phi_{ 0}}{t}+\frac{t}{t+b}\frac{\phi_{0}-\phi_{b}}{b}\right). \tag{44}\]
Note that our definitions of the Laplacian and gradient are identical to those in previous studies (see _e.g._[47; 45; 66; 68; 69; 37]), and in particular, they have been shown to yield supra convergence, in the sense that both the solution and its gradient converge with second-order accuracy [47]. The divergence operator, however, does not follow the construction proposed by Min and Gibou [47; 45]. Here, we use the standard central difference formula to construct a first-order accurate approximation of the divergence operator instead of creating weighted averages of the forward and backward derivative and constructing a second-order centered difference approximation. It was observed in [45] that using the second-order divergence in a purely nodal context leads to an unstable projection operator.
To establish the connection between the staggered and collocated projection, we must formally define all discrete differential operators and the interpolations needed to bridge between them, as well as the spaces they act on. Using ghost nodes allows us to see these operators as natural extensions of their uniform counterparts.
#### 3.3.2 Interpolations: notations and definitions
From the set of nodes and edges \(N\) we define \(\tilde{N}\) as the set of all grid nodes and ghost neighbors (depicted as empty circles in Figure 6), and refer to it as the set of ghosted nodes. For each ghost node, we define a
Figure 6: Connections between the staggered and collocated operators on periodic quadtree grids. The nodal Laplacian \(\mathcal{L}_{N}\) is the composition of the staggered divergence \(\mathcal{D}\) and gradient \(\mathcal{G}\). The nodal divergence \(\mathcal{D}_{N}\) and gradient \(\mathcal{G}_{N}\) are related to their staggered counterpart through linear interpolation. Ghost variables are depicted as empty symbols (\(\mathsf{P}\) and \(\mathsf{O}\)). As in Figure 5, arrows represent the finite difference coefficients. For the gradient and Laplacian, only the x-contribution is illustrated. The y-contribution is computed similarly.
ghost edge (depicted as empty triangles in Figure 6), centered between the ghost node and the corresponding hanging node. We denote by \(\tilde{E}\) the set of ghosted edges, _i.e._ grid, and ghost edges. As we did before, for any two discrete spaces \(A\) and \(B\), defined from the grid structure, we will define \(\mathcal{I}_{A}^{B}:A\to B\) as the interpolation from \(A\) to \(B\).
As Figure 6 illustrates, we define the interpolation from the set of ghosted nodes to ghosted edges, \(\mathcal{I}_{\tilde{N}}^{\tilde{E}}:\tilde{N}\to\tilde{E}\), and the interpolation from the set of ghosted edges to the set of nodes, \(\mathcal{I}_{\tilde{E}}^{N}:\tilde{E}\to N\), as the standard linear interpolations from the nearest neighbors. We also define \(\mathcal{I}_{N}^{\tilde{N}}:N\to\tilde{N}\) as the ghost operator, which complements the values at the grid nodes with the interpolated values at the ghost nodes. Similarly, we define \(\mathcal{I}_{E}^{\tilde{E}}:E\to\tilde{E}\).
We define the operator \(\mathcal{I}_{\tilde{E}}^{E}\) as the canonical restriction from \(\tilde{E}\) to \(E\). The staggered gradient operator, \(\mathcal{G}:\tilde{N}\to\tilde{E}\), is constructed using the standard second-order central difference scheme and the staggered divergence operator, \(\mathcal{D}:\tilde{E}\to N\) is constructed using a weighted first-order central difference scheme (see Figure 6).
#### 3.3.3 Staggered projection on quadtree grids
We start by observing that the nodal Laplacian is obtained by taking the divergence of the gradient computed for the ghosted values, _i.e._\(\mathcal{L}_{N}=\mathcal{DGI}_{N}^{\tilde{N}}\), and so the projection operator \(\mathcal{P}:E\to E\) has the form
\[\mathcal{P}=\mathcal{I}-\mathcal{I}_{\tilde{E}}^{E}\mathcal{G}\mathcal{I}_{N}^{\tilde{N}}\left(\mathcal{DGI}_{N}^{\tilde{N}}\right)^{-1}\mathcal{D}\mathcal{I}_{E}^{\tilde{E}}. \tag{45}\]
Because \(\mathcal{I}_{\tilde{E}}^{E}\mathcal{I}_{E}^{\tilde{E}}=\mathcal{I}\) (_i.e._ un-ghosting the ghosted edges' values returns the edges' values), we can rewrite the above definition by defining the operator \(\mathcal{P}_{\tilde{E}}\) as
\[\mathcal{P}=\mathcal{I}_{\tilde{E}}^{E}\underbrace{\left(\mathcal{I}-\mathcal{ G}\mathcal{I}_{N}^{\tilde{N}}\left(\mathcal{DGI}_{N}^{\tilde{N}}\right)^{-1} \mathcal{D}\right)}_{\mathcal{P}_{\tilde{E}}}\mathcal{I}_{E}^{\tilde{E}}. \tag{46}\]
\(\mathcal{P}_{\tilde{E}}\) should be interpreted as the projection on the space of ghosted edges. Squaring it, we see that
\[\mathcal{P}_{\tilde{E}}^{2} =\mathcal{I}-2\mathcal{G}\mathcal{I}_{N}^{\tilde{N}}\left( \mathcal{DGI}_{N}^{\tilde{N}}\right)^{-1}\mathcal{D}+\mathcal{G}\mathcal{I}_{N }^{\tilde{N}}\left(\mathcal{DGI}_{N}^{\tilde{N}}\right)^{-1}\mathcal{D} \mathcal{G}\mathcal{I}_{N}^{\tilde{N}}\left(\mathcal{DGI}_{N}^{\tilde{N}} \right)^{-1}\mathcal{D}, \tag{47}\] \[\mathcal{P}_{\tilde{E}}^{2} =\mathcal{I}-\mathcal{G}\mathcal{I}_{N}^{\tilde{N}}\left(\mathcal{ DGI}_{N}^{\tilde{N}}\right)^{-1}\mathcal{D},\] (48) \[\mathcal{P}_{\tilde{E}}^{2} =\mathcal{P}_{\tilde{E}}, \tag{49}\]
and so \(\mathcal{P}_{\tilde{E}}\) is indeed a projection2.
Footnote 2: Note that, as is the case in the continuous setting, this property only relies on the fact that the staggered Laplacian is the composition of the divergence and gradient operators. We did not, for example, have to assume that these two operators are the negative transpose of each other.
#### 3.3.4 Stability of the collocated projection
To connect the staggered and collocated projections, we start by recognizing that the nodal Laplacian \(\mathcal{L}_{N}\) is the composition of the staggered divergence with the staggered gradient (see Figure 6)
\[\mathcal{L}_{N}=\mathcal{DGI}_{N}^{\tilde{N}}. \tag{50}\]
As it was the case on uniform grids, the nodal divergence and gradient remain related to their staggered counterparts through linear interpolations, only now these relationships involve the ghost interpolation operator \(\mathcal{I}_{N}^{\tilde{N}}\)
\[\mathcal{D}_{N} =\mathcal{DI}_{\tilde{N}}^{\tilde{E}}\mathcal{I}_{N}^{\tilde{N}}, \tag{51}\] \[\mathcal{G}_{N} =\mathcal{I}_{\tilde{E}}^{N}\mathcal{G}\mathcal{I}_{N}^{\tilde{N}}. \tag{52}\]
Using the above expression, the nodal projection operator
\[\mathcal{P}_{N}=\mathcal{I}-\mathcal{G}_{N}\left(\mathcal{L}_{N}\right)^{-1} \mathcal{D}_{N}, \tag{53}\]
can be expressed as
\[\mathcal{P}_{N}=\mathcal{I}-\mathcal{I}_{\tilde{E}}^{N}\mathcal{G}\mathcal{I}_{N}^{\tilde{N}}\left(\mathcal{D}\mathcal{G}\mathcal{I}_{N}^{\tilde{N}}\right)^{-1}\mathcal{D}\mathcal{I}_{\tilde{N}}^{\tilde{E}}\mathcal{I}_{N}^{\tilde{N}}. \tag{54}\]
We consider a compressible eigenpair \((\lambda,X)\) such that
\[\mathcal{P}_{N}X=\lambda X,\qquad\mathcal{D}_{N}X\neq 0,\qquad\text{and} \qquad\lambda\neq 1. \tag{55}\]
We multiply the whole equation on the left by the operator \(\mathcal{I}_{\tilde{N}}^{\tilde{E}}\mathcal{I}_{N}^{\tilde{N}}\), and define \(Y=\mathcal{I}_{\tilde{N}}^{\tilde{E}}\mathcal{I}_{N}^{\tilde{N}}X\), to obtain
\[\mathcal{I}_{\tilde{N}}^{\tilde{E}}\mathcal{I}_{N}^{\tilde{N}}\mathcal{P}_{N}X=Y-\underbrace{\mathcal{I}_{\tilde{N}}^{\tilde{E}}\mathcal{I}_{N}^{\tilde{N}}\mathcal{I}_{\tilde{E}}^{N}}_{\mathcal{I}_{p}}\underbrace{\mathcal{G}\mathcal{I}_{N}^{\tilde{N}}\left(\mathcal{D}\mathcal{G}\mathcal{I}_{N}^{\tilde{N}}\right)^{-1}\mathcal{D}}_{\mathcal{Q}_{\tilde{E}}=\mathcal{I}-\mathcal{P}_{\tilde{E}}}Y=\lambda Y, \tag{56}\]
therefore \(\left(\mathcal{I}-\mathcal{I}_{p}\mathcal{Q}_{\tilde{E}}\right)Y=\lambda Y,\) and \(\lambda\) is also an eigenvalue for the operator \(\mathcal{M}=\mathcal{I}-\mathcal{I}_{p}\mathcal{Q}_{\tilde{E}}\).
To conclude, we will find a condition under which \(\mathcal{M}^{k}\) converges as \(k\rightarrow\infty\), which is a sufficient condition for the eigenvalues of \(\mathcal{M}\), other than one, to be strictly less than one in absolute value. To do this, we define \(\epsilon=\mathcal{I}-\mathcal{I}_{p}\), and rewrite \(\mathcal{M}\) as
\[\mathcal{M}=\mathcal{I}-\left(\mathcal{I}-\epsilon\right)\mathcal{Q}_{\tilde{E}}, \tag{57}\]
and
\[\mathcal{M}=\mathcal{P}_{\tilde{E}}+\epsilon\mathcal{Q}_{\tilde{E}}. \tag{58}\]
Since \(\mathcal{P}_{\tilde{E}}\) is a projection,
\[\mathcal{Q}_{\tilde{E}}\mathcal{P}_{\tilde{E}}=(\mathcal{I}-\mathcal{P}_{\tilde{E}})\mathcal{P}_{\tilde{E}}=0. \tag{59}\]
Squaring \(\mathcal{M}\), we get
\[\mathcal{M}^{2}=\mathcal{P}_{\tilde{E}}^{2}+\mathcal{P}_{\tilde{E}}\epsilon\mathcal{Q}_{\tilde{E}}+\epsilon\mathcal{Q}_{\tilde{E}}\mathcal{P}_{\tilde{E}}+\left(\epsilon\mathcal{Q}_{\tilde{E}}\right)^{2}, \tag{60}\]
which, using the fact that \(\mathcal{P}_{\tilde{E}}^{2}=\mathcal{P}_{\tilde{E}}\) and the orthogonality condition (59), simplifies into
\[\mathcal{M}^{2}=\mathcal{P}_{\tilde{E}}\left(\mathcal{I}+\epsilon\mathcal{Q}_{\tilde{E}}\right)+\left(\epsilon\mathcal{Q}_{\tilde{E}}\right)^{2}. \tag{61}\]
By recurrence, we obtain that
\[\mathcal{M}^{k}=\mathcal{P}_{\tilde{E}}\left(\sum_{i=0}^{k-1}\left(\epsilon\mathcal{Q}_{\tilde{E}}\right)^{i}\right)+\left(\epsilon\mathcal{Q}_{\tilde{E}}\right)^{k}, \tag{62}\]
from which we conclude that the limit of \(\mathcal{M}^{k}\) as \(k\rightarrow\infty\) exists if and only if \(\rho(\epsilon\mathcal{Q}_{\tilde{E}})\), the spectral radius of \(\epsilon\mathcal{Q}_{\tilde{E}}\), is less than \(1\), _i.e._
\[\rho\left(\epsilon\mathcal{Q}_{\tilde{E}}\right)<1. \tag{63}\]
Before we proceed to the numerical validations, we should discuss the implications of the above condition. First, we note that condition (63) is satisfied if \(\|\epsilon\mathcal{Q}_{\tilde{E}}\|<1\), and so it is sufficient to have
\[\|\epsilon\|<\frac{1}{\|\mathcal{Q}_{\tilde{E}}\|}. \tag{64}\]
Since the staggered operators \(\mathcal{P}_{\tilde{E}}\) and \(\mathcal{Q}_{\tilde{E}}\) on uniform grids are orthogonal with norm equal to one, we expect their norms on adaptive grids to be close to one (_i.e._\(\|\mathcal{Q}_{\tilde{E}}\|\approx 1\)), and so we expect the above stability condition to be met if \(\|\epsilon\|\lessapprox 1\). In other words, if the norm of the interpolation error introduced by \(\mathcal{I}_{p}\) is less than one (a realistic expectation, as \(\mathcal{I}_{p}\) is the product of consistent interpolations and is therefore consistent itself), then we expect the projection to converge.
Intuitively, one could think that for the iterated projection to converge, \(\mathcal{I}_{p}\) must be stable and that, therefore, its norm must be less than one. Our proof suggests that this intuition is wrong and that, somewhat surprisingly, an unstable interpolation can lead to a converging algorithm as long as (64) is satisfied. This is essential, as \(\mathcal{I}_{p}\) involves interpolating from the nodes to the ghost nodes, which, as formula (41) illustrates, is done using quadratic interpolating polynomials and can in principle introduce instabilities. The other two interpolations (\(\mathcal{I}_{\tilde{N}}^{\tilde{E}}\) and \(\mathcal{I}_{\tilde{E}}^{N}\)) involved in \(\mathcal{I}_{p}\) are constructed by taking weighted averages, and are thus stable.
Our proof does not rely on a specific form for the ghost node interpolation; rather, it tells whether, for a given interpolation, the projection is stable. Unfortunately, our general ghost node construction is too complicated for us to prove whether condition (63) is met or not. Even for simpler constructions, such as a linear one where the ghost values are obtained as weighted averages of existing values, so that \(\mathcal{I}_{p}\) is the product of three averaging stochastic matrices and thus itself a square stochastic matrix with non-negative coefficients, we can only conclude that its eigenvalues lie in the disk \(D(0,1)\), which is insufficient.
Overall, the condition (63) feels achievable and, in fact, less restrictive than one would have naively thought, and so we are reasonably hopeful that our method is stable. Yet, the constructions of our interpolation operators, even in the low-order scenario, are too complicated for us to prove it formally. In addition, we should point out that boundary effects are left out of our theoretical study. We, therefore, resort to an in-depth computational verification.
### Computational verification
In this section, we verify the stability of our nodal projection operator by computing its convergence properties using a standard example with homogeneous Neumann boundary conditions and by verifying the stability of our nodal projection operator in the presence of different combinations of boundary and interface conditions. For the examples in this section and subsequent tests, wall and boundary conditions are treated using the second-order hybrid Finite-Volume/Finite Difference approach presented in [69], itself based on previous studies [68]. For interface conditions, the fluid quantities in \(\Omega^{-}\) are systematically extended to \(\Omega^{+}\) using the PDE-based approach proposed by Aslam in [7]. All other problem-specific treatments are described in the appropriate sections.
#### 3.4.1 Projection test - supra-convergence of the Hodge variable
We verify the convergence and stability of our projection operator by repeatedly applying the operator to an initially non-divergence-free velocity field and computing the error between the resulting velocity field and the exact divergence-free velocity field, as was done in previous studies [45; 51; 28]. Doing so, we test the stability of our projection and the convergence of the Hodge variable as the grid resolution goes to zero. We consider the two-dimensional velocity field:
\[u^{*}(x,y)= \sin(x)\cos(y)+x(\pi-x)y^{2}\left(\frac{y}{3}-\frac{\pi}{2} \right), \tag{65}\] \[v^{*}(x,y)= -\cos(x)\sin(y)+y(\pi-y)x^{2}\left(\frac{x}{3}-\frac{\pi}{2} \right), \tag{66}\]
in the domain \(\Omega=[0,\pi]^{2}\) with Neumann boundary conditions. The divergence-free velocity field corresponding to \(u^{*}\) and \(v^{*}\) above is:
\[u(x,y)= \sin(x)\cos(y), \tag{67}\] \[v(x,y)= -\cos(x)\sin(y). \tag{68}\]
We perform this test by starting with a randomized quadtree with 240 initial splits. Our goal here is to verify the order of accuracy of our projection operator in extreme scenarios where the grid is highly non-graded. We then successively refine the grid by further splitting each leaf, each time halving the local mesh size, and monitor the error. The results of this test are reported in Table 1 and Figure 8, and the initial random quadtree is shown in Figure 7. As expected, the projection is stable, and we see second-order convergence in the velocity field.
\begin{table}
\begin{tabular}{|c|c c c c|} \hline Splits & \(L^{1}\) & Order & \(L^{\infty}\) & Order \\ \hline
0 & 7.82e-02 & - & 3.48e-01 & - \\
1 & 1.96e-02 & 2.00 & 1.05e-01 & 1.73 \\
2 & 8.47e-03 & 1.21 & 3.20e-02 & 1.72 \\
3 & 2.61e-03 & 1.70 & 8.99e-03 & 1.83 \\
4 & 7.16e-04 & 1.87 & 2.61e-03 & 1.79 \\
5 & 1.87e-04 & 1.94 & 7.76e-04 & 1.75 \\ \hline \end{tabular}
\end{table}
Table 1: The final error and order of convergence for the projection test on a randomly generated quadtree with an increasing number of splits.
Figure 8: The \(L^{1}\) (left) and \(L^{\infty}\) (right) errors for the projection test on a randomly generated quadtree with increasing number of splits.
Figure 7: Initial highly non-graded quadtree used for the Projection Test initialized with 240 random splits
#### 3.4.2 Projection test - stability
In this test, we specifically look at the stability of our projection operator in the presence of a variety of boundary and interface conditions. Using the same initial velocity field as in the previous example, (65), we successively apply the projection operator to this field and monitor the norm of the variation between the velocity at the current projection iteration and the velocity after the final projection iteration. For a stable operator, this variation will tend to zero, up to the solver tolerance.
The results of this test are shown in Figure 9. As expected, the variation in the norm of the velocity decreases as we successively apply our projection operator. For these examples, the tolerance of our linear solver was set to \(10^{-12}\). As expected, we see that our collocated projection operator is numerically stable for all boundary and interface conditions tested. We also note that in practice, only a small number of iterations of the projection operator will be used and, as we later show, second-order accuracy in both the velocity and Hodge variable can be achieved with only a single projection applied.
## 4 Nodal Navier-Stokes solver
In this section, we integrate our nodal projection operator into a general framework for solving the incompressible Navier-Stokes equations. We begin by presenting an overview of our time-stepping algorithm and then we provide a brief introduction of the specific numerical techniques used to compute the intermediate velocity field. We conclude this section by solving an analytical solution to the Navier-Stokes equations with our nodal solver and comparing these results with the MAC-based solver presented in [28]. Furthermore, we use this example to discuss the practical number of projection iterations that need to be applied when using our nodal solver.
### Overview of the time stepping algorithm
We advance our velocity field from \(\mathbf{u}^{n}\) to \(\mathbf{u}^{n+1}\) using a standard two-step projection method. In the Viscosity step, we compute the intermediate velocity field, \(\mathbf{u}^{*}\), using a semi-Lagrangian Backward Difference Formula (SLBDF) scheme for the temporal integration of the momentum equation where the viscous terms are treated implicitly. In the Projection step, we repeatedly apply our nodal projection operator with the stopping criteria described in Section 4.1.3. To account for the splitting error on the boundary (see Section 2.4), we iterate the Viscosity and Projection steps following a boundary correction procedure detailed in Section 4.1.3. A summary of our time-stepping algorithm can be seen in Figure 10 and details of the specific techniques used can be found in the subsequent sections.
#### 4.1.1 Viscosity step - SLBDF integrator
To solve for the auxiliary velocity field \(\mathbf{u}^{*}\), we start by integrating the momentum equation (1) using a semi-Lagrangian backward difference (SLBDF) scheme [40; 28; 67] with an adaptive time-step \(\Delta t_{n}=t_{n+1}-t_{n}\), and implicit treatment of the viscous term, namely
\[\rho\left(\alpha\frac{\mathbf{u}^{*}-\mathbf{u}_{d}^{n}}{\Delta t_{n}}+\beta \frac{\mathbf{u}_{d}^{n}-\mathbf{u}_{d}^{n-1}}{\Delta t_{n-1}}\right)=\mu \triangle\mathbf{u}^{*}+\mathbf{f}, \tag{73}\]
where
\[\alpha=\frac{2\Delta t_{n}+\Delta t_{n-1}}{\Delta t_{n}+\Delta t_{n-1}},\ \ \beta=-\frac{\Delta t_{n}}{\Delta t_{n}+\Delta t_{n-1}}. \tag{74}\]
The departing velocities \(\mathbf{u}_{d}^{n}\) and \(\mathbf{u}_{d}^{n-1}\) are obtained by interpolating the velocity field at the departure points \(\mathbf{x}_{d}^{n}\) and \(\mathbf{x}_{d}^{n-1}\) and corresponding time steps. The adaptive time-step, \(\Delta t_{n}\) is determined by pre-setting the CFL number, \(\text{CFL}=\max\left\|\mathbf{u}\right\|\Delta t/\Delta x\). To find the departure points, we follow the characteristic curve backward using a second-order Runge-Kutta method (RK2, midpoint):
\[\hat{\mathbf{x}} =\mathbf{x}^{n+1}-\frac{\Delta t_{n}}{2}\cdot\mathbf{u}^{n+1}( \mathbf{x}^{n+1}), \tag{75}\] \[\mathbf{x}_{d}^{n} =\mathbf{x}^{n+1}-\Delta t_{n}\cdot\mathbf{u}^{n+\frac{1}{2}}( \hat{\mathbf{x}}), \tag{76}\]
Figure 9: Variation in \(L^{2}\) norm of the velocity after successive iterations of the projection operator for different sets of boundary conditions. Wall boundary conditions sets are denoted N for Neumann, D for Dirichlet, and M for mixed (a combination of Dirichlet and Neumann). When an interface is present, the wall boundary condition sets are augmented with IN for a Neumann interface and ID for a Dirichlet interface.
and
\[\bar{\mathbf{x}} =\mathbf{x}^{n+1}-\frac{(\Delta t_{n}+\Delta t_{n-1})}{2}\cdot\mathbf{u}^{n+1}(\mathbf{x}^{n+1}), \tag{77}\] \[\mathbf{x}_{d}^{n-1} =\mathbf{x}^{n+1}-(\Delta t_{n}+\Delta t_{n-1})\cdot\mathbf{u}^{\mathrm{mid}}(\bar{\mathbf{x}}). \tag{78}\]
We interpolate the intermediate velocities, \(\mathbf{u}^{n+\frac{1}{2}}(\hat{\mathbf{x}})\) and \(\mathbf{u}^{\mathrm{mid}}(\bar{\mathbf{x}})\), from the velocity fields at \(t_{n-1}\) and \(t_{n}\) as
\[\mathbf{u}^{n+\frac{1}{2}}(\hat{\mathbf{x}}) =\frac{2\Delta t_{n-1}+\Delta t_{n}}{2\Delta t_{n-1}}\mathbf{u}^{ n}(\hat{\mathbf{x}})-\frac{\Delta t_{n}}{2\Delta t_{n-1}}\mathbf{u}^{n-1}( \hat{\mathbf{x}}), \tag{79}\] \[\mathbf{u}^{\mathrm{mid}}(\bar{\mathbf{x}}) =\frac{\Delta t_{n}+\Delta t_{n-1}}{2\Delta t_{n-1}}\mathbf{u}^{ n}(\bar{\mathbf{x}})+\frac{\Delta t_{n-1}-\Delta t_{n}}{2\Delta t_{n-1}} \mathbf{u}^{n-1}(\bar{\mathbf{x}}). \tag{80}\]
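The following sketch assembles the pieces of Eqs. (74)-(80): the variable-step BDF coefficients and the backward characteristic tracing. The velocity fields are assumed to be available as callables returning the velocity at arbitrary points (in the solver these would be quadratic interpolants on the quadtree); everything else is illustrative.

```python
def slbdf_coefficients(dt_n, dt_nm1):
    """Variable-step BDF2 coefficients of Eq. (74)."""
    alpha = (2.0 * dt_n + dt_nm1) / (dt_n + dt_nm1)
    beta = -dt_n / (dt_n + dt_nm1)
    return alpha, beta

def departure_points(x, u_np1, u_n, u_nm1, dt_n, dt_nm1):
    """Backward RK2 (midpoint) tracing of Eqs. (75)-(78).
    u_np1, u_n, u_nm1 are callables returning the velocity at given points.
    Returns the departure points x_d^n and x_d^{n-1}."""
    # half step back and extrapolated velocity at t_{n+1/2}, Eqs. (75), (79), (76)
    x_hat = x - 0.5 * dt_n * u_np1(x)
    u_half = ((2.0 * dt_nm1 + dt_n) / (2.0 * dt_nm1)) * u_n(x_hat) - (
        dt_n / (2.0 * dt_nm1)) * u_nm1(x_hat)
    x_d_n = x - dt_n * u_half

    # trace back over both steps, Eqs. (77), (80), (78)
    x_bar = x - 0.5 * (dt_n + dt_nm1) * u_np1(x)
    u_mid = ((dt_n + dt_nm1) / (2.0 * dt_nm1)) * u_n(x_bar) + (
        (dt_nm1 - dt_n) / (2.0 * dt_nm1)) * u_nm1(x_bar)
    x_d_nm1 = x - (dt_n + dt_nm1) * u_mid
    return x_d_n, x_d_nm1
```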
#### 4.1.2 Viscosity step - improved trajectory reconstruction
As the semi-Lagrangian method shown in Section 4.1.1 relies on knowing what \(\mathbf{u}^{n+1}\) is, we must look to other explicit methods to find the departure point. Here, we discuss two departure point methods that utilize an RK2 scheme and approximate \(\mathbf{u}^{n+1}\).
The first and most obvious choice for utilizing an RK2 method with known information is to replace \(\mathbf{u}^{n+1}\) in (75) and (77) with \(\mathbf{u}^{n}.\) This midpoint rule formulation was popularized by Xiu and Karniadakis [71] and has been a popular integration choice for projection methods that utilize a semi-Lagrangian discretization of the advection term (see, _e.g._[28; 45]). Using \(\mathbf{u}^{n}\) is akin to a constant Taylor approximation to \(\mathbf{u}^{n+1}.\)
This constant approximation is often sufficient for lower CFL numbers, but for larger CFL numbers it can cause the computed departure point to lie too far from the characteristic prior to interpolation, leading to less accurate velocity terms. Performing a first-order Taylor expansion of \(\mathbf{u}^{n+1}(\mathbf{x}^{n+1})\) around \(t_{n}\) and using an Euler approximation for the time derivative, we get
\[\mathbf{u}^{n+1}(\mathbf{x}^{n+1})=\mathbf{u}^{n}(\mathbf{x}^{n+1})+\frac{ \Delta t_{n}}{\Delta t_{n-1}}\left(\mathbf{u}^{n}(\mathbf{x}^{n+1})-\mathbf{ u}^{n-1}(\mathbf{x}^{n+1})\right)+\mathcal{O}(\Delta t^{2}), \tag{81}\]
Figure 10: Outline of the algorithm for the construction of the solution \(\mathbf{u}^{n+1}\) at time \(t_{n+1}\) from the solution \(\mathbf{u}^{n}\) at the previous time step \(t_{n}\).
which we substitute into (75) and (77). By having a higher-order approximation for \(\mathbf{u}^{n+1}(\mathbf{x}^{n+1})\), we expect our numerical simulations to have more accurate results, especially if a large time-step is used.
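A minimal sketch of the two surrogates for \(\mathbf{u}^{n+1}(\mathbf{x}^{n+1})\) is given below: the classical constant-in-time choice and the linear-in-time extrapolation of Eq. (81). The inputs are the known velocities evaluated at the arrival points; the function name and flag are illustrative.

```python
def u_np1_guess(u_n_at_x, u_nm1_at_x, dt_n, dt_nm1, improved=True):
    """Explicit surrogate for u^{n+1}(x^{n+1}) used to start the backward trace.
    improved=False: classical choice u^{n+1} ~ u^n (constant in time).
    improved=True : linear-in-time extrapolation of Eq. (81)."""
    if not improved:
        return u_n_at_x
    return u_n_at_x + (dt_n / dt_nm1) * (u_n_at_x - u_nm1_at_x)
```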
While we can mitigate this issue with a higher-order integration scheme, such as a fourth-order Runge-Kutta method, the order of accuracy of the semi-Lagrangian method is determined by both the integration scheme used to find the departure point and the interpolation scheme used to evaluate \(\mathbf{u}\) at the departure point. Since we are using a quadratic interpolation scheme, using an integration scheme that is higher than second-order accurate is superfluous.
We use both integrating schemes for our numerical validation in Section 5. The classical scheme is used to directly compare the nodal projection method with previous numerical studies and experiments, and the new improved integrating scheme is used to compare the accuracy of the solver when implementing each integrating scheme.
#### 4.1.3 Stopping criteria for iterative procedures
As our projection operator is not an exact orthogonal projection, we use an iterative procedure to create an approximately divergence-free velocity field. We perform successive projections until
\[\left\|\mathbf{u}^{n+1}-\mathcal{P}_{N}\mathbf{u}^{n+1}\right\|<\epsilon_{i} \left\|\mathbf{u}^{n+1}\right\|, \tag{82}\]
or a predefined maximum number of iterations, \(K_{max}\), has been reached. Typically, we choose \(\epsilon_{i}=10^{-3}\), set \(K_{max}=5\), and only a few \((1-3)\) iterations are required to reach convergence.
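In code, the iterative projection with the stopping criterion (82) reduces to a short loop; `apply_projection` stands for one application of \(\mathcal{P}_{N}\) (one Poisson solve plus a gradient correction) and is an assumption of this sketch, as are the default tolerances, which simply echo the values quoted above.

```python
import numpy as np

def project_velocity(u, apply_projection, eps_i=1e-3, k_max=5):
    """Apply the nodal projection repeatedly until criterion (82) is met
    or k_max iterations have been performed; u packs all velocity components
    in a single array so that norms are straightforward."""
    for k in range(k_max):
        u_new = apply_projection(u)
        if np.linalg.norm(u - u_new) < eps_i * np.linalg.norm(u):
            return u_new, k + 1
        u = u_new
    return u, k_max
```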
The boundary correction \(\mathbf{c}\) is designed to compensate for the splitting errors (see Section 2.4) in a similar fashion to what was proposed in [67]. It is constructed so that when the correction reaches convergence (see Eq. (72)), the boundary condition on the solid object is satisfied (_i.e._\(\mathbf{u}^{n+1}=\mathbf{u}|_{\Gamma}\) on \(\Gamma\)). The parameter \(\omega\) controls the convergence rate and must be chosen in the range \(0<\omega<1\). Similar corrections are performed on the walls of the computational domain \(\partial\Omega\) but are not detailed here for the sake of concision. Typically, we terminate the outer iterations when the error in the interface's velocity (\(\left\|\mathbf{u}^{n+1}-\mathbf{u}|_{\Gamma}\right\|\)) is less than \(\epsilon_{o}=10^{-3}\).
#### 4.1.4 Refinement criteria
As it was done in our previous studies [67; 69; 16; 64; 28], the mesh is dynamically refined near the solid interface and where high gradients of vorticity occur. At each iteration, we recursively apply the following splitting criterion at each cell.
We split each cell \(\mathcal{C}\) if
\[\min_{n\in\mathrm{nodes}(\mathcal{C})}|\phi(n)|\leq B\cdot\mathrm{Lip}(\phi) \cdot\mathrm{diag}(\mathcal{C})\quad\text{and}\quad\mathrm{level}(\mathcal{C })\leq\max_{\mathrm{level}}, \tag{83}\]
or
\[\min_{n\in\mathrm{nodes}(\mathcal{C})}\mathrm{diag}(\mathcal{C})\cdot\frac{ \left\|\nabla\mathbf{u}(n)\right\|}{\left\|\mathbf{u}\right\|_{\infty}}\geq T _{V}\quad\text{and}\quad\mathrm{level}(\mathcal{C})\leq\max_{V}, \tag{84}\]
or
\[\mathrm{level}(\mathcal{C})\leq\min_{\mathrm{level}}. \tag{85}\]
If none of these criteria are met, we merge \(\mathcal{C}\) by removing all its descendants.
Here \(\mathrm{Lip}(\phi)\) is an upper estimate of the minimal Lipschitz constant of the level set function \(\phi\). Since the level set used to create the mesh will be reinitialized (_i.e._\(|\nabla\phi|=1\)), we use \(\mathrm{Lip}(\phi)=1.2\). \(\mathrm{diag}(\mathcal{C})\) is the length of the diagonal of cell \(\mathcal{C}\). \(B\) is the user-specified width of the uniform band around the interface. \(T_{V}\) is the vorticity threshold and \(\max_{V}\) is the maximum grid level allowed for the vorticity-based refinement. Condition (85) ensures that a minimum resolution of \(\min_{\mathrm{level}}\) is maintained.
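As a sketch, the splitting decision for a single cell can be written as below; the threshold values other than \(\mathrm{Lip}(\phi)=1.2\) are illustrative placeholders, and the cell data (level-set values, velocity-gradient norms, diagonal, level) are assumed to be provided by the quadtree data structure.

```python
def should_split(cell_phi, cell_gradu_norms, u_inf, diag, level,
                 B=3.0, lip=1.2, T_V=0.1, min_level=4, max_level=11, max_V=10):
    """Cell-splitting decision following Eqs. (83)-(85).
    cell_phi: level-set values at the cell's nodes; cell_gradu_norms: norms of
    the velocity gradient at the cell's nodes; u_inf: max norm of the velocity
    field; diag: cell diagonal length; level: current cell level."""
    near_interface = (min(abs(p) for p in cell_phi) <= B * lip * diag
                      and level <= max_level)
    high_gradient = (min(g * diag / u_inf for g in cell_gradu_norms) >= T_V
                     and level <= max_V)
    too_coarse = level <= min_level
    return near_interface or high_gradient or too_coarse
```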
### Verification - analytic vortex
#### 4.2.1 Convergence study
We consider the following analytical solution to the Navier-Stokes Equations,
\[u(x,y) =\sin(x)\cos(y)\cos(t), \tag{86}\] \[v(x,y) =-\cos(x)\sin(y)\cos(t),\] (87) \[p(x,y) =0. \tag{88}\]
We set \(\mu=1\), \(\rho=1\), and define the forcing terms as
\[f_{x} =\sin(x)\cos(y)\left(2\mu\cos(t)-\rho\sin(t)\right)+\rho\cos^{2}(t) \sin(x)\cos(x), \tag{89}\] \[f_{y} =\cos(x)\sin(y)\left(\rho\sin(t)-2\mu\cos(t)\right)+\rho\cos^{2}(t )\sin(y)\cos(y). \tag{90}\]
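As a quick symbolic sanity check (not part of the solver), one can verify with SymPy that the forcing terms (89)-(90) are exactly those required for the fields (86)-(88) to satisfy the momentum equation:

```python
import sympy as sp

x, y, t, mu, rho = sp.symbols('x y t mu rho')
u = sp.sin(x) * sp.cos(y) * sp.cos(t)
v = -sp.cos(x) * sp.sin(y) * sp.cos(t)

# forcing required by the momentum equation (with p = 0):
# f = rho * (u_t + (u . grad) u) - mu * Laplacian(u)
f_x = rho * (sp.diff(u, t) + u * sp.diff(u, x) + v * sp.diff(u, y)) \
      - mu * (sp.diff(u, x, 2) + sp.diff(u, y, 2))
f_y = rho * (sp.diff(v, t) + u * sp.diff(v, x) + v * sp.diff(v, y)) \
      - mu * (sp.diff(v, x, 2) + sp.diff(v, y, 2))

fx_paper = sp.sin(x) * sp.cos(y) * (2 * mu * sp.cos(t) - rho * sp.sin(t)) \
           + rho * sp.cos(t) ** 2 * sp.sin(x) * sp.cos(x)
fy_paper = sp.cos(x) * sp.sin(y) * (rho * sp.sin(t) - 2 * mu * sp.cos(t)) \
           + rho * sp.cos(t) ** 2 * sp.sin(y) * sp.cos(y)

print(sp.simplify(f_x - fx_paper), sp.simplify(f_y - fy_paper))  # expect: 0 0
```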
We monitor the convergence of our solver as our mesh is refined and compare the results against the MAC-based solver of [28]. Each of the simulations is run until a final time of \(t_{f}=\pi/3\) is reached with an adaptive time step defined by \(\Delta t_{n}=\Delta x/\max\|u_{n}\|\). We set the maximum number of projections, \(K_{max}\), to \(5\), with an error tolerance of \(10^{-3}\) (see Section 4.1.3), and use the boundary correction procedure explained in Section 4.1.3. The results of this example are shown in Table 2 and in Figure 11.
We see that our nodal projection method achieves second-order convergence in both the \(L^{1}\) and \(L^{\infty}\) norm for velocity as expected based on our discretization. For the Hodge variable, we also see second-order convergence in both \(L^{1}\) and \(L^{\infty}\). If we compare the results of our nodal solver with the MAC-based solver of [28], we notice a distinct improvement in the \(L^{\infty}\) norm for velocity.
#### 4.2.2 Impact of successive projections - determining \(K_{max}\)
In order to develop a practical guideline for the number of projection iterations that should be used in our nodal solver, we again consider the analytic vortex example from the previous section. We continue using the same adaptive time step, grid refinement criteria, and boundary correction procedures previously described. However, instead of using the error threshold described in 4.1.3, we explicitly define the number
\begin{table}
\begin{tabular}{|c|c c c c|c c c c|} \hline \multicolumn{1}{|c|}{} & \multicolumn{4}{c|}{**Velocity**} & \multicolumn{1}{c}{} \\ \hline & \multicolumn{4}{c|}{Nodes} & \multicolumn{4}{c|}{MAC} \\ Level (max:min) & \(L^{1}\) & Order & \(L^{\infty}\) & Order & \(L^{1}\) & Order & \(L^{\infty}\) & Order \\ \hline
7:3 & 6.84e-04 & - & 2.34e-02 & - & 5.48e-03 & - & 2.18e-02 & - \\
8:4 & 1.25e-04 & 2.46 & 2.72e-03 & 3.11 & 1.76e-03 & 1.64 & 8.65e-03 & 1.33 \\
9:5 & 2.33e-05 & 2.42 & 9.56e-04 & 1.51 & 5.56e-04 & 1.66 & 3.28e-03 & 1.40 \\
10:6 & 5.61e-06 & 2.05 & 2.19e-04 & 2.12 & 1.61e-04 & 1.79 & 1.64e-03 & 1.00 \\
11:7 & 1.36e-06 & 2.05 & 4.44e-05 & 2.30 & 3.62e-05 & 2.15 & 8.84e-04 & 2.09 \\
12:8 & 4.15e-07 & 1.71 & 9.55e-06 & 2.22 & 8.59e-06 & 2.08 & 2.45e-04 & 0.65 \\ \hline \multicolumn{1}{|c|}{} & \multicolumn{4}{c|}{**Hodge**} & \multicolumn{1}{c}{} \\ \hline & \multicolumn{4}{c|}{Nodes} & \multicolumn{4}{c|}{MAC} \\ Level (max:min) & \(L^{1}\) & Order & \(L^{\infty}\) & Order & \(L^{1}\) & Order & \(L^{\infty}\) & Order \\ \hline
7:3 & 4.43e-05 & - & 4.16e-04 & - & 3.17e-04 & - & 1.62e-03 & - \\
8:4 & 1.67e-05 & 1.41 & 9.48e-05 & 2.13 & 3.47e-05 & 3.19 & 2.47e-04 & 2.71 \\
9:5 & 2.55e-06 & 2.71 & 1.44e-05 & 2.72 & 4.48e-06 & 2.95 & 5.85e-05 & 2.08 \\
10:6 & 3.07e-07 & 3.05 & 2.45e-06 & 2.55 & 7.20e-07 & 2.64 & 1.44e-05 & 2.02 \\
11:7 & 3.12e-08 & 3.30 & 5.49e-07 & 2.16 & 7.41e-08 & 3.28 & 2.81e-06 & 2.36 \\
12:8 & 3.67e-09 & 3.09 & 1.05e-07 & 2.39 & 1.14e-08 & 2.70 & 6.72e-07 & 2.06 \\ \hline \end{tabular}
\end{table}
Table 2: Convergence of x-component velocity and Hodge variable \(\Phi\) for the Nodes and MAC [28] implementations.
Figure 11: Visualization of the error convergence for the velocity and Hodge variable.
of projections to be performed at each time step. The results of this test are shown in Table 3 for the explicit number of projections set to 1, 5, and 10.
We emphasize that for this example, only a single projection is required to achieve second-order accuracy in the \(L^{\infty}\) norm for both velocity and the Hodge variable. Increasing the number of projections does not significantly change the solution or the order of accuracy. For this reason, we do not set a minimum number of projections for our nodal solver. Instead, we specify a maximum number of projections, \(K_{max}\), and typically use a default value of 5 projections. In practice and using an error threshold of \(10^{-3}\) (see Section 4.1.3), we observe that a single projection is often sufficient.
#### 4.2.3 Verification of the improved departure point reconstruction scheme
Finally, we use the analytic vortex example to verify that our improved departure point reconstruction scheme remains second-order accurate. Using an identical setup as in Section 4.2, we monitor the convergence of our solver using the original SLBDF scheme presented in [28] and compare the results against the improved scheme presented herein. Again, the maximum number of projections, \(K_{max}\), is set to 5, with an error tolerance for the projection set to \(10^{-3}\). The \(L^{1}\) and \(L^{\infty}\) errors for the x-component velocity and the Hodge variable are shown in Table 4. As expected, we see that the improved reconstruction scheme maintains second-order accuracy.
## 5 Validation examples
In this section, we validate our node-based Navier-Stokes solver using several canonical problems in two and three spatial dimensions. We compare the results of our solver with previous numerical studies [25; 23; 28; 51; 10; 12; 49; 44; 34; 20] and experimental results [19] demonstrating the robustness of our solver. Furthermore, we use the Karman vortex example to highlight the advantages of our improved departure point reconstruction scheme and the efficiency of our solver. Finally, we conclude this section by
\begin{table}
\begin{tabular}{|c|c c c c|c c c c|} \multicolumn{8}{c}{**Velocity**} \\ \hline & \multicolumn{4}{c|}{Original Reconstruction} & \multicolumn{4}{c|}{Improved Reconstruction} \\ Level (max:min) & \(L^{1}\) & Order & \(L^{\infty}\) & Order & \(L^{1}\) & Order & \(L^{\infty}\) & Order \\ \hline
7:3 & 6.84e-04 & - & 2.34e-02 & - & 9.08e-04 & - & 2.90e-02 & - \\
8:4 & 1.25e-04 & 2.46 & 2.72e-03 & 3.11 & 1.13e-04 & 3.0 & 2.73e-03 & 3.41 \\
9:5 & 2.33e-05 & 2.42 & 9.56e-04 & 1.51 & 2.38e-05 & 2.24 & 9.58e-04 & 1.51 \\
10:6 & 5.61e-06 & 2.05 & 2.19e-04 & 2.12 & 5.58e-06 & 2.09 & 2.20e-04 & 2.12 \\
11:7 & 1.36e-06 & 2.05 & 4.44e-05 & 2.30 & 1.30e-06 & 2.10 & 4.47e-05 & 2.30 \\
12:8 & 4.15e-07 & 1.71 & 9.55e-06 & 2.22 & 3.95e-07 & 1.72 & 9.55e-06 & 2.23 \\ \hline \multicolumn{8}{c}{**Hodge**} \\ \hline & \multicolumn{4}{c|}{Original Reconstruction} & \multicolumn{4}{c|}{Improved Reconstruction} \\ Level (max:min) & \(L^{1}\) & Order & \(L^{\infty}\) & Order & \(L^{1}\) & Order & \(L^{\infty}\) & Order \\ \hline
7:3 & 4.43e-05 & - & 4.16e-04 & - & 3.59e-05 & - & 3.18e-04 & - \\
8:4 & 1.67e-05 & 1.41 & 9.48e-05 & 2.13 & 1.69e-05 & 1.09 & 9.72e-05 & 1.71 \\
9:5 & 2.55e-06 & 2.71 & 1.44e-05 & 2.72 & 2.57e-06 & 2.72 & 1.47e-05 & 2.72 \\
10:6 & 3.07e-07 & 3.05 & 2.45e-06 & 2.55 & 3.10e-07 & 3.05 & 2.44e-06 & 2.59 \\
11:7 & 3.12e-08 & 3.30 & 5.49e-07 & 2.16 & 3.20e-08 & 3.28 & 5.59e-07 & 2.13 \\
12:8 & 3.67e-09 & 3.09 & 1.05e-07 & 2.39 & 3.83e-09 & 3.06 & 1.08e-07 & 2.37 \\ \hline \end{tabular}
\end{table}
Table 4: Convergence of the x-component velocity and Hodge variable \(\Phi\) using the original and improved departure point reconstruction schemes.
\begin{table}
\begin{tabular}{|c|c c c c|c c c c|c c c c|} \hline & \multicolumn{4}{c|}{Projections = 1} & \multicolumn{4}{c|}{Projections = 5} & \multicolumn{4}{c|}{Projections = 10} \\ & \multicolumn{2}{c}{Velocity} & \multicolumn{2}{c|}{Hodge} & \multicolumn{2}{c}{Velocity} & \multicolumn{2}{c|}{Hodge} & \multicolumn{2}{c}{Velocity} & \multicolumn{2}{c|}{Hodge} \\ Level & \(L^{\infty}\) & Order & \(L^{\infty}\) & Order & \(L^{\infty}\) & Order & \(L^{\infty}\) & Order & \(L^{\infty}\) & Order & \(L^{\infty}\) & Order \\ \hline
7:3 & 2.36e-02 & - & 4.16e-04 & - & 4.19e-02 & - & 9.65e-05 & - & 5.44e-02 & - & 7.01e-05 & - \\
8:4 & 2.79e-03 & 3.08 & 9.49e-05 & 2.13 & 6.86e-03 & 2.61 & 3.24e-05 & 1.58 & 8.74e-03 & 2.64 & 1.44e-05 & 2.28 \\
9:5 & 9.74e-04 & 1.52 & 1.44e-05 & 2.72 & 2.32e-03 & 1.56 & 5.81e-06 & 2.48 & 3.03e-03 & 1.53 & 2.89e-06 & 2.32 \\
10:6 & 2.25e-04 & 2.12 & 2.45e-06 & 2.56 & 4.92e-04 & 2.24 & 9.52e-07 & 2.61 & 6.38e-04 & 2.25 & 6.31e-07 & 2.20 \\
11:7 & 4.58e-05 & 2.29 & 5.49e-07 & 2.16 & 9.92e-05 & 2.31 & 2.11e-07 & 2.17 & 4.58e-05 & 3.80 & 5.48e-07 & 0.20 \\
12:8 & 9.96e-06 & 2.20 & 1.05e-07 & 2.38 & 1.82e-05 & 2.45 & 4.24e-08 & 2.32 & 2.45e-05 & 0.90 & 3.47e-08 & 3.98 \\ \hline \end{tabular}
\end{table}
Table 3: Impact of the number of projections on the \(L^{\infty}\) error for the velocity and Hodge variable \(\Phi\) for the analytic vortex example.
simulating a high Reynolds number flow past the UC Merced Beginnings sculpture to demonstrate that our solver can be used to simulate flows with non-trivial geometries.
### Driven cavity
The lid-driven cavity problem is a canonical example for validating an incompressible flow solver. Following the numerical experiments of Ghia _et al._ [25] and Erturk _et al._ [23], we consider the domain \(\Omega=[0,1]^{2}\) with the top wall moving at a unit velocity and no-slip boundary conditions on the velocity for all four walls. We set the density \(\rho=1\) and the Reynolds number to \(Re=1\ 000\).
The numerical simulations are performed using minimum and maximum quadtree levels of 8 and 10, respectively. We set the CFL number to 1.0 and dynamically refine the mesh based on the procedure discussed in Section 4.1.4. In Figure 12 we plot the results of our simulation with data from the numerical experiments of Ghia _et al._ [25] and Erturk _et al._ [23]. We see excellent agreement between our simulation and the previous studies.
### Oscillating cylinder
We demonstrate the ability of our solver to handle moving interfaces by considering an oscillating cylinder example. The oscillating cylinder is a popular example of a one-way fluid-structure interaction problem that has been studied in both experimental and numerical contexts [19; 33; 60; 28]. In this test, oscillatory motion is imposed on a cylinder in a quiescent fluid and the flow around the cylinder is characterized by the Reynolds number and the Keulegan-Carpenter number defined as
\[KC=\frac{u_{max}}{2rf}.\]
We follow the problem definition of [19] and [28] and set these dimensionless parameters to \(Re=100\) and \(KC=5\). We set the radius of the cylinder to \(r=0.05\), our frequency to \(f=1\), and the amplitude of oscillation to \(x_{0}=1.5914r\). We consider the computational domain of \(\Omega=[-1,1]^{2}\) with homogeneous Dirichlet boundary conditions and define the center position of our cylinder as
\[x_{c}=x_{0}(1-\cos(2\pi ft)).\]
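A small sketch of the prescribed motion and of the interface velocity that is imposed as the Dirichlet (no-slip) condition on the cylinder boundary; the time derivative is the only quantity added here, the rest restates the parameters above.

```python
import numpy as np

def cylinder_motion(t, r=0.05, f=1.0):
    """Prescribed center position x_c(t) and interface velocity dx_c/dt."""
    x0 = 1.5914 * r                                   # oscillation amplitude
    x_c = x0 * (1.0 - np.cos(2.0 * np.pi * f * t))
    u_c = 2.0 * np.pi * f * x0 * np.sin(2.0 * np.pi * f * t)
    # consistency check: u_max = 2*pi*f*x0 = 0.5, so KC = u_max / (2*r*f) = 5
    return x_c, u_c
```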
We compare the results of our solver with those of [19] and [28] by computing the drag coefficient. We compute the drag and lift forces on the disk by integrating
\[\mathbf{F}=\int\limits_{\Gamma}(-p\mathbf{I}+2\mu\sigma)\cdot\mathbf{n} \tag{91}\]
Figure 12: x- and y-components of the velocity field in the driven cavity example. The black circles are the results from Ghia _et al._[25] and the red stars are from Erturk _et al._[23]. The line results are taken from our solver. This simulation was run with a minimum and maximum quadtree level of 8 and 10, respectively, and a CFL of 1.0.
where \(\Gamma\) is the boundary of the cylinder, \(\sigma\) is the symmetric stress tensor and \(\mathbf{n}\) is the outward facing normal vector to the cylinder. The drag and lift coefficients, \(C_{d}\) and \(C_{l}\), are computed by re-scaling the components of \(\mathbf{F}\) by \(\rho r\left\|\mathbf{u}\right\|_{\infty}^{2}.\) Numerically, the integral is computed using the quadrature rules of [46] on third-order smooth extensions [7] of the integrands.
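To illustrate the post-processing step only, the sketch below approximates Eq. (91) with a simple quadrature over marker points on the cylinder boundary and rescales the result into drag and lift coefficients; it is a simplified stand-in for the geometry-aware quadrature of [46] used here, and the argument names are assumptions of this sketch.

```python
import numpy as np

def drag_lift_coefficients(p_vals, strain_tensors, normals, weights, mu, rho, r, u_ref):
    """Approximate F = int_Gamma (-p I + 2 mu sigma) . n by a marker-point
    quadrature, then rescale by rho * r * u_ref**2 to obtain (C_d, C_l).
    p_vals: pressure at the markers; strain_tensors: 2x2 symmetric rate-of-strain
    tensors sigma; normals: outward unit normals; weights: quadrature weights
    (e.g. arc-length segments associated with each marker)."""
    F = np.zeros(2)
    for p, sig, n, w in zip(p_vals, strain_tensors, normals, weights):
        traction = (-p * np.eye(2) + 2.0 * mu * np.asarray(sig)) @ np.asarray(n)
        F += w * traction
    scale = rho * r * u_ref**2
    return F[0] / scale, F[1] / scale
```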
For this example, we are able to use an adaptive mesh with minimum and maximum quadtree levels of \(4\) and \(11\), respectively. Additionally, we see good quantitative agreement with both studies for CFL numbers up to \(10\). Figure 13 shows the time history of the drag coefficient for \(Re=100\) and \(KC=5\) and a comparison with previous studies using a CFL of \(10\). In Figure 14, we show the time evolution of the vorticity and the adaptive grid refinement of the oscillating cylinder.
### Von Karman vortex street
Next, we validate our solver by considering the analysis of flow past a cylinder in two and three spatial dimensions.
#### 5.3.1 Two-dimensions
The setup for this problem consists of a cylinder of radius \(r=0.5\) with its center at the location \((8,0)\) in a rectangular domain \(\Omega=[0,32]\times[-8,8].\) For the velocity, we impose a Dirichlet boundary condition of \(u=1\) on the left, top, and bottom walls and a homogeneous Neumann boundary condition on the right wall. For the Hodge variable, we impose homogeneous Neumann boundary conditions on the left, top, and bottom walls and a Dirichlet boundary condition of \(\Phi=0\) on the right wall. For this case, we ran our simulation on adaptive quadtree grids with minimum and maximum refinement levels of \(7\) and \(10\), respectively. To solve the advective term of the momentum equation, we use the classical departure point reconstruction method described in Section 4.1.2. We also set a uniform band around the disk to properly capture the boundary layer around the interface.
We solve this problem for several CFL numbers to quantify the impact the time step has on the drag and lift forces on the cylinder and to act as a calibration tool for more complex two- and three-dimensional problems. In Figure 15, we see that as we increase the CFL number, both the average drag coefficient values and the amplitude of the coefficient oscillations increase. Qualitatively (Figure 16), we observe a more severe destabilization of the wake past the cylinder for the higher CFL numbers, which is consistent with the increased forces. Nonetheless, we see that our solver can produce good qualitative results for large CFL numbers. We also see that the drag and lift coefficients are consistent with previous numerical and experimental results (Table 5).
Figure 13: History of the drag coefficient for an oscillating cylinder with \(\mathrm{Re}=100\) and \(\mathrm{KC}=5\). The red crosses are the original experimental data from Dutsch et al. [19] and the orange stars were computed by Guittet _et al._[28]. Data points were digitized using the original figures. This simulation was run with a minimum and maximum quadtree level of \(4\) and \(11\), respectively, and a CFL of \(10.0\).
#### 5.3.2 Impact of departure point reconstruction
We implement the two-dimensional flow past disk problem in Section 5.3.1, with the only difference being that we use our improved departure point method. For \(Re=100\), we see a significant improvement in both the drag and lift coefficients for higher CFL numbers (Figure 17) compared to the implementation found in Section 5.3.1 (Figure 15).
These significant improvements are further realized when looking at the data qualitatively. In Figure 18, we compare a snapshot of the vorticity profile with \(CFL=10\) of the two departure point reconstruction methods. The destabilization is not as present in the improved departure point method (as expected from the
\begin{table}
\begin{tabular}{|l|l l|l l|} \hline & \multicolumn{2}{|c|}{Drag coefficient} & \multicolumn{2}{|c|}{Lift coefficient} \\ & \(Re=100\) & \(Re=200\) & \(Re=100\) & \(Re=200\) \\ \hline Ng _et al._[51] & \(1.368\pm.016\) & \(1.373\pm 0.050\) & \(\pm 0.360\) & \(\pm 0.724\) \\ Braza _et al._[10] & \(1.364\pm.015\) & \(1.400\pm 0.050\) & \(\pm 0.250\) & \(\pm 0.750\) \\ Calhoun [12] & \(1.330\pm.014\) & \(1.172\pm 0.058\) & \(\pm 0.298\) & \(\pm 0.668\) \\ Engelman _et al._[21] & \(1.411\pm.010\) & - & \(\pm 0.350\) & - \\ Guittet _et al._[28] & \(1.401\pm.011\) & \(1.383\pm 0.048\) & \(\pm 0.331\) & \(\pm 0.705\) \\ Present & \(1.387\pm.019\) & \(1.370\pm 0.060\) & \(\pm 0.346\) & \(\pm 0.762\) \\ \hline \end{tabular}
\end{table}
Table 5: Drag and lift coefficients for the two-dimensional flow past disk problem.
Figure 14: Time evolution of the vorticity and adaptive grid refinement for the oscillating cylinder with \(Re=100\) and \(KC=5\).
computed drag and lift coefficients) and looks similar to the more accurate, lower CFL number simulations.
#### 5.3.3 Three-dimensions
To further validate the solver and demonstrate its capabilities in three dimensions, we solve the three-dimensional flow past a sphere. The setup is similar to the 2D example in Section 5.3.1: the sphere has radius \(r=0.5\) and is centered at \((8,0,0)\) in the domain \(\Omega=[0,32]\times[-8,8]\times[-8,8]\), with minimum and
Figure 16: Visualization of the 2-dimensional flow past disk problem with Re = 100 and levels = 10:7. The vorticity and grid are shown.
Figure 15: Drag and lift computations for the 2-dimensional flow past disk problem with \(Re=100\) for CFL numbers of 1, 2, 5, and 10.
maximum refinement levels of 4 and 10, respectively. Like in two dimensions, we quantify the capabilities of our solver by computing the drag coefficients in a similar way to equation (91). These results can be seen in Table 6, which shows that our data agree with previous numerical and experimental studies. We also visualize the \(Re=350\) simulation in Figure 19.
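Equation (91) is not reproduced here; as a reference point, the following minimal sketch shows the standard normalisation used to turn an integrated drag force on the sphere into a drag coefficient with the free-stream values of this setup. The force value in the usage line is purely illustrative and not an output of our solver.

```python
import math

def drag_coefficient(f_drag, rho=1.0, u_inf=1.0, radius=0.5):
    """Standard normalisation C_D = F / (0.5 * rho * u_inf^2 * A),
    with the sphere's frontal area A = pi * r^2."""
    frontal_area = math.pi * radius ** 2
    return f_drag / (0.5 * rho * u_inf ** 2 * frontal_area)

# Illustrative only: an integrated force of ~0.254 corresponds to C_D ~ 0.646,
# the value we report for Re = 300 in Table 6.
print(drag_coefficient(f_drag=0.2537))
```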
\begin{table}
\begin{tabular}{|l|l l l l|} \hline & \(Re=100\) & \(Re=200\) & \(Re=300\) & \(Re=350\) \\ \hline Mittal _et al._[49] & 1.08 & - & 0.67 & 0.62 \\ Marella _et al._[44] & 1.06 & - & 0.621 & - \\ Le Clair _et al._[39] & 1.096 & 0.772 & 0.632 & - \\ Johnson and Patel [34] & 1.1 & 0.8 & 0.656 & - \\ Guittet _et al._[28] & 1.112 & 0.784 & 0.659 & 0.627 \\ Egan _et al._[20] & 1.11 & - & 0.673 & 0.633 \\ Present & 1.058 & 0.768 & 0.646 & 0.601 \\ \hline \end{tabular}
\end{table}
Table 6: Drag coefficients for the three-dimensional flow past sphere problem.
Figure 17: Drag and lift computations for the 2-dimensional flow past disk problem with \(Re=100\) for CFL numbers of 1, 2, 5, and 10, with the improved departure point reconstruction.
Figure 18: Visualization of the vorticity for CFL = 10. The left is the original simulation from Figure 16, and the right is the improved departure point reconstruction.
### Flow past Beginnings
We conclude this section by simulating a high Reynolds number flow past the UC Merced Beginnings sculpture. The model of the sculpture was placed in the domain of \(\Omega=[-12.2\,\mathrm{m},61.0\,\mathrm{m}]\times[-24.4\,\mathrm{m},24.4\,\mathrm{ m}]\times[0.0\,\mathrm{m},36.8\,\mathrm{m}]\), with minimum and maximum octree refinement levels of 1 and 11, respectively. The Beginnings sculpture is approximately \(12.2\,\mathrm{m}\) (\(40\,\mathrm{ft}\)) in height and we center the sculpture at the point, \((x,y,z)=(0,0,0)\). We set the free-stream velocity on the boundary of the domain to \(u_{0}=21.5\,\mathrm{m}\,\mathrm{s}^{-1}\) (\(48\,\mathrm{mph}\)), which represents a windy day at UC Merced. The density of the fluid in the simulation is set to \(\rho=1.3\,\mathrm{kg}\,\mathrm{m}^{-3}\) and the viscosity to \(\mu=1.8\times 10^{-5}\,\mathrm{kg}\,\mathrm{m}^{-1}\,\mathrm{s}^{-1}\), which corresponds to a Reynolds number of \(Re=1.87\times 10^{7}\). The velocity at the bottom wall of the domain, \(z_{min}\), is set to 0 and we set the outflow wall of the domain, \(x_{max}\), to a pressure value of 0. A schematic of the problem setup is shown in Figure 20.
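For reference, the quoted Reynolds number follows directly from the stated fluid properties and free-stream velocity; the short sketch below reproduces the calculation, assuming the sculpture height is taken as the characteristic length (the exact figure depends on that choice).

```python
rho = 1.3      # fluid density, kg / m^3
mu = 1.8e-5    # dynamic viscosity, kg / (m s)
u0 = 21.5      # free-stream velocity, m / s
L = 12.2       # characteristic length, m (sculpture height, an assumption)

Re = rho * u0 * L / mu
print(f"Re = {Re:.3g}")   # ~1.9e7, the same order as the quoted Re = 1.87e7
```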
This example demonstrates the ability of our solver to simulate a high Reynolds number flow around an irregular geometry representing a real-world object, akin to a real-world science and engineering problem. Using a 40-core machine, our nodal projection method takes approximately 560 seconds per time iteration with approximately 3 325 000 nodes using the default values for \(K_{max}\) and error thresholds. Figure 21, taken at \(t=3\,032\) seconds, demonstrates that our approach can resolve a realistic flow field around the Beginnings statue, including the boundary layer, while maintaining stability and computational efficiency.
Figure 19: Visualization of flow past a sphere for \(Re=350\) at several times. The vorticity and grid are shown.
## 6 Conclusions
We presented a novel projection method for the simulation of incompressible flows in arbitrary domains using quad/octrees, where all of the variables are collocated at the grid nodes. By design, our collocated projection operator is an iterative procedure. If it exists, the limit of this iterated procedure is the canonical projection on the incompressible space (Section 3.1). Leveraging the connection between the collocated and MAC layouts, we found a technical sufficient condition for this limit to exist on periodic grids. We then verified the stability and convergence of our projection operator numerically in the presence of various
Figure 21: Visualization of the flow past the UC Merced Beginnings sculpture for \(Re=1.87\times 10^{7}\). A slice of the grid is shown.
Figure 20: Problem schematic for the flow past Beginnings example with a photograph of the statue at UC Merced shown on the left.
boundary and interface conditions. Next, we validated our nodal Navier-Stokes solver using canonical two- and three-dimensional examples, showing that our collocated solver achieves high-order convergence (Section 4.2), replicates experimental data (Section 5.2), and can accurately reproduce standard computational results with CFL numbers far greater than 1 (Section 5.3). Finally, we showed that our solver is well suited to studying real-world science and engineering problems by simulating high Reynolds number flows past arbitrary geometries (Section 5.4). Our nodal projection method has proven to be a competitive computational fluid dynamics tool for studying complex fluid flows.
However, we see room for further exploration and improvement at both theoretical and computational levels. Although we derived a technical sufficient condition for the stability of our projection operator, we could not formally prove its stability for our specific ghost node construction and left the boundary effects out of our analysis. Therefore, we relied on an in-depth computational verification process. A more detailed theoretical study could provide information to optimize the construction of collocated operators. Furthermore, our implementation was parallelized using a shared-memory framework. To further extend the capabilities of our nodal solver, we aim to expand it to a distributed memory framework, as this significant enhancement will provide stronger scalability and enable us to investigate a broader range of applications.
To conclude, we emphasize that our nodal projection method uses a fully collocated data layout, drastically reducing implementation costs. The results presented herein demonstrate that our method is a suitable foundation to study a wide range of fluid flows, and we believe that this work can be readily extended to solve more complex phenomena, such as free-surface and multi-phase flows. We foresee our nodal projection method leading to the development of a new class of collocated computational tools capable of accelerating scientific discoveries by lowering accessibility barriers.
## 7 Acknowledgments
The authors thank Brandon Stark for collecting and processing the data to generate a three-dimensional model of the UC Merced Beginnings statue. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1840265.
## Computational Environment
Our solver was implemented in C++ 20, and all computations were done in parallel on CPUs using OpenMP [17; 52] libraries. Three-dimensional visualizations were generated using ParaView [2] (version 5.11.0). The imagery of the Beginnings sculpture was collected using a Skydio 2+ drone and processed with DroneDeploy using default settings to generate a three-dimensional model.
## Credit author statement
**Matthew Blomquist**: Formal analysis, Investigation, Software, Validation, Visualization, Writing - Original Draft, Writing - Review & Editing. **Scott R. West**: Formal analysis, Investigation, Software, Validation, Visualization, Writing - Original Draft, Writing - Review & Editing. **Adam L. Binswanger**: Formal analysis, Investigation, Software, Validation, Writing - Original Draft, Writing - Review & Editing. **Maxime Theillard**: Conceptualization, Supervision, Formal Analysis, Methodology, Software, Project administration, Writing - Original Draft, Writing - Review & Editing.
|
2303.12217 | Image Reconstruction without Explicit Priors | We consider solving ill-posed imaging inverse problems without access to an
explicit image prior or ground-truth examples. An overarching challenge in
inverse problems is that there are many undesired images that fit to the
observed measurements, thus requiring image priors to constrain the space of
possible solutions to more plausible reconstructions. However, in many
applications it is difficult or potentially impossible to obtain ground-truth
images to learn an image prior. Thus, inaccurate priors are often used, which
inevitably result in biased solutions. Rather than solving an inverse problem
using priors that encode the explicit structure of any one image, we propose to
solve a set of inverse problems jointly by incorporating prior constraints on
the collective structure of the underlying images.The key assumption of our
work is that the ground-truth images we aim to reconstruct share common,
low-dimensional structure. We show that such a set of inverse problems can be
solved simultaneously by learning a shared image generator with a
low-dimensional latent space. The parameters of the generator and latent
embedding are learned by maximizing a proxy for the Evidence Lower Bound
(ELBO). Once learned, the generator and latent embeddings can be combined to
provide reconstructions for each inverse problem. The framework we propose can
handle general forward model corruptions, and we show that measurements derived
from only a few ground-truth images (O(10)) are sufficient for image
reconstruction without explicit priors. | Angela F. Gao, Oscar Leong, He Sun, Katherine L. Bouman | 2023-03-21T22:35:04Z | http://arxiv.org/abs/2303.12217v1 | # Image reconstruction without explicit priors
###### Abstract
We consider solving ill-posed imaging inverse problems without access to an explicit image prior or ground-truth examples. An overarching challenge in inverse problems is that there are many undesired images that fit to the observed measurements, thus requiring image priors to constrain the space of possible solutions to more plausible reconstructions. However, in many applications it is difficult or potentially impossible to obtain ground-truth images to learn an image prior. Thus, inaccurate priors are often used, which inevitably result in biased solutions. Rather than solving an inverse problem using priors that encode the explicit structure of any one image, we propose to solve a set of inverse problems jointly by incorporating prior constraints on the _collective structure_ of the underlying images. The key assumption of our work is that the ground-truth images we aim to reconstruct share common, low-dimensional structure. We show that such a set of inverse problems can be solved simultaneously by learning a shared image generator with a low-dimensional latent space. The parameters of the generator and latent embedding are learned by maximizing a proxy for the Evidence Lower Bound (ELBO). Once learned, the generator and latent embeddings can be combined to provide reconstructions for each inverse problem. The framework we propose can handle general forward model corruptions, and we show that measurements derived from only a few ground-truth images (\(O(10)\)) are sufficient for image reconstruction without explicit priors.
Angela F. Gao\({}^{1*}\), Oscar Leong\({}^{1*}\), He Sun\({}^{2}\), Katherine L. Bouman\({}^{1}\)\({}^{1}\)Computing and Mathematical Sciences, California Institute of Technology, \({}^{2}\)Peking University
\({}^{*}\) denotes equal contribution
Inverse problems, computational imaging, prior models, generative networks, Bayesian inference
## 1 Introduction
In imaging inverse problems, the goal is to recover the ground-truth image from corrupted measurements, where the measurements and image are related via a forward model: \(y=f(x)+\eta\). Here, \(y\) are our measurements, \(x\) is the ground-truth image, \(f\) is a known forward model, and \(\eta\) is noise. Such problems are ubiquitous and include denoising [1], super-resolution [2], compressed sensing [3, 4], and phase retrieval [5]. These problems are often ill-posed: there are many images that are consistent with the observed measurements. Thus, traditionally structural assumptions on the image are required to reduce the space of possible solutions. We encode these structural assumptions in an _image generation model (IGM)_, which could take the form of either an implicit or explicit image prior.
In order to define an IGM it is necessary to have knowledge of the structure of the underlying image distribution. If ground-truth images are available, then an IGM can be learned directly [6, 7]. However, this approach requires access to an abundance of clean data and there are many scientific applications (e.g., geophysical imaging and astronomical imaging) where we do not have access to ground-truth images. Collecting ground-truth images in these domains can be extremely invasive, time-consuming, expensive, or even impossible. For instance, how should we define an IGM for black hole imaging without having ever seen a direct image of a black hole or knowing what one should look like? Moreover, classical approaches that utilize hand-crafted IGMs [8] are prone to human bias [9].
In this work, we show how one can solve a set of ill-posed image reconstruction tasks without access to an explicit image prior. The key insight of our work is that knowledge of common structure across multiple problems is sufficient regularization alone. In particular, we suppose we have access to a collection of measurements \(\{y^{(i)}\}_{i=1}^{N}\) which are observed through a forward model \(y^{(i)}:=f(x^{(i)})+\eta^{(i)}\). An important assumption we make is that the ground-truth images \(\{x^{(i)}\}_{i=1}^{N}\) are drawn from the same distribution (unknown _a priori_) and share common, low-dimensional structure. This assumption is satisfied in a number of applications where ground-truth images are not available. For instance, although we might not know what a black hole looks like, we might expect it to be similar in appearance over time. Our main result is that one can exploit this common structure by jointly inferring 1) a single generator \(G_{\theta}\) and 2) \(N\) low-dimensional latent distributions \(q_{\phi^{(i)}}\), such that the distribution induced by the push-forward of \(q_{\phi^{(i)}}\) through \(G_{\theta}\) approximates the posterior \(p(x|y^{(i)})\) for each example \(i\in[N]\).
## 2 Approach
In this work, we propose to solve a set of inverse problems without access to an IGM by assuming that the set of ground-truth images have common, low-dimensional structure. Other works have considered solving linear inverse problems without an IGM [10, 11], but assume that one has access to multiple independent measurements of a single, static ground-truth image, limiting their applicability to many real-world problems. In contrast, in our work we do not require multiple measurements of the same ground-truth image.
### Motivation for ELBO as a model selection criterion
In order to accurately learn an IGM, we motivate the use of the Evidence Lower Bound (ELBO) as a loss by showing that it provides a good criterion for selecting a prior model. Suppose we are given noisy measurements from a single image: \(y=f(x)+\eta\). In order to reconstruct the image \(x\), we traditionally first require an IGM \(G\) that captures the distribution \(x\) was sampled from. A natural approach would be to find or select the model \(G\) that maximizes the model posterior distribution \(p(G|y)\propto p(y|G)p(G)\). That is, conditioned on the noisy measurements, find the model of highest likelihood. Unfortunately computing \(p(y|G)\) is intractable, but we show that it can be well approximated using the ELBO.
To motivate our discussion, consider estimating the conditional posterior \(p(x|y,G)\) by learning the parameters \(\phi\) of a variational distribution \(q_{\phi}(x)\). Observe that the definition of the KL-divergence followed by using Bayes' theorem gives
\[D_{\mathrm{KL}}(q_{\phi}(x)\parallel p(x|y,G))=\log p(y|G)-\mathbb{E}_{x\sim q_{\phi}(x)}[\log p(y|x,G)+\log p(x|G)-\log q_{\phi}(x)].\]
The ELBO of an IGM \(G\) given measurements \(y\) is defined by
\[\mathrm{ELBO}(G,q_{\phi};y):=\mathbb{E}_{x\sim q_{\phi}(x)}[\log p(y|x,G)+\log p(x|G)-\log q_{\phi}(x)]. \tag{1}\]
Rearranging the above equation and using the non-negativity of the KL-divergence, we see that we can lower bound the model posterior as
\[\log p(G|y)\geqslant\mathrm{ELBO}(G,q_{\phi};y)+\log p(G)-\log p(y). \tag{2}\]
Note that \(-\log p(y)\) is independent of the parameters of interest, \(\phi\). If the variational distribution \(q_{\phi}(x)\) is a good approximation to the posterior \(p(x|y,G)\), \(D_{\mathrm{KL}}\approx 0\) so maximizing \(\log p(G|y)\) with respect to \(G\) is approximately equivalent to maximizing \(\mathrm{ELBO}(G,q_{\phi};y)+\log p(G)\).
Each term in the ELBO objective encourages certain properties of the IGM \(G\). In particular, the first term, \(\mathbb{E}_{x\sim q_{\phi}(x)}[\log p(y|x,G)]\), requires that \(G\) should lead to an estimate that is consistent with our measurements \(y\). The second term, \(\mathbb{E}_{x\sim q_{\phi}(x)}[\log p(x|G)]\), encourages images sampled from \(q_{\phi}(x)\) to have high likelihood under our model \(G\). The final term is the entropy term, \(\mathbb{E}_{x\sim q_{\phi}(x)}[-\log q_{\phi}(x)]\), which encourages a \(G\) that leads to "fatter" minima that are less sensitive to small changes in likely images \(x\) under \(G\).
### ELBOProxy
Some IGMs are explicit (e.g., a Gaussian image prior), which allows for direct computation of \(\log p(x|G)\). In this case, we can optimize the ELBO defined in Equation (1) directly and then perform model selection. However, an important class of IGMs that we are interested in is that given by deep generative networks. Such IGMs are not probabilistic in the usual Bayesian interpretation of a prior, but instead implicitly enforce structure in the data. Moreover, terms such as \(\log p(x|G)\) can only be computed directly if we have an injective map [12]. This architectural requirement limits the expressivity of the network. Hence, we instead consider a proxy of the ELBO that is especially helpful for deep generative networks. Suppose our image generation model is of the form \(x=G(z)\) where \(G\) is a generative network and \(z\) is a latent vector. Introducing a variational family for our latent representations \(z\sim q_{\phi}(z)\) and using \(\log p(z|G)\) in place of \(\log p(x|G)\), we arrive at the following proxy of the ELBO:
\[\mathrm{ELBOProxy}(G,q_{\phi};y):=\mathbb{E}_{z\sim q_{\phi}(z)}[\log p(y|G(z))+\log p(z|G)-\log q_{\phi}(z)]. \tag{3}\]
When \(G\) is injective and \(q_{\phi}(x)\) is the push-forward of \(q_{\phi}(z)\) through \(G\), this proxy is exactly the ELBO in Eq. (1). While \(G\) may not necessarily be injective, we show empirically that the ELBOProxy is a useful criterion for selecting such models.
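As an illustration of how Eq. (3) can be evaluated in practice, the sketch below estimates the ELBOProxy by Monte Carlo for a Gaussian measurement model, a standard-normal latent prior, and a diagonal-Gaussian variational distribution; the generator `G`, the forward operator, and the noise level are placeholders, and in the toy example below we actually parameterize \(q_{\phi}\) with a Normalizing Flow rather than this simpler Gaussian.

```python
import torch

def elbo_proxy(G, y, forward, q_mu, q_log_sigma, sigma_noise, n_samples=32):
    """Monte Carlo estimate of Eq. (3) with q_phi(z) = N(mu, diag(sigma^2))
    and latent prior p(z|G) = N(0, I)."""
    q = torch.distributions.Normal(q_mu, q_log_sigma.exp())
    prior = torch.distributions.Normal(torch.zeros_like(q_mu),
                                       torch.ones_like(q_mu))
    z = q.rsample((n_samples,))                 # reparameterised latent samples
    residual = y - forward(G(z))                # measurement residuals
    # Gaussian log-likelihood, up to an additive constant independent of G, phi.
    log_lik = -0.5 * residual.flatten(1).pow(2).sum(dim=1) / sigma_noise ** 2
    log_prior = prior.log_prob(z).sum(dim=1)
    log_q = q.log_prob(z).sum(dim=1)
    return (log_lik + log_prior - log_q).mean()
```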
**Toy example:** To illustrate the use of the \(\mathrm{ELBOProxy}\) as a model selection criterion, we conduct the following experiment that asks whether the \(\mathrm{ELBOProxy}\) can identify the best model from a given set of plausible IGMs. For this experiment, we use the MNIST dataset [13] and consider two inverse problems: denoising and phase retrieval. We train a generative model \(G_{c}\) on each class \(c\in\{0,1,2,\dots,9\}\). Hence, \(G_{c}\) is learned to generate images from class \(c\) via \(G_{c}(z)\) where \(z\sim\mathcal{N}(0,I)\). Then, given noisy measurements \(y_{c}\) of a single image from class \(c\), we ask whether the generative model \(G_{c}\) from the appropriate class would achieve the best \(\mathrm{ELBOProxy}\). For denoising, our measurements are \(y_{c}=x_{c}+\eta_{c}\) where \(\eta_{c}\sim\mathcal{N}(0,\sigma^{2}I)\) and \(\sigma=\sqrt{0.5}\). For phase retrieval, \(y_{c}=|\mathcal{F}(x_{c})|+\eta_{c}\) where \(\mathcal{F}\) is the Fourier transform and \(\eta_{c}\sim\mathcal{N}(0,\sigma^{2}I)\) with \(\sigma=\sqrt{0.05}\).
We construct \(5\times 10\) arrays for each problem, where in the \(i\)-th row and \(j\)-th column, we compute the \(-\mathrm{ELBOProxy}\) obtained by using model \(G_{j-1}\) to reconstruct images from class \(i-1\). We calculate \(\mathrm{ELBOProxy}(G_{c},q_{\phi_{c}};y_{c})\) by parameterizing \(q_{\phi_{c}}\) with a Normalizing Flow and optimizing network weights \(\phi_{c}\) to maximize (3). Results are shown in Fig. 2. We note that all of the correct models are chosen
Figure 1: We use a set of \(N\) measurements \(\{y^{(i)}\}_{i=1}^{N}\) from \(N\) different ground-truth images to infer \(\{\phi^{(i)}\}_{i=1}^{N}\), the parameters of the latent distributions, and \(\theta\), the parameters of the generative network \(G\). All inferred parameters are colored as blue and the loss is given by Equation (4). Here, \(G_{\sharp}^{*}P\) denotes the push-forward of distribution \(P\) induced by \(G\).
in both denoising and phase retrieval. We also note some interesting cases where the \(\mathrm{ELBOProxy}\) values are similar, such as when recovering the \(3\) or \(4\) image in the denoising task. For example, when denoising the \(4\) image, both \(G_{4}\) and \(G_{9}\) achieve comparable \(\mathrm{ELBOProxy}\) values. By carefully inspecting the noisy image of the \(4\), one can see that both models are reasonable given the structure of the noise.
### Learning the IGM to solve inverse problems
As the previous section illustrates, the \(\mathrm{ELBOProxy}\) provides a good criterion for choosing an appropriate IGM from noisy measurements. Here, we consider the task of learning the IGM \(G\) directly from corrupted data. We consider the setting where we have access to a collection of \(N\) measurements \(y^{(i)}=f(x^{(i)})+\eta^{(i)}\) for \(i\in[N]\). The key assumption we make is that common, low-dimensional structure is shared across the underlying images \(\{x^{(i)}\}_{i=1}^{N}\).
We propose to find a _shared_ generator \(G_{\theta}\) with weights \(\theta\) along with latent distributions \(q_{\phi^{(i)}}\) that can be used to reconstruct the full posterior of each image \(x^{(i)}\) from its corresponding noisy measurements \(y^{(i)}\). This approach is illustrated in Fig. 1. Having the generator be shared across all images helps capture their common collective structure. Each corruption, however, could induce its own complicated image posteriors. Hence, we assign each measurement \(y^{(i)}\) its own latent distribution to capture the differences in their posteriors. While the learned distribution may not necessarily be the true image posterior (as we are optimizing a proxy of the ELBO), it still captures a distribution of images that fit to the observed measurements.
More explicitly, given a set of measurements \(\{y^{(i)}\}_{i=1}^{N}\), we optimize the \(\mathrm{ELBOProxy}\) from Equation (3) to jointly infer a generator \(G_{\theta}\) and variational distributions \(\{q_{\phi^{(i)}}\}_{i=1}^{N}\):
\[\max_{\theta,\{\phi^{(i)}\}_{i=1}^{N}}\frac{1}{N}\sum_{i=1}^{N}\mathrm{ELBOProxy}(G_{\theta},q_{\phi^{(i)}};y^{(i)})+\log p(G_{\theta}). \tag{4}\]
The expectation in this objective is approximated via Monte Carlo sampling. In terms of choices for \(\log p(G_{\theta})\), we can add additional regularization to promote particular structures, such as smoothness. Here, we consider having sparse neural network weights as a form of implicit regularization and use dropout during training to represent \(\log p(G_{\theta})\)[14].
Once a generator \(G_{\theta}\) and variational parameters \(\{\phi^{(i)}\}_{i=1}^{N}\) have been learned, we solve the \(i\)-th inverse problem by simply sampling \(\hat{x}^{(i)}=G_{\theta}(\hat{z}^{(i)})\) where \(\hat{z}^{(i)}\sim q_{\phi^{(i)}}\). Producing multiple samples for each inverse problem can help visualize the range of uncertainty under the learned IGM \(G_{\theta}\), while taking the average of these samples empirically provides clearer estimates with better metrics in terms of PSNR or MSE. We report PSNR outputs in our subsequent experiments.
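A minimal sketch of this procedure, reusing the `elbo_proxy` estimate sketched above with diagonal-Gaussian latent distributions, is given below: the shared generator and the per-measurement variational parameters are optimised jointly against the objective of Eq. (4), and each reconstruction is then obtained by decoding and averaging samples from the corresponding latent distribution. The optimiser, step count, and learning rate are illustrative placeholders; the \(\log p(G_{\theta})\) term is realised implicitly through dropout layers inside the generator, as described above.

```python
import torch

def fit_igm(G, q_params, measurements, forward, sigma_noise,
            n_steps=5000, lr=1e-3):
    """Jointly optimise the shared generator G (dropout active) and the
    per-measurement variational parameters (mu, log_sigma), all requiring grad."""
    variational = [p for pair in q_params for p in pair]
    opt = torch.optim.Adam(list(G.parameters()) + variational, lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = 0.0
        for (mu, log_sigma), y in zip(q_params, measurements):
            loss = loss - elbo_proxy(G, y, forward, mu, log_sigma, sigma_noise)
        (loss / len(measurements)).backward()
        opt.step()

def reconstruct(G, mu, log_sigma, n_samples=64):
    """Decode samples from a learned latent distribution and average them."""
    z = mu + log_sigma.exp() * torch.randn(n_samples, mu.shape[0])
    with torch.no_grad():
        return G(z).mean(dim=0)
```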
## 3 Experimental Results
We now consider solving a set of inverse problems via the framework described in Section 2.3. For each of these experiments, we use a multivariate Gaussian distribution to parameterize each posterior distribution \(q_{\phi^{(i)}}\), and we use a Deep Decoder [15] with \(6\) layers of size \(150\), a latent size of \(40\), and a dropout of \(10^{-4}\) as the IGM. Each multivariate Gaussian distribution is parameterized by a mean and covariance matrix \(\{\mu^{(i)},L^{(i)}L^{(i)^{T}}+\varepsilon I\}_{i=1}^{N}\), where \(\varepsilon=10^{-3}\) is added to the covariance matrix to help with the stability of the optimization.
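As a concrete illustration of this parameterisation, the sketch below draws reparameterised samples from a latent distribution with covariance \(LL^{T}+\varepsilon I\); the latent size of 40 and \(\varepsilon=10^{-3}\) follow the description above, while the initial values of the learnable parameters are arbitrary placeholders.

```python
import torch

def sample_latents(mu, L, n_samples=16, eps=1e-3):
    """Reparameterised samples from N(mu, L L^T + eps * I)."""
    k = mu.shape[0]
    cov = L @ L.T + eps * torch.eye(k)
    scale_tril = torch.linalg.cholesky(cov)   # differentiable matrix square root
    noise = torch.randn(n_samples, k)
    return mu + noise @ scale_tril.T

mu = torch.zeros(40, requires_grad=True)   # latent mean, size 40
L = torch.eye(40, requires_grad=True)      # learnable factor of the covariance
z = sample_latents(mu, L)                  # shape (16, 40), to be decoded by G
```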
**Denoising:** We show results on denoising noisy images of a single face from the PubFig [16] dataset in Fig. 3. The measurements are defined by \(y=x+\eta\) where \(\eta\sim\mathcal{N}(0,\sigma^{2}I)\) with an SNR of \(\sim\)15 dB. Our method is able to remove much of the noise and recovers small scale features, even with only 95 examples. Our reconstructions also substantially outperform the baseline methods AmbientGAN [17], Deep Image Prior (DIP) [18], and regularized maximum likelihood using total variation (TV-RML), as shown in Fig. 3. Unlike DIP, our method does not seem to overfit and does not require early stopping. Our method also does not exhibit noisy artifacts like those seen in AmbientGAN results.
**Compressed sensing:** We consider a compressed sensing problem inspired by astronomical imaging of black holes with the Event Horizon Telescope (EHT): suppose we are given access to measurements of the form \(y=Ax+\eta,\ \eta\sim\mathcal{N}(0,\sigma^{2}I)\) where \(A\in\mathbb{C}^{p\times n}\) is a low-rank compressed sensing matrix arising from interferometric telescope measurements. This problem is ill-posed and requires the use of priors or regularizers to recover a reasonable image [9]. Moreover, it is impossible to acquire ground-truth images of black holes, so any explicit prior defined _a priori_ will exhibit human bias. However, we can assume that although the black hole evolves, it will not change drastically from day-to-day.
Figure 2: We consider two inverse problems: denoising and phase retrieval. Left: the two leftmost columns correspond to the ground truth image \(x_{c}\) and the noisy measurements \(y_{c}\). Center: in each row, we show the means of the distribution induced by the push-forward of \(G_{j}\) and each latent distribution \(z\sim q_{\phi_{j}}\) for \(j\in\{0,\ldots,9\}\). Right: each row of the array corresponds to the \(-\mathrm{ELBOProxy}\) achieved by each model in reconstructing the images. Here, lower is better. Boxes highlighted in green correspond to the best \(-\mathrm{ELBOProxy}\) values in each row. In all cases, the correct model was chosen.
We show results on 60 frames from a video of a simulated evolving black hole target [19, 20] with an SNR of \(\sim\)32 dB in Fig. 5. The reference "target image" is the ground-truth filtered with a low-pass filter that represents the maximum resolution intrinsic to the EHT array (visualized and explained in Fig. 4). Our method is not only able to reconstruct the large scale features of the ground-truth images without any aliasing artifacts, but also achieves a level of super-resolution.
**Non-Convex Phase Retrieval:** Here we demonstrate our approach on solving non-convex phase retrieval. Our measurements are described by \(y=|\mathcal{F}(x)|+\eta\) where \(\mathcal{F}(\cdot)\) is a complex-valued linear operator (either Fourier or undersampled Gaussian measurements) and \(\eta\sim\mathcal{N}(0,\sigma^{2}I)\).
Fig. 6 shows results from a set of \(N=150\) measurement examples of the MNIST 8's class with an SNR of \(\sim\)52 dB. The true posterior is multi-modal due to the phase ambiguity and non-convexity of the problem. Our reconstructions have features similar to the digit \(8\), but contain artifacts in the Fourier case. These artifacts are only present in the Fourier case due to additional ambiguities (i.e., flipping and spatial-shifts), which lead to a more complex posterior [21].
## 4 Conclusion
In this work, we showcased how one can solve a set of inverse problems without an IGM by leveraging common structure present across the underlying ground-truth images. We demonstrated that even with a small set of corrupted measurements, one can jointly solve these inverse problems by directly learning an IGM that maximizes a proxy of the ELBO. Overall, our work showcases the possibilities of solving inverse problems in a "prior-free" fashion, free from human bias typical of ill-posed image reconstruction. We believe our approach can aid automatic discovery of novel structure from scientific measurements without access to clean data, leading to potentially new avenues for scientific discovery.
**Acknowledgements** This work was sponsored by NSF Award 2048237 and 1935980, an Amazon AI4Science Partnership Discovery Grant, and the Caltech/JPL President's and Director's Research and Development Fund (PDRDF). This research was carried out at the Jet Propulsion Laboratory and Caltech under a contract with the National Aeronautics and Space Administration and funded through the PDRDF.
Figure 4: **Visualization of the intrinsic resolution of the EHT compressed sensing measurements.** The EHT measures sparse spatial frequencies of the image (i.e., components of the image’s Fourier transform). In order to generate the ground truth image (c), all frequencies in the entire domain of (a) must be used. Restricting spatial frequencies to the ones in (a) and (b)’s green circle generates the target (d). The EHT samples a subset of this region, indicated by the sparse black samples in (b). Naively recovering an image using only these frequencies results in the _dirty image_ (e), which is computed by \(A^{H}y\). The 2D spatial Fourier frequency coverage represented with \((u,v)\) positions is referred to as the UV coverage.
Figure 5: **Compressed sensing with a video of a black hole.** We demonstrate our method described in Section 2.3 using 60 images from an evolving black hole target. Here we show the ground-truth, dirty (\(A^{H}y\), see Fig. 4), and mean reconstruction images, respectively. We also show the unwrapped space \(\times\) time image, which is taken along the overlaid white ring illustrated in the T=1 ground-truth image. The bright-spot’s temporal trajectory of our reconstruction matches that of the ground-truth.
Figure 6: **Phase retrieval with MNIST 8’s.** We shown an example of our method applied to the challenging non-convex problems of Fourier and complex Gaussian phase retrieval. |
2302.03033 | Exemplars and Counterexemplars Explanations for Image Classifiers,
Targeting Skin Lesion Labeling | Explainable AI consists in developing mechanisms allowing for an interaction
between decision systems and humans by making the decisions of the formers
understandable. This is particularly important in sensitive contexts like in
the medical domain. We propose a use case study, for skin lesion diagnosis,
illustrating how it is possible to provide the practitioner with explanations
on the decisions of a state of the art deep neural network classifier trained
to characterize skin lesions from examples. Our framework consists of a trained
classifier onto which an explanation module operates. The latter is able to
offer the practitioner exemplars and counterexemplars for the classification
diagnosis thus allowing the physician to interact with the automatic diagnosis
system. The exemplars are generated via an adversarial autoencoder. We
illustrate the behavior of the system on representative examples. | Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, Salvatore Rinzivillo | 2023-01-18T11:14:42Z | http://arxiv.org/abs/2302.03033v1 | # Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
###### Abstract
Explainable AI consists in developing mechanisms allowing for an interaction between decision systems and humans by making the decisions of the former understandable. This is particularly important in sensitive contexts like the medical domain. We propose a use case study, for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations on the decisions of a state-of-the-art deep neural network classifier trained to characterize skin lesions from examples. Our framework consists of a trained classifier onto which an explanation module operates. The latter is able to offer the practitioner exemplars and counterexemplars for the classification diagnosis, thus allowing the physician to interact with the automatic diagnosis system. The exemplars are generated via an adversarial autoencoder. We illustrate the behavior of the system on representative examples.
Image classification, Explainable AI, Machine Learning, Skin Lesion Image Classification, Adversarial Autoencoders
## 1 Introduction
In recent years, AI-based decision support systems have had a huge impact in many domains, in many cases providing high-accuracy predictions, classifications, and recommendations. However, their adoption in high-stake scenarios that involve decisions on humans has raised several ethical concerns about fairness, bias, transparency, and the dependability of decisions taken on the basis of AI suggestions [1]. These concerns are even more relevant in mission-critical domains, like healthcare. Thus, it is necessary to develop AI systems that are able to assist doctors in taking informed decisions, complementing their own knowledge with the information and suggestions yielded by the AI system.
Our proposal starts from the design and development of a classification model for the _ISIC 2019 Challenge_. The objective of the challenge is to invite researchers to develop automated systems that provide accurate classification of skin cancers from dermoscopic images. Our system consists of two main modules: _a)_ a CNN model that classifies each input image into one of eight possible classes; _b)_ an explainer based on exemplar and counter-exemplar synthesis that exploits an adversarial autoencoder (AAE) to produce the images for the explanation. In this paper we introduce the models and their structures. In particular, we have used two separate learning processes: _a)_ a learning phase for the CNN, starting from the ResNet50 architecture; and _b)_ a training phase for an Adversarial AutoEncoder (AAE). Since the images produced by the AAE are intended to be used as exemplars, it is crucial to have a wide catalog of neighborhood instances. For this reason, we developed a progressive growing AAE to maximize the diversification of the generated images. The two training processes are intentionally kept separate since two different loss functions are used (the first for accuracy, the second for discrimination). We also want to demonstrate that the explainer may be effective even if the original training set were not available. In the paper we show that accurately designing the AAE is crucial for obtaining an explanation based on realistic exemplars and counter-exemplars, especially in a specialist domain such as healthcare.
The rest of the paper is organized as follows. In Section 2 we list recent contributions in the field of explainability for image classification. Section 3 recalls basic notions related with the approach exploited in this paper. In Section 4 we illustrate in detail the settings of the case study addressed, while Section 5 shows the results. Finally, Section 6 summarizes the contribution and proposes future research directions.
## 2 Related Work
Image classification is widely applied in healthcare for various purposes ranging from heart disease diagnosis to skin cancer detection [2, 3, 4]. In order to achieve high performance in these tasks, AI systems relying on machine learning models are increasingly used. Unfortunately, these models are typically _black boxes_ hiding the rationale of their behavior. For this reason, research on _black box explanation_ has recently received much attention [5, 6, 7]. This interest is driven by the idea of equipping AI systems with explanation methods so that high performance and interpretability can coexist. Explainability is practically useful in socially sensitive contexts like the medical one [8].
In image classification, typical explanations are _saliency maps_, i.e., images that show each pixel's positive (or negative) contribution. At a high level, explanation methods can be categorized as _model-specific or model-agnostic_, depending on whether the explanation method exploits knowledge of the internal structure of the black box or not; _global or local_, depending on whether the explanation is provided for the black box as a whole or for any specific instance. The explainer abele adopted in this paper is a _local model-agnostic_ method.
With respect to the literature, lime[9] and shap[10] are two of the most well-known data-agnostic local explanation methods. lime randomly generates a local neighborhood "around" the instance to explain, labels the generated instances using the black box under analysis, and returns an explanation using a linear regressor as surrogate model. On the other hand, shap adopts game theory and exploits the Shapley values of a conditional expectation function of the black box, providing for each feature its unique additive importance. Besides being model-agnostic, lime and shap are also theoretically not tied to a specific type of data. Indeed, they can be applied to explain image classifiers and return explanations in the form of saliency maps by turning feature importance into pixel importance. They achieve this objective by using "superpixels", i.e., areas of the image under analysis with similar colors. Unfortunately, the usage of superpixels requires a segmentation procedure that affects the explanation. Moreover, the neighborhoods considered when investigating the black box behavior are no longer plausible instances but simply the image under analysis with some pixels "obscured" [11]. The fact that the explanation procedure relies on implausible images is generally undesirable in medical applications. abele overcomes these issues by relying on a realistic procedure for generating images similar to the one under analysis, and it does not require any a priori segmentation [12].
Other explanation methods widely used to build saliency maps are model-specific approaches such as IntGrad [13], GradInput [14], and \(\varepsilon\)-LRP [15]. In brief, to retrieve the saliency map, they redistribute the prediction backwards using local rules until a relevance score is assigned to each pixel. These approaches are designed for deep neural networks and cannot be employed for explaining image classifiers whose model type is unknown. On the other hand, being model-agnostic, abele overcomes this limitation and also allows playing with the components of the explanation, while these model-specific approaches are more limited.
## 3 Setting The Stage
### Adversarial Autoencoders
An important issue arising when synthetic instances are used to develop black box explanations is ensuring that the distribution of the generated examples matches the prior distribution of the original examples. In [12] this issue is addressed by using an Adversarial Autoencoder (AAE) [16], which combines a Generative Adversarial Network (GAN) [17] with the autoencoder representation learning algorithm.
AAEs are probabilistic autoencoders that aim at generating new random items that are highly similar to the training data. They are regularized by matching the aggregated posterior distribution of the latent representation of the input data to an arbitrary prior distribution. The AAE architecture (Figure 1-left) includes an \(\textit{encoder}:\mathbb{R}^{n}{\rightarrow}\mathbb{R}^{k}\), a _decoder_ : \(\mathbb{R}^{k}{\rightarrow}\mathbb{R}^{n}\) and a _discriminator_ : \(\mathbb{R}^{k}{\rightarrow}[0,1]\) where \(n\) is the number of pixels in an image and \(k\) is the number
Figure 1: _Left_: AAE architecture. _Right_: Discriminator and Decoder module.
of latent features. Let \(x\) be an instance of the training data; we name \(z\) the corresponding latent representation obtained by the _encoder_. We can describe the AAE with the following distributions [16]: the prior distribution \(p(z)\) to be imposed on \(z\), the data distribution \(p_{d}(x)\), the model distribution \(p(x)\), and the encoding and decoding distributions \(q(z|x)\) and \(p(x|z)\), respectively. The encoding function \(q(z|x)\) defines an aggregated posterior distribution \(q(z)\) on the latent feature space: \(q(z)=\int_{x}q(z|x)p_{d}(x)\,dx\). The AAE guarantees that the aggregated posterior distribution \(q(z)\) matches the prior distribution \(p(z)\) through the latent instances, while minimizing the reconstruction error. The AAE generator corresponds to the encoder \(q(z|x)\) and ensures that the aggregated posterior distribution can confuse the _discriminator_ when it decides whether a latent instance drawn from \(q(z)\) comes from the true prior \(p(z)\).
The AAE learning involves two phases: the _reconstruction_ aimed at training the _encoder_ and _decoder_ to minimize the reconstruction loss; the _regularization_ aimed at training the _discriminator_ using training data and encoded values. After the learning, the decoder defines a generative model mapping the prior distribution \(p(z)\) to distribution \(p_{d}(x)\).
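The sketch below illustrates a single training iteration with the two phases just described, written with generic PyTorch modules; the pixel-wise reconstruction loss, the choice of a standard-normal prior, and the three separate optimisers are simplifying assumptions rather than the exact configuration used later in this paper.

```python
import torch
import torch.nn.functional as F

def aae_step(encoder, decoder, discriminator, x, opt_ae, opt_disc, opt_gen):
    # Reconstruction phase: update encoder and decoder on the pixel-wise loss.
    rec_loss = F.mse_loss(decoder(encoder(x)), x)
    opt_ae.zero_grad(); rec_loss.backward(); opt_ae.step()

    # Regularization phase (a): the discriminator learns to separate prior
    # samples (label 1) from encoded latent codes (label 0).
    z_fake = encoder(x).detach()
    z_real = torch.randn_like(z_fake)                  # prior p(z) = N(0, I)
    logits = discriminator(torch.cat([z_real, z_fake]))
    labels = torch.cat([torch.ones(len(z_real), 1), torch.zeros(len(z_fake), 1)])
    disc_loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()

    # Regularization phase (b): the encoder (generator) is updated so that its
    # codes are classified by the discriminator as coming from the prior.
    gen_loss = F.binary_cross_entropy_with_logits(
        discriminator(encoder(x)), torch.ones(len(x), 1))
    opt_gen.zero_grad(); gen_loss.backward(); opt_gen.step()
    return rec_loss.item(), disc_loss.item(), gen_loss.item()
```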
### Abele
abele (Adversarial Black box Explainer generating Latent Exemplars) is a local model agnostic explainer for image classifiers [12]. Given an image \(x\) to explain and a black box \(b\), the explanation provided by abele is composed of _(i)_ a set of _exemplars_ and _counter-exemplars_, _(ii)_ a _saliency map_. Exemplars and counter-exemplars show instances classified with the same outcome as \(x\), and with an outcome other than \(x\), respectively. They can be visually analyzed to understand the reasons for the decision. The saliency map highlights the areas of \(x\) that contribute to its classification and areas that push it into another class. The explanation process is as follows. First, abele generates a neighborhood in the latent feature space exploiting an Adversarial Autoencoder (AAE) [16]. Then, it learns a decision tree on that latent neighborhood providing local decision and counterfactual rules [18]. Finally, abele selects and decodes exemplars and counter-exemplars satisfying these rules and extracts from them a saliency map.
Figure 2: Latent Local Rules Extractor (_lore_) module.
#### 3.2.1 Encoding
The image \(x{\in}\mathbb{R}^{n}\) to be explained is passed as input to the AAE where the _encoder_ returns the latent representation \(z\in\mathbb{R}^{k}\) using \(k\) latent features with \(k\ll n\).
#### 3.2.2 Neighborhood Generation
abele generates a set \(H\) of \(N\) instances in the latent feature space, with characteristics close to those of \(z\). Since the goal is to learn a predictor on \(H\) able to simulate the local behavior of \(b\), the neighborhood includes instances with both decisions, i.e., \(H=H_{=}\cup H_{\neq}\) where instances \(h\in H_{=}\) are such that \(b(\widetilde{h})=b(x)\), and \(h\in H_{\neq}\) are such that \(b(\widetilde{h})\neq b(x)\). We name \(\widetilde{h}\in\mathbb{R}^{n}\) the decoded version of an instance \(h\in\mathbb{R}^{k}\) in the latent feature space. The neighborhood generation of \(H\) (_neighbor_ module in Fig. 2) may be accomplished using different strategies, ranging from a pure random strategy using a given distribution to a genetic approach maximizing a fitness function [18]. In our experiments we adopt the latter strategy. After the generation process, for any instance \(h\in H\), abele exploits the _disde_ module (Fig. 1-right) for both checking the validity of \(h\) by querying the _discriminator_ and decoding it into \(\widetilde{h}\). Then, it queries the black box \(b\) with \(\widetilde{h}\) to get the class \(y\), i.e., \(b(\widetilde{h})=y\).
#### 3.2.3 Local Classifier Rule Extraction
Given the local neighborhood \(H\), abele builds a decision tree classifier \(c\) trained on \(H\) labeled with \(b(\widetilde{H})\). The surrogate tree is intended to locally mimic the behavior of \(b\) in the neighborhood \(H\). It extracts the decision rule \(r\) and counterfactual rules \(\Phi\) enabling the generation of _exemplars_ and _counter-exemplars_. Fig. 2 shows the process that, starting from the image to be explained, leads to the decision tree learning, and to the extraction of the decision and counterfactual rules. We name this module llore, as a variant of lore[18].
#### 3.2.4 Explanation Extraction
Often, e.g. in medical or managerial decision making, people explain their decisions by pointing to exemplars with the same (or different) decision outcome. We follow this approach and we model the explanation of an image \(x\) returned by abele as a triple \(e=\langle\widetilde{H}_{e},\widetilde{H}_{c},s\rangle\) composed by _exemplars_\(\widetilde{H}_{e}\), _counter-exemplars_
Figure 3: _Left_: Exemplar Generator (_eg_) module. _Right_: abele architecture.
\(\widetilde{H}_{c}\) and a _saliency map_ \(s\). Exemplars and counter-exemplars are images representing instances similar to \(x\), leading to an outcome equal to or different from \(b(x)\). Exemplars and counter-exemplars are generated by abele exploiting the _eg_ module (Fig. 3-left). It first generates a set of latent instances \(H\) satisfying the decision rule \(r\) (or a set of counter-factual rules \(\Phi\)), as shown in Fig. 2. Then, it validates and decodes them into exemplars \(\widetilde{H}_{e}\) (or counter-exemplars \(\widetilde{H}_{c}\)) using the _disde_ module. The saliency map \(s\) highlights areas of \(x\) that contribute to its outcome and areas that push it into another class. The map is obtained by the saliency extractor _se_ module (Fig. 3-right) that first computes the pixel-to-pixel difference between \(x\) and each exemplar in the set \(\widetilde{H}_{e}\), and then assigns to each pixel of the saliency map \(s\) the median value of all differences calculated for that pixel. Thus, formally, for each pixel \(i\) of the saliency map \(s\) we have: \(s[i]=\mathit{median}_{\widetilde{h}_{e}\in\widetilde{H}_{e}}(x[i]-\widetilde{h}_{e}[i])\).
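A minimal sketch of this saliency extraction is given below: given the image and a stack of decoded exemplars, the map is simply the pixel-wise median of the differences. Shapes and the random arrays in the usage lines are illustrative.

```python
import numpy as np

def saliency_map(x, exemplars):
    """s[i] = median over the exemplars of (x[i] - exemplar[i]), per pixel.

    x         : array of shape (H, W) or (H, W, C)
    exemplars : array of shape (n_exemplars, H, W) or (n_exemplars, H, W, C)
    """
    diffs = x[None, ...] - exemplars
    return np.median(diffs, axis=0)

x = np.random.rand(224, 224, 3)              # stand-in for the image to explain
exemplars = np.random.rand(10, 224, 224, 3)  # stand-in for decoded exemplars
s = saliency_map(x, exemplars)               # same shape as x
```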
## 4 Case Study
We present here a description of all the components involved in the training process of the black box model used to classify instances of the ISIC dataset in eight different classes, and of the AAE used by abele trained on the same dataset.
### ISIC Dataset, Preprocessing and Classifier
Skin Lesion Analysis Towards Melanoma Detection is a challenge proposed by the International Skin Imaging Collaboration (ISIC), an international effort to improve melanoma diagnosis, sponsored by the International Society for Digital Imaging of the Skin (ISDIS). ISIC has developed an international repository of dermoscopic images, for both the purposes of clinical training, and for supporting technical research toward automated algorithmic analysis. The goal for ISIC 2019 is to classify dermoscopic images among nine different diagnostic categories: MEL (Melanoma), NV (Melanocytic nevus), BCC (Basal cell carcinoma), AK (Actinic keratosis), BKL (Benign keratosis), DF (Dermatofibroma), VASC (Vascular lesion), SCC (Squamous cell carcinoma), UNK (None of the others / out-of-distribution).
Figure 4: Melanocytic nevus preprocessed samples
#### 4.1.1 Dataset
The dataset is composed of a _training set_ of 25,331 JPEG images of skin lesions and their category (labels); a _test set_ of 8,238 unlabelled JPEG images of skin lesions. In our experiments we relied on the training set for which ground truth is directly available. In particular, from the training set, we used 80% for training and 20% for validation. For the evaluation, we use the same evaluation protocols as the submission system.
#### 4.1.2 Data Preprocessing
Since the images have different resolutions, we preprocess them as follows:
* For the training process, the images are randomly rescaled, rotated and cropped to generate the input to the network. Note that such preprocessing does not deform the lesions in the image. Resolution of the preprocessed images is 224\(\times\)224.
* For the validation and test process, each image is firstly rescaled to 256\(\times\)256 according to the shorter edge, then cropped at the center into a 224\(\times\)224 image.
Since the UNK category is a reject option and is not available in the training set, we focused on the other 8 categories.
#### 4.1.3 Image Classifier Architecture
For the classification, we used a classical ResNet pretrained on ImageNet, i.e., a black box classifier. In particular, we replaced the last classification layer with a new one adapted to the number of diagnostic classes. We trained this layer and fine-tuned the rest of the network on the ISIC dataset. The 50-layer ResNet architecture is the following: the network is composed of 18 modules sequentially combined together, including one conv1 module (7\(\times\)7, 64 filters, stride 2), three conv2 modules (1\(\times\)1, 64 filters, stride 1; 3\(\times\)3, 64 filters, stride 1; 1\(\times\)1, 256 filters, stride 1), four conv3 (1\(\times\)1, 128 filters, stride 2; 3\(\times\)3, 128 filters, stride 1; 1\(\times\)1, 512 filters, stride 1), six conv4 (1\(\times\)1, 256 filters, stride 2; 3\(\times\)3, 256 filters, stride 1; 1\(\times\)1, 1024 filters, stride 1), three conv5 (1\(\times\)1, 512 filters, stride 2; 3\(\times\)3, 512 filters, stride 1; 1\(\times\)1, 2048 filters, stride 1) and one last fully connected module fc (average pooling, 9-output fully connected layer, sigmoid activation). The first module conv1 is composed of one single convolution layer. For conv2 to conv5, each module is a residual block including three convolutional layers in the residual branch. The output of such a block is the sum of the input and the output of the convolutional layers. The module fc is the newly trained prediction layer. The spatial size is reduced only at the first layer of the modules conv3, conv4 and conv5. We adopt a binary cross-entropy loss for each class, so that the problem is treated as 8 individual one-vs-rest binary classification problems.
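A minimal PyTorch sketch of this setup is given below: an ImageNet-pretrained ResNet-50 whose last layer is replaced by a 9-output head, trained with a per-class binary cross-entropy loss (the sigmoid is folded into the loss). The optimiser settings are placeholders and the torchvision call reflects the older pretrained-weights API; this is an assumption about the implementation, not a verbatim excerpt of our training code.

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50(pretrained=True)        # ImageNet initialisation
model.fc = nn.Linear(model.fc.in_features, 9)                # new 9-output prediction layer

criterion = nn.BCEWithLogitsLoss()                           # one-vs-rest BCE per class
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)    # placeholder settings

def train_step(images, targets):
    """images: (B, 3, 224, 224) preprocessed crops; targets: (B, 9) one-hot labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), targets.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```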
#### 4.1.4 Evaluation Criteria
The official evaluation criterion for the challenge is the Normalized (or balanced) multi-class accuracy. It is defined as the average of recall obtained in each class.
The best value is 1 and the worst is 0. This metric makes all the classes equally important. In terms of balanced multi-class accuracy (mean value of the per-class recalls), the trained model achieved 0.838 on the validation set. We also report the performance on the official test set: the score is 0.488, which is potentially impacted by the presence of out-of-distribution samples (UNK), since the model was not tuned for rejection.
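The metric can be computed as in the short sketch below: the recall of each class is evaluated separately and then averaged, so every class carries the same weight regardless of its frequency. This is a generic illustration, not the official challenge scoring code.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, n_classes=8):
    """Mean of the per-class recalls; best value 1, worst 0."""
    recalls = []
    for c in range(n_classes):
        mask = (y_true == c)
        if mask.any():
            recalls.append(float((y_pred[mask] == c).mean()))
    return float(np.mean(recalls))

# Illustrative usage with random labels and predictions.
y_true = np.random.randint(0, 8, size=1000)
y_pred = np.random.randint(0, 8, size=1000)
print(balanced_accuracy(y_true, y_pred))
```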
### Customization of ABELE
We describe here the customization of abele we carried out in order to make it usable for the complex image classification task addressed by the ISIC classifier previously described.
#### 4.2.1 Issues with Generative Models
Generative adversarial models are generally not easy to train, as they are usually affected by a number of common failures. These problems range from a broad spectrum of convergence failures to the well-known mode collapse [19], the tendency of the generator network to produce a small variety of output types. Such problems mainly arise from the competing scheme under which the generator and discriminator are trained: they are trained simultaneously in a zero-sum game, so the goal is to find an equilibrium between the two competing objectives. Since the nature of the optimization problem changes every time the parameters of one of the models are updated, the result is a dynamic system that easily fails to converge.
Furthermore, even the concept of convergence is not as clear as in other contexts. As the generator improves during training, the discriminator performance gets worse because it cannot easily tell the difference between real and fake outputs. In the limit where the generator succeeds perfectly, the discriminator flips a coin to make its prediction. Hence, the discriminator feedback gets less meaningful over time. If the model continues training past the point when the discriminator is giving random feedback, the generator starts to train on low-quality feedback, and its own performance may collapse. For a Generative Adversarial model, convergence is often an unstable transitory region rather than a stable state.
In addition to the previous problems, we often face the further complication of dealing with real-world datasets that are far from ideal: fragmentation, imbalance, lack of uniform digitization, and shortage of data are primary challenges of big data analytics for healthcare. All of them impede the efficiency and accuracy of machine learning models trained with these data, especially in the case of inherently fragile generative models.
Training an AAE in a standard fashion to reproduce samples from the ISIC dataset, without taking special care of all the issues mentioned above, resulted in extremely poor performance, mostly due to persistent mode collapse. In order to overcome such generative failures and dataset limitations, we implemented a collection of cutting-edge techniques that, taken together, address all the issues we mentioned and allow us to successfully train an AAE with adequate performance.
#### 4.2.2 Overcoming Mode Collapse
One usually wants a generative model to produce a large variety of outputs, for example, a different image for every input to our skin lesion generator. However, since the generator is always trying to produce the one output that seems most plausible to the discriminator, it may happen that the generator learns to produce just a single output over and over again, or a small set of outputs. In such a case, the discriminator's best strategy is to learn to always reject that output. But if the current discriminator gets stuck in a poor local minimum, it is not going to find the best strategy, and hence it would be too easy for the generator to find the most credible output for the current discriminator.
As a consequence, at each iteration the generator over-optimizes for the current discriminator, and the discriminator never learns to escape from the trap. Thus, each generation of the generator rotates through a small set of output types, possibly a single one. In adversarial generative models, this form of failure is called mode collapse.
In recent years many techniques have been proposed to overcome this ubiquitous form of failure. Ad hoc tricks like Mini-Batch Discrimination [20] or the Wasserstein Loss [21] have been empirically shown to alleviate mode collapse, while Unrolled GANs [22] or Conditional AAEs [23] intervene directly on the internal structure of the training scheme to discourage over-optimization by the current generator.
The reasons behind mode collapse and other forms of failure remain unclear and not fully understood. In the health domain, such singularities in the training process appear to be even more frequent. Possible reasons include the lack of a substantial amount of training data, the necessity of dealing with high-resolution images, and fragmented and highly unbalanced datasets (malignant cancers are usually a small proportion of the entire batch). Furthermore, the need for a substantial number of latent features makes the parameter space extremely irregular, non-convex, and in need of powerful regularization. In order to overcome failure modes and train an AAE successfully, we used the following collection of techniques.
#### 4.2.3 Progressive Growing Adversarial Autoencoder
Progressive Growing GANs [24] have been introduced as an extension to the GANs training process. It helps to achieve a more stable training of generative models for high resolution images. The main idea is to start with a very low resolution image and step by step adding block of layers that simultaneously increase the output size of the generator model and the input size of the discriminator model until the desired size is achieved.
In a general GAN scheme, the discriminator is linked to the generator model output. In an AAE, however, the discriminator takes the encoded latent space as input instead of the full reconstructed image. In order to retain the benefits of a progressive growing model, we need a novel structure that accounts for this different dynamics. We therefore propose a Progressive Growing Adversarial Autoencoder (PGAAE): starting with a single block of convolutional layers for each of the two generating networks (encoder and decoder), we reconstruct low resolution images (7x7 pixels); we then increase the number of blocks step by step until the network is powerful enough to manage images of the desired size (224x224 pixels). The latent space dimension is kept fixed, so the discriminator always takes input tensors of the same size. Although one could also keep the discriminator network fixed, we found it helpful to progressively increase its width so that, at each step, the discriminator can deal with more structured information. On the contrary, we observed that increasing the depth of the discriminator increases the instability of the training, causing a wide variety of failures ranging from poor performance to catastrophic forgetting [19].
The main idea behind this construction relates to the instability that heavily structured, high-dimensional data causes in the training process. Generating high-resolution images is challenging for generative models, as the generator must learn to output global structure and fine details at the same time. The high resolution makes any discrepancy in the fine details of generated images easy for the discriminator to spot, and the training process degenerates. Large images also require significantly more memory; consequently, the batch size used to update the model weights at each iteration must be reduced to ensure that the large images fit into memory, which introduces further instability into the process.
The incremental addition of layers allows the models to first learn large-scale structure and to progressively shift attention to finer detail. All previously added layers remain trainable throughout the training process. From a different point of view, each block of layers can be seen as the initialization of the common network structure used in the subsequent step of the progressive scheme. Such progressive initialization is a powerful form of regularization, carried out through both the encoder and decoder networks. A substantial amount of regularization is indeed needed to smooth the parameter space and, in turn, to reduce failure modes like mode collapse.
Figure 5: A Progressive Growing AAE network.

The PGAAE network paradigm is shown in Figure 5. First, we train a shallow AAE with just one convolutional block for both the encoder and decoder network. This first AAE is trained to reconstruct skin lesion images resized to 7x7 pixels. Once the network is fully trained and optimized, it sends
its weights to a second AAE with a deeper structure of two convolutional blocks for both the encoder and decoder network. This second AAE is trained with the same dataset resized to 14x14 pixels. At each step, weights are shared just over the common underlying structure and then kept trainable for future training steps, i.e. each AAE is initialized with the previous AAE weights and trained with images doubled in size.
After the sixth step, we have a fully trained AAE able to reproduce skin lesion images of size 224x224 pixels. In order to enhance the discriminator's ability to discriminate between increasingly complex images, the discriminator grows in width at each step: it consists of two dense layers with a progressively growing number of neurons (from 500 to 3000, with a step increase of 500 neurons after each phase) and a LeakyReLU activation with parameter 0.2. All discriminators end with a single-neuron dense layer with a sigmoid activation.
Each convolutional block includes two identical sets of three layers: a conv2d layer with stride 3 and between 16 and 128 filters, followed by batch normalization with parameter 0.95 and a ReLU activation. Depending on whether we consider the encoder or the decoder network, a max pooling or an up-sampling layer is attached at the end of each block.
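As a concrete illustration, the sketch below shows how one such convolutional block and a progressively widened discriminator could be assembled in Keras. This is a minimal sketch under the hyperparameters stated above; the helper names (`build_conv_block`, `build_discriminator`) and the reading of "stride 3" as a 3x3 kernel are our own assumptions, not part of a released implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_conv_block(filters, encoder=True):
    """One PGAAE block: two (Conv2D -> BatchNorm -> ReLU) sets, closed by
    max pooling (encoder) or up-sampling (decoder). The text says 'stride 3';
    we read this as a 3x3 kernel so spatial sizes shrink only via pooling --
    switch to strides=3 if a literal stride was meant."""
    block = tf.keras.Sequential()
    for _ in range(2):
        block.add(layers.Conv2D(filters, kernel_size=3, padding="same"))
        block.add(layers.BatchNormalization(momentum=0.95))
        block.add(layers.ReLU())
    block.add(layers.MaxPooling2D() if encoder else layers.UpSampling2D())
    return block

def build_discriminator(step, latent_dim=256):
    """Latent-space discriminator whose width grows by 500 units per
    progressive step (500, 1000, ..., 3000)."""
    width = 500 * step
    return tf.keras.Sequential([
        tf.keras.Input(shape=(latent_dim,)),
        layers.Dense(width), layers.LeakyReLU(0.2),
        layers.Dense(width), layers.LeakyReLU(0.2),
        layers.Dense(1, activation="sigmoid"),
    ])
```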
#### 4.2.4 Denoising Autoencoder
Another major issue affecting the training of generative models is the tendency to learn the identity function: if the autoencoder has more nodes in the hidden layer than inputs, it can simply memorize the data, so that the output equals the input. In that case it does not perform any useful representation learning or dimensionality reduction.
Denoising autoencoders [25] are a stochastic version of basic autoencoders. A denoising autoencoder addresses the identity-function issue by randomly corrupting the input images that the autoencoder must then reconstruct. By making the training and reconstruction task more challenging, denoising autoencoders have been shown to mitigate the identity-function issue and to learn more robust representations.
Another way to improve generalization, reduce the vanishing gradient problem and improve convergence is to add noise to the discriminator inputs as well [26]. Denoising autoencoders, adversarial training and noise injection have previously been used separately to improve autoencoder performance. We augment the adversarial model with both a denoising feature applied to the generator and noise injection into the discriminator. The denoising feature was particularly helpful in achieving good reconstruction performance for a latent space with 256 latent features. We opted for Gaussian noise with standard deviation \(\sigma=0.1\) (we tried values ranging from 0.05 to 0.9: there is no significant difference in reconstruction accuracy in the range \(\sigma\in[0.1,0.3]\), while accuracy starts to deteriorate quickly beyond \(\sigma=0.3\)). Although we are not interested in low-dimensional latent spaces (reconstructed images would be too blurry and not varied enough), we found no significant advantage in injecting noise in such cases.
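A minimal way to realize both ideas in Keras is sketched below: the encoder is fed inputs corrupted with Gaussian noise (\(\sigma=0.1\)) while being asked to reconstruct the clean image, and a `GaussianNoise` layer is prepended to the discriminator so that its inputs are perturbed only at training time. The function names are illustrative, not from a released codebase.

```python
import tensorflow as tf
from tensorflow.keras import layers

def add_input_noise(images, stddev=0.1):
    """Corrupt inputs for the denoising objective; reconstruction targets stay clean."""
    noisy = images + tf.random.normal(tf.shape(images), stddev=stddev)
    return tf.clip_by_value(noisy, 0.0, 1.0)

def noisy_discriminator(latent_dim=256, stddev=0.1, width=1000):
    """Discriminator whose inputs are perturbed during training only
    (GaussianNoise is inactive at inference time)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(latent_dim,)),
        layers.GaussianNoise(stddev),
        layers.Dense(width), layers.LeakyReLU(0.2),
        layers.Dense(1, activation="sigmoid"),
    ])

# Denoising reconstruction step (sketch):
# recon = decoder(encoder(add_input_noise(batch)))
# loss  = tf.reduce_mean(tf.square(recon - batch))   # clean targets
```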
#### 4.2.5 Mini Batch Discrimination
Mini Batch Discrimination [20] was originally introduced as a technique to mitigate collapse of the generator network. It lets the discriminator of a generative adversarial network discriminate between whole minibatches of samples rather than between individual samples.
The idea is for the discriminator to consider an entire batch of data instead of looking at a single input sample. Mode collapse is then much easier to spot, since the discriminator learns that whenever all the samples in a batch are very close to each other, the batch has to be rejected. This forces the generator to produce many good outputs in each data batch. An \(L_{1}\) penalization term is concatenated with the original input and fed to the discriminator's last-but-one layer. This penalization quantifies the closeness between data in the same minibatch, causing the discriminator to reject batches that are internally too similar. This discriminative technique, along with the progressive growing network structure, helped to avoid mode collapse for batches of small and middle size (16-64) at the cost of a small increase in discriminator parameters. Indeed, training an AAE with small batch sizes increases the chances of falling into mode collapse, but small batch sizes are forced upon us by hardware limitations due to the high-resolution images and the high-dimensional latent space.
Following [20], a minibatch discrimination layer has two hyperparameters to be tuned, namely \(B\) and \(C\) in the original paper, i.e. the number of discrimination kernels to use and the dimensionality of the space in which the closeness of samples is computed. In principle, the larger \(B\) and \(C\) are, the better the results, at the price of a lower computation speed. A good compromise between accuracy and speed was the choice \(B=16\) and \(C=5\).
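For concreteness, a minibatch discrimination layer along the lines of [20] can be written as a custom Keras layer as follows, with \(B=16\) kernels of dimensionality \(C=5\); this is a hedged sketch of the technique rather than the exact layer used in our experiments.

```python
import tensorflow as tf

class MinibatchDiscrimination(tf.keras.layers.Layer):
    """Appends per-sample similarity statistics computed across the
    minibatch, which discourages collapsed generators (cf. [20])."""
    def __init__(self, num_kernels=16, kernel_dim=5, **kwargs):  # B=16, C=5
        super().__init__(**kwargs)
        self.num_kernels = num_kernels
        self.kernel_dim = kernel_dim

    def build(self, input_shape):
        in_dim = int(input_shape[-1])
        self.T = self.add_weight(
            name="T", shape=(in_dim, self.num_kernels * self.kernel_dim),
            initializer="glorot_uniform", trainable=True)

    def call(self, x):
        # Project each sample into B kernels of dimension C.
        m = tf.reshape(tf.matmul(x, self.T),
                       (-1, self.num_kernels, self.kernel_dim))
        # Pairwise L1 distances between samples, computed per kernel.
        diff = tf.expand_dims(m, 0) - tf.expand_dims(m, 1)    # (N, N, B, C)
        l1 = tf.reduce_sum(tf.abs(diff), axis=-1)              # (N, N, B)
        # Similarity to the rest of the batch, appended to the features.
        o = tf.reduce_sum(tf.exp(-l1), axis=1)                  # (N, B)
        return tf.concat([x, o], axis=-1)

# Used just before the discriminator's last-but-one layer, e.g.:
# h = MinibatchDiscrimination(16, 5)(h)
```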
#### 4.2.6 Performance
After a thorough fine-tuning of all three network structures (encoder, decoder and discriminator), our PGAAE with 256 latent features achieves a reconstruction error, measured through RMSE, that ranges from 0.08 to 0.24 depending on whether we consider the most common or the rarest skin lesion class. Data augmentation was necessary to overcome the scarcity and imbalance of the dataset. Mode collapse is greatly reduced, and we are able to generate varied, good-quality skin lesion images. The ABELE explainer is now able to generate meaningful explanations.
## 5 Explanations
The outcome of the classifier and of the explainer is presented to the user through a compact visual interface organized into four panes: 1) the original image that was processed by the CNN, together with the label of the predicted classification; 2) an attention area that emphasizes the areas that had a positive (brown color) or negative (green color) contribution to the classification; 3) a set of synthetic prototypes generated by the AAE that are classified with the same class as the input; 4) a counterexample, i.e. a synthetic image presenting a prototype that is classified with a different class than the input.
Figure 6 presents the outcome for an image classified as _Melanocytic nevus_. From the attention area the user is able to evaluate which parts of the input image were relevant for the CNN. The presentation of the results is complemented with four prototypes: these images are generated by the AAE and help the user build confidence in the decision yielded by the black-box by comparing the original image with the four exemplars. The counterexample, instead, probes the result of the black-box by generating an image similar to the input but classified by the CNN with a different class.
ABELE also generates statistics on the neighborhood of the input in the latent space. This information helps to understand how the model space of the CNN is fragmented around the given input, and gives the doctor a sense of the classes that the black-box locates around the given instance. For the example in Figure 6, the statistics and rules on the latent space are the following:
\[\text{Neighborhood}=\{NV:41,\;BCC:18,\;AK:4,\;BKL:26,\;DF:11\}\] \[e=\{\,rules=\{7>-1.01,\;99\leq 0.07,\;225>-0.75,\;255\leq-0.02,\] \[\qquad 238>0.15,\;137\leq-0.14\}\rightarrow\{class:NV\},\] \[\;counter\text{-}rules=\{\{7\leq-1.01\}\rightarrow\{class:BCC\}\}\,\}\]
Here the _Neighborhood_ entry shows the class composition of the synthetic latent instances generated by the AAE. The rules and counter-rules are expressed in terms of the ordinal positions of the dimensions in the latent space. Of course, this representation is intended for internal use: it provides no directly accessible information to the human, but it may be exploited by the visual interface to implement an interactive refinement of the provided explanation.
## 6 Conclusion
Figure 6: ABELE graphic explanation for a _Melanocytic nevus_.

In this paper we propose a classification framework composed of a CNN model for the ISIC 2019 Challenge classification task and a local-outcome explainer that produces exemplars and counter-exemplars as an explanation. Our analytical framework contributes _a)_ a CNN model that classifies each input image into one of eight classes; _b)_ an explainer based on the synthesis of exemplars and
counter-exemplars, which exploits an adversarial autoencoder (AAE) to produce the images for the explanation; _c)_ a tuned exemplar-generation process that overcomes the mode collapse issue and provides acceptable prototypes for the explanations. The CNN and the AAE follow distinct training processes, which demonstrates the possibility of using the explainer even with an external black-box, for example to test its fairness and dependability.
The analytical pipeline presented here is the core part of a wider system in which the interaction with the user will be further developed. In particular, we plan to enable an explorative process over the latent space of the input, allowing the user to ask for additional exemplars or counter-exemplars. Given the high impact on the user's cognitive processes, we are designing qualitative evaluation criteria with domain experts (i.e. doctors) to test the different dimensions of the explanations: usability, utility, comprehensibility, fidelity, etc.
## Acknowledgment
This work is partially supported by the European Community H2020 programme under the funding schemes: G.A. 825619 _AI4EU_ (ai4eu.eu), G.A. 952026 _HumanE AI Net_ (humane-ai.eu), G.A. 952215 _TAILOR_ (tailor-network.eu).
|
2310.03222 | The bright side of simple heuristics for the TSP | The greedy and nearest-neighbor TSP heuristics can both have $\log n$
approximation factors from optimal in worst case, even just for $n$ points in
Euclidean space. In this note, we show that this approximation factor is only
realized when the optimal tour is unusually short. In particular, for points
from any fixed $d$-Ahlfor's regular metric space (which includes any
$d$-manifold like the $d$-cube $[0,1]^d$ in the case $d$ is an integer but also
fractals of dimension $d$ when $d$ is real-valued), our results imply that the
greedy and nearest-neighbor heuristics have \emph{additive} errors from optimal
on the order of the \emph{optimal} tour length through \emph{random} points in
the same space, for $d>1$. | Alan Frieze, Wesley Pegden | 2023-10-05T00:44:18Z | http://arxiv.org/abs/2310.03222v1 | # The bright side of simple heuristics for the TSP
###### Abstract
The greedy and nearest-neighbor TSP heuristics can both have \(\log n\) approximation factors from optimal in worst case, even just for \(n\) points in Euclidean space. In this note, we show that this approximation factor is only realized when the optimal tour is unusually short. In particular, for points from any fixed \(d\)-Ahlfor's regular metric space (which includes any \(d\)-manifold like the \(d\)-cube \([0,1]^{d}\) in the case \(d\) is an integer but also fractals of dimension \(d\) when \(d\) is real-valued), our results imply that the greedy and nearest-neighbor heuristics have _additive_ errors from optimal on the order of the _optimal_ tour length through _random_ points in the same space, for \(d>1\).
## 1 Introduction
Papadimitriou [6] showed that finding an optimum Traveling Salesperson Tour is NP-hard even for points in Euclidean space, while Arora [1] and Mitchell [5] gave polynomial-time approximation schemes for the Euclidean TSP. In practice, however, these schemes have resisted efficient implementation, and Euclidean TSP approximation still leans heavily on heuristics which are not known to be asymptotically optimal. For metric TSP, Christofides' algorithm achieves an approximation ratio of 1.5, which saw a slight improvement with the recent breakthrough of Karlin, Klein, and Oveis Gharan [4].
Perhaps the simplest heuristics to find a tour through \(n\) points are the Nearest Neighbor heuristic, which grows (and in the end, closes) a path
by jumping at each step to the nearest unvisited point, and the Greedy heuristic, which at each step chooses the shortest available edge that would neither create a vertex of degree 3 nor close a cycle, except on the \(n\)th step. For \(n\) points in an arbitrary metric space, each of these heuristics is known to give a tour within a factor of \(\log n\) of optimal [2, 3], and examples are known which realize these approximation ratios, even just in Euclidean space. But our main result implies that for \(n\) points in the unit square whose optimal tour has length \(\Omega(\sqrt{n})\) (as is the typical case), the Greedy and Nearest Neighbor heuristics will both return a tour whose length is within a constant factor of optimal.
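For concreteness, both heuristics can be written in a few lines of plain Python; the sketch below (function names ours, Euclidean distances, no attempt at efficiency) is meant only to fix the two procedures described above.

```python
import math
from itertools import combinations

def dist(p, q):
    return math.dist(p, q)

def nearest_neighbor_tour(points):
    """Start at points[0]; repeatedly jump to the nearest unvisited point."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist(points[last], points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour  # implicitly closed by the edge tour[-1] -> tour[0]

def greedy_tour(points):
    """Repeatedly add the shortest edge that keeps all degrees <= 2 and
    closes no cycle before the n-th edge (union-find tracks components)."""
    n = len(points)
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    degree = [0] * n
    edges = sorted(combinations(range(n), 2),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
    chosen = []
    for u, v in edges:
        if degree[u] < 2 and degree[v] < 2:
            ru, rv = find(u), find(v)
            if ru != rv or len(chosen) == n - 1:
                chosen.append((u, v))
                degree[u] += 1; degree[v] += 1
                parent[ru] = rv
                if len(chosen) == n:
                    break
    return chosen  # n edges forming a Hamiltonian cycle
```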
We will prove our results not just for full-dimensional Euclidean space but for any sufficiently regular metric space of dimension \(d>1\); the point of this generality is to emphasize that for the greedy or nearest-neighbor algorithms to have poor approximation ratios on some input, the input must really admit an unexpectedly short tour given the space its points are taken from, rather than, say, merely because the input was actually chosen from a lower-dimensional subset of the space than expected.
A metric space \(\mathcal{M}\) equipped with a measure \(\mu\) is _d-Ahlfor's regular_ if there are constants \(C,D\) so that
\[Cr^{d}\leq\mu(B(p,r))\leq Dr^{d} \tag{1}\]
for all \(p\in\mathcal{M}\) and \(0<r\leq\operatorname{diam}(\mathcal{M})\). Here \(B(p,r)\) is the ball of radius \(r\) centred at \(p\). Simple examples of regular metric spaces include subspaces of Euclidean space like unit cubes under Lebesgue measure (having integer dimensions) or fractals like the Sierpinski gasket under the Hausdorff measure (having intermediate dimensions)--for example, the metric space induced in Euclidean space by any fractal generated by an iterative function system satisfying the open set condition is Ahlfor's regular for some \(d\), for the Hausdorff measure of appropriate dimension.
We will prove the following about optimal TSP tours in Ahlfor's-regular spaces:
**Theorem 1**.: _Suppose \(x_{1},x_{2},\dots\) is a sequence of i.i.d points drawn from a \(d\)-Ahlfor's regular probability measure on the metric space \(\mathcal{M}\). Then there exists a constant \(C\) so that for \(X_{n}=\{x_{1},\dots,x_{n}\}\), the length of the optimal tour through \(X_{n}\) has length at least \(Cn^{1-\frac{1}{d}}\) for all sufficiently large \(n\), with probability 1._
Our main result for the nearest-neighbor and greedy heuristics is then the following:
**Theorem 2**.: _If the bounded metric space \(\mathcal{M}\) admits a \(d\)-Ahlfor's regular measure, then there is a constant \(C\) and an \(n_{0}\) such that for any \(n\) points in \(\mathcal{M}\) with \(n\geq n_{0}\), the nearest-neighbor and greedy algorithms produce a tour of length at most \(Cn^{1-\frac{1}{d}}\)._
## 2 Proofs
Proof of Theorem 1.: Let \(D\) be the constant from (1) guaranteed to exist for \((\mathcal{M},\mu)\). Let \(r=\left(\frac{1}{Dn}\right)^{1/d}\). For any fixed \(i\), let \(Z_{i}\) be the indicator for the event \(\mathcal{E}_{i}\) that \(x_{i}\) is the unique point from \(X_{n}\) in \(B(x_{i},r)\). Then
\[\Pr(\mathcal{E}_{i})\geq(1-Dr^{d})^{n-1}\geq e^{-1}.\]
Let \(Z=Z_{1}+\cdots+Z_{n}\). Thus \(\mathbb{E}(Z)\geq e^{-1}n\). Let \(\gamma=\log^{2}n\) and let \(\mathcal{B}\) be the event that there exists \(i\) such that \(B(x_{i},r)\) contains more than \(\gamma\) points from \(X_{n}\) other than \(x_{i}\). Then

\[\Pr(\mathcal{B})\leq n\Pr(Bin(n,Dr^{d})\geq\gamma)\leq n{n\choose\gamma}(Dr^{d})^{\gamma}\leq n\left(\frac{e}{\gamma}\right)^{\gamma}\leq n^{-\log n}.\]
If \(\mathcal{B}\) does not occur then changing the value of one \(x_{i}\) only changes the value of \(Z\) by at most \(\log^{2}n\). If \(\mathcal{B}\) does occur then \(Z\) could change by at most \(n\). We will now use Warnke's _Typical bounded differences inequality_[7] to show that \(Z\) is concentrated around its mean.
**Theorem 3** (Warnke).: _Let \(X=(X_{1},\ldots,X_{N})\) be a family of independent random variables with \(X_{k}\) taking values in a set \(\Lambda_{k}\). Let \(\Gamma\subseteq\prod_{j\in[N]}\Lambda_{j}\) be an event and assume that the function \(f:\prod_{j\in[N]}\Lambda_{j}\to\mathbf{R}\) satisfies the typical Lipschitz condition: there are numbers \(c_{k},k\in[N]\) and \(d_{k},k\in[N]\) such that whenever \(x,y\) differ only in the \(k\)th coordinate, we have_
\[|f(x)-f(y)|\leq\begin{cases}c_{k}&\text{if }x\in\Gamma.\\ d_{k}&\text{otherwise}.\end{cases}\]
_Then for all numbers \(\gamma_{k},k\in[N]\) with \(\gamma_{k}\in(0,1)\),_
\[\Pr(|f(X)-\mathbb{E}(f(X))|\geq t)\leq\\ 2\exp\left\{-\frac{t^{2}}{2\sum_{k\in[N]}(c_{k}+\gamma_{k}(d_{k} -c_{k}))^{2}}\right\}+\Pr(X\notin\Gamma)\sum_{k\in[N]}\gamma_{k}^{-1}.\]
We will apply this theorem with \(f=Z,N=n,X=\{x_{1},\ldots,x_{n}\},\Gamma=\mathcal{B}^{c}\) and \(c_{k}=\gamma=\log^{2}n\), \(d_{k}=n\), \(\gamma_{k}=n^{-2}\) for \(k\in[n]\). This yields
\[\Pr(Z\leq\mathbb{E}(Z)-n^{2/3})\leq 2\exp\left\{-\frac{n^{4/3}}{n(\log^{2}n+1)^{2 }}\right\}+n^{3-\log n}=o(1).\]
So, w.h.p. there are at least \(n/3\) of the \(x_{i}\) that are at least \(r\) from their nearest neighbor. Theorem 1 follows immediately.
Proof of Theorem 2.: Consider any nearest-neighbor or greedy tour \(x_{1},\ldots,x_{n}\) through the point-set \(X=\{x_{1},\ldots,x_{n}\}\in\mathcal{M}\). We define a sequence of open balls \(B_{1},\ldots,B_{n-1}\), where \(B_{i}\) is centered at \(x_{i}\) and has radius \(\operatorname{dist}(x_{i},x_{i+1})\). Observe that when the edge from \(\{x_{i},x_{i+1}\}\) is selected, there can be no other vertices \(x_{j}\) which would be available for selection but are closer to \(x_{i}\) than \(\operatorname{dist}(x_{i},x_{i+1})\). This implies that the family \(\mathcal{B}=\{B_{i}\}\) has the following property:
(\(\star\)) For any distinct balls \(B_{i},B_{j}\in\mathcal{B}\), we have either that the \(B_{i}\) doesn't contain the center of \(B_{j}\) or that \(B_{j}\) doesn't contain the center of \(B_{i}\) (according to whether \(i<j\) or \(j<i\), respectively).
Now we partition \(\mathcal{B}\) into sets \(\mathcal{B}_{1},\mathcal{B}_{2},\ldots,\) where each \(\mathcal{B}_{j}\) consists of every ball \(D\in\mathcal{B}\) whose radius \(r\) satisfies \(\frac{1}{2^{j}}<r\leq\frac{1}{2^{j-1}}\).
Now each family \(\mathcal{B}_{i}\) consists of balls whose radii differ by at most a factor of \(2\). In particular, since (\(\star\)) implies that the distance between the centers of two balls in \(\mathcal{B}\) is at least the minimum of their radii, within each family \(\mathcal{B}_{i}\) the distance between the centers of two balls is at least half the maximum of their radii. In particular, if we define families \(\tilde{\mathcal{B}}_{i}\) by rescaling the balls in each family \(\mathcal{B}_{i}\) by a factor of \(\frac{1}{2}\), then each family \(\tilde{\mathcal{B}}_{i}\) is a family of disjoint balls. As such, we have from the condition (1) that
\[|\mathcal{B}_{k}|\leq C2^{kd}, \tag{2}\]
for a fixed constant \(C\) depending only on the metric space \(\mathcal{M}\).
In particular, we can bound the total length \(L\) of the nearest neighbor tour by the radii \(r(B)\) of the balls \(B\in\mathcal{B}\) as follows:
\[L\leq\sum_{B\in\mathcal{B}}r(B)=\sum_{k\geq 1}\sum_{B\in\mathcal{B}_{k}}r(B) \leq C_{0}\sum_{k=1}^{k_{0}}2^{k(d-1)}\leq C_{0}\frac{2^{k_{0}d(1-1/d)}}{2^{d -1}-1}, \tag{3}\]
where \(k_{0}\) is smallest integer for which the bound \(C2^{k_{0}d}\) on \(|\mathcal{B}_{k_{0}}|\) from (2) exceeds \(n\). We have thus that for any \(d>1\) and a constant \(C_{1}\) depending
on the metric space \(\mathcal{M}\) but not the point set \(X\), that
\[L\leq C_{1}n^{1-\frac{1}{d}},\]
proving the theorem.
|
2302.13220 | Combining Graphical and Algebraic Approaches for Parameter
Identification in Latent Variable Structural Equation Models | Measurement error is ubiquitous in many variables - from blood pressure
recordings in physiology to intelligence measures in psychology. Structural
equation models (SEMs) account for the process of measurement by explicitly
distinguishing between latent variables and their measurement indicators. Users
often fit entire SEMs to data, but this can fail if some model parameters are
not identified. The model-implied instrumental variables (MIIVs) approach is a
more flexible alternative that can estimate subsets of model parameters in
identified equations. Numerous methods to identify individual parameters also
exist in the field of graphical models (such as DAGs), but many of these do not
account for measurement effects. Here, we take the concept of
"latent-to-observed" (L2O) transformation from the MIIV approach and develop an
equivalent graphical L2O transformation that allows applying existing graphical
criteria to latent parameters in SEMs. We combine L2O transformation with
graphical instrumental variable criteria to obtain an efficient algorithm for
non-iterative parameter identification in SEMs with latent variables. We prove
that this graphical L2O transformation with the instrumental set criterion is
equivalent to the state-of-the-art MIIV approach for SEMs, and show that it can
lead to novel identification strategies when combined with other graphical
criteria. | Ankur Ankan, Inge Wortel, Kenneth A. Bollen, Johannes Textor | 2023-02-26T03:11:19Z | http://arxiv.org/abs/2302.13220v1 | # Combining Graphical and Algebraic Approaches for
###### Abstract.
Measurement error is ubiquitous in many variables - from blood pressure recordings in physiology to intelligence measures in psychology. Structural equation models (SEMs) account for the process of measurement by explicitly distinguishing between _latent_ variables and their measurement _indicators_. Users often fit entire SEMs to data, but this can fail if some model parameters are not identified. The model-implied instrumental variables (MIVs) approach is a more flexible alternative that can estimate subsets of model parameters in identified equations. Numerous methods to identify individual parameters also exist in the field of graphical models (such as DAGs), but many of these do not account for measurement effects. Here, we take the concept of "latent-to-observed" (L2O) transformation from the MIIV approach and develop an equivalent graphical L2O transformation that allows applying existing graphical criteria to latent parameters in SEMs. We combine L2O transformation with graphical instrumental variable criteria to obtain an efficient algorithm for non-iterative parameter identification in SEMs with latent variables. We prove that this graphical L2O transformation with the instrumental set criterion is equivalent to the state-of-the-art MIIV approach for SEMs, and show that it can lead to novel identification strategies when combined with other graphical criteria.
## 1. Introduction
Graphical models such as directed acyclic graphs (DAGs) are currently used in many disciplines for causal inference from observational studies. However, the variables on the causal pathways modelled are often different from those being measured. Imperfect measures cover a broad range of sciences, including health and medicine (e.g., blood pressure, oxygen level), environmental sciences (e.g., measures of pollution exposure of individuals), and the social (e.g., measures of socioeconomic status) and behavioral sciences (e.g., substance abuse).
Many DAG models do not differentiate between the variables on the causal pathways and their actual measurements in a dataset (Tennant et al., 2019). While this omission is defensible when the causal variables can be measured reliably (e.g., age), it becomes problematic when the link between a variable and its measurement is more complex. For example, graphical models employed in fields like Psychology or Education Research often take the form of _latent variable structural equation models_ (LVSEMs, Figure 1; Bollen (1989)), which combine a _latent level_ of unobserved variables and their hypothesized causal links with a _measurement level_ of their observed indicators (e.g., responses to questionnaire items). This structure is so common that LVSEMs are sometimes simply referred to as SEMs. In contrast, models that do not differentiate between causal factors and their measurements are traditionally called _simultaneous equations_ or _path models1_.
Footnote 1: Path models can be viewed as LVSEMs with all noise set to 0; some work on path models, importantly by Sewall Wright himself, does incorporate latent variables.
Once a model has been specified, estimation can be performed in different ways. SEM parameters are often estimated all at once by iteratively minimizing some difference measure between the observed and the model-implied covariance matrices. However, this "global" approach has some pitfalls. First, all
model parameters must be algebraically identifiable for a unique minimum to exist; if only a single model parameter is not identifiable, the entire fitting procedure may not converge (Boomsma, 1985) or provide meaningless results. Second, local model specification errors can propagate through the entire model (Bollen et al., 2007). Alternatively, Bollen (1996) introduced a "local", equation-wise approach for SEM parameter identification termed "model-implied instrumental variables" (MIIVs), which is non-iterative and applicable even to models where not all parameters are simultaneously identifiable. MIIV-based SEM identification is a mature approach with a well-developed underlying theory as well as implementations in multiple languages, including R (Fisher et al., 2019).
Of all the model parameters that are identifiable in principle, any given estimator (such as the MIIV-based approach) can typically only identify parameters in identified equations and identified parameters in underidentified equations. Different identification methods are therefore complementary and can allow more model parameters to be estimated. Having a choice of such methods can help users to keep the stages of _specification_ and _estimation_ separated. For example, a researcher who only has access to global identification methodology might be tempted to impose model restrictions just to "get a model identified" and not because there is a theoretical rationale for the restrictions imposed. With more complementary methods to choose from, researchers can instead base model specification on substantive theory and causal assumptions.
The development of parameter identification methodology has received intense attention in the graphical modeling field. The most general identification algorithm is Pearl's do-calculus, which provides a complete solution in non-parametric models (Huang and Valtorta, 2006; Shpitser and Pearl, 2006). The backdoor and front-door criteria provide more convenient solutions in special cases (Pearl, 2009). While there is no practical general algorithm to decide identifiability for models that are linear in their parameters, there has been a flurry of work on graphical criteria for this case, such as instrumental sets (Brito and Pearl, 2002), the half-trek criterion (Foygel et al., 2012), and auxiliary variables (Chen et al., 2017). Unfortunately, these methods were all developed for the acyclic directed mixed graph (ADMG) framework and require at least the variables connected to the target parameter to be observed - which is rarely the case in SEMs. Likewise, many criteria in graphical models are based on "separating" certain paths by conditioning on variables, whereas no such conditioning-based criteria exist for SEMs.
The present paper aims to make identification methods from the graphical model literature available to the SEM field. We offer the following contributions:
* We note that Bollen (1996)'s latent-to-observed (L2O) transformation that transforms a latent variable SEM into a model with only observed variables can be used more generally in models containing arbitrary mixtures of latent and observed variables (Section 3).
* We present a graphical equivalent of L2O transformation that allows us to apply known graphical criteria to SEMs (Section 4).
* We prove that Bollen's MIIV approach (Bollen, 1996; Bollen and Bauer, 2004; Bollen et al., 2022) is equivalent to a graphical L2O transformation followed by the application of the graphical instrumental set criterion (Brito and Pearl (2002); Section 5).
* We give examples where the graphical L2O transformation approach can identify more parameters compared to the MIIV approach implemented in the R package MIIVsem (Fisher et al. (2019); Section 6).
Thus, by combining the L2O transformation idea from the SEM literature with identification criteria from the graphical models field, we bridge these two fields - hopefully to the benefit of both.
Figure 1: SEM based on the _Industrialization and Political Democracy_ model (Bollen, 1989) with latent variables \(l_{1}\) (industrialization) and \(l_{2}\) (political democracy). The model contains 3 indicators for \(l_{1}\): (1) gross national product (\(y_{1}\)), (2) energy consumption (\(y_{2}\)), and (3) labor force in industry (\(y_{3}\)), and 4 indicators for \(l_{2}\): (1) press freedom rating (\(y_{4}\)), (2) political opposition freedom (\(y_{5}\)), (3) election fairness (\(y_{6}\)), and (4) legislature effectiveness (\(y_{7}\)). \(\lambda_{11}\ldots\lambda_{13},\lambda_{24}\ldots\lambda_{27},\) and \(\beta_{11}\) are the path coefficients. \(\epsilon_{1},\ldots,\epsilon_{7},\) and \(\zeta_{1}\) represent noise/errors.
Background
In this section, we give a brief background on basic graphical terminology and define SEMs.
### Basic Terminology
We denote variables using lowercase letters \((x_{i})\), sets and vectors of variables using uppercase letters \((X)\), and matrices using boldface \((\mathbf{\Lambda})\). We write the cardinality of a set \(V\) as \(|V|\), and the rank of a matrix \(\mathbf{\Lambda}\) as \(\operatorname{rk}(\mathbf{\Lambda})\). A _mixed graph_ (or simply _graph_) \(\mathcal{G}=(V,A)\) is defined by sets of variables (nodes) \(V=\{x_{1},\ldots,x_{n}\}\) and arrows \(A\), where arrows can be directed \((x_{i}\to x_{j})\) or bi-directed \((x_{i}\leftrightarrow x_{j})\). A variable \(x_{i}\) is called a _parent_ of another variable \(x_{j}\) if \(x_{i}\to x_{j}\in A\), or a _spouse_ of \(x_{j}\) if \(x_{i}\leftrightarrow x_{j}\in A\). We denote the set of parents of \(x_{i}\) in \(\mathcal{G}\) as \(\mathrm{Pa}_{\mathcal{G}}(x_{i})\).
**Paths:** A _path_ of length \(k\) is a sequence of \(k\) variables such that each variable is connected to its neighbours by an arrow. A _directed path_ from \(x_{i}\) to \(x_{j}\) is a path on which all arrows point away from the start node \(x_{i}\). For a path \(\pi\), let \(\pi[x_{i}\sim x_{j}]\) denote its subsequence from \(x_{i}\) to \(x_{j}\), in reverse order when \(x_{i}\) occurs after \(x_{j}\); for example, if \(\pi=x_{1}\gets x_{2}\to x_{3}\) then \(\pi[x_{2}\sim x_{3}]=x_{2}\to x_{3}\) and \(\pi[x_{1}\sim x_{2}]=x_{2}\to x_{1}\). Importantly, this definition of a path is common in DAG literature but is different from the SEM literature, where "path" typically refers to a single arrow between two variables. Hence, a path in a DAG is equivalent to a sequence of paths in path models. An _acyclic directed mixed graph_ (ADMG) is a mixed graph with no directed path of length \(\geq 2\) from a node to itself.
**Treks and Trek Sides:** A _trek_ (also called _open path_) is a path that does not contain a _collider_, that is, a subsequence \(x_{i}\to x_{j}\gets x_{k}\). A path that is not open is a _closed path_. Let \(\pi\) be a trek from \(x_{i}\) to \(x_{j}\); then \(\pi\) contains a unique variable \(t\), called the _top_ of \(\pi\), such that \(\pi[t\sim x_{i}]\) and \(\pi[t\sim x_{j}]\) are both directed paths (which could both consist of a single node). We call \(\pi^{\leftarrow}:=\pi[t\sim x_{i}]\) the _left side_ and \(\pi^{\rightarrow}:=\pi[t\sim x_{j}]\) the _right side_ of \(\pi\).2
Footnote 2: In the literature, treks are also often represented as tuples of their left and right sides.
**Trek Intersection:** Consider two treks \(\pi_{i}\) and \(\pi_{j}\); we say that \(\pi_{i}\) and \(\pi_{j}\)_intersect_ if they contain a common variable \(v\). We say that they _intersect on the same side_ (have a same-sided intersection) if \(v\) occurs on \(\pi_{i}^{\leftarrow}\) and \(\pi_{j}^{\leftarrow}\), or on \(\pi_{i}^{\rightarrow}\) and \(\pi_{j}^{\rightarrow}\); in particular, if \(v\) is the top of \(\pi_{i}\) or \(\pi_{j}\), then the intersection is always same-sided. Otherwise, \(\pi_{i}\) and \(\pi_{j}\)_intersect on opposite sides_ (have an opposite-sided intersection).
**t-separation:** Consider two sets of variables, \(L\) and \(R\), and a set \(T\) of treks. We say that the tuple \((L,R)\)_t-separates_ (is a \(t\)-separator of) \(T\) if every trek in \(T\) contains either a variable in \(L\) on its left side or a variable in \(R\) on its right side. For two sets of variables, \(A\) and \(B\), we say that \((L,R)\)_t_-separates \(A\) and \(B\) if it \(t\)-separates all treks between \(A\) and \(B\). The _size_ of a \(t\)-separator \((L,R)\) is \(|L|+|R|\).
### Structural Equation Models
We now define structural equation models (SEMs) as they are usually considered in the DAG literature (e.g., Sullivant et al., 2010). This definition is the same as the Reticular Action Model (RAM) representation (McArdle and McDonald, 1984) from the SEM literature. A _structural equation model_ (SEM) is a system of equations linear in their parameters such that:
\[X=\mathbf{B}X+E\]
where \(X\) is a vector of variables (both latent and observed), \(\mathbf{B}\) is a \(|X|\times|X|\) matrix of _path coefficients_, and \(E=\{\epsilon_{1},\ldots,\epsilon_{|X|}\}\) is a vector of error terms with a positive definite covariance matrix \(\mathbf{\Phi}\) (which has typically many or most of its off-diagonal elements set to \(0\)) and zero means.3 The _path diagram_ of an SEM \((\mathbf{B},\mathbf{\Phi})\) is a mixed graph with nodes \(V=X\cup E\) and arrows \(A=\{\epsilon_{i}\to x_{i}\mid i\in 1,\ldots,|X|\}\cup\{x_{i}\to x_{j}\mid B[i,j]\neq 0\}\cup\{\epsilon_{i}\leftrightarrow \epsilon_{j}\mid i\neq j,\mathbf{\Phi}[i,j]\neq 0\}\). We also write \(\beta_{x_{i}\rightarrow x_{j}}\) for the path coefficients in \(\mathbf{B}\) and \(\phi_{\epsilon_{i}}\) for the diagonal entries (variances) in \(\mathbf{\Phi}\). Each equation in the model corresponds to one node in this graph, where the node is the dependent variable and its parent(s) are the explanatory variable(s). Each arrow represents one parameter to be estimated, i.e., a _path coefficient_ (e.g., directed arrow between latents and observed variables), a _residual covariance_ (bi-directed arrow), or a _residual variance_ (directed arrow from error term to latent or indicator). However, some of these parameters could be fixed; for example, at least one parameter per latent variable needs to be fixed to set its scale, and covariances between observed exogenous variables (i.e., observed variables that have no parents) are typically fixed to their observed values. In this paper, we focus on estimating the path coefficients. We only consider _recursive_ SEMs in this paper - i.e., where the path diagram is an ADMG - even though the methodology can be generalized.
Footnote 3: This can be extended to allow for non-zero means, but our focus here is on the covariance structure, so we omit that for simplicity.
Sullivant et al. (2010) established an important connection between treks and the ranks of submatrices of the covariance matrix, which we will heavily rely on in our paper.
**Theorem 1**.: _Trek separation; (Sullivan et al., 2010, Theorem 2.8) Given an SEM \(\mathcal{G}\) with an implied covariance matrix \(\mathbf{\Sigma}\), and two subsets of variables \(A,B\subseteq X\),_
\[\text{rk}(\mathbf{\Sigma}[A,B])\leq min\left\{|L|+|R|\mid(L,R)\text{ $t$-separates $A$ and $B$}\right\}\]
_where the inequality is tight for generic covariance matrices implied by \(\mathcal{G}\)._
In the special case \(A=\{x_{1}\},B=\{x_{2}\}\), Theorem 1 implies that \(x_{1}\) and \(x_{2}\) can only be correlated if they are connected by a trek. Although the compatible covariance matrices of SEMs can also be characterized in terms of \(d\)-separation (Chen and Pearl, 2014), we use \(t\)-separation for our purpose because it does not require conditioning on variables, and it identifies more constraints on the covariance matrix implied by SEMs than d-separation (Sullivan et al., 2010).
## 3 Latent-to-observed transformations for SEMs
A problem with IV-based identification criteria is that they cannot be directly applied to latent variable parameters. The MIIV approach addresses this issue by applying the L2O transformation to these model equations, such that they only consist of observed variables. The L2O transformation in Bollen (1996) is presented on the LISREL representation of SEMs (see Supplementary Material). In this section, we first briefly introduce "scaling indicators", which are required for performing L2O transformations. We then use it to define the L2O transformation on the RAM notation (defined in Section 2.2) and show that with slight modification to the transformation, we can also use it to partially identify equations. We will, from here on, refer to this transformation as the "algebraic L2O transformation" to distinguish it from the purely graphical L2O transformation that we introduce later in Section 4.
### Scaling Indicators
The L2O transformation (both algebraic and graphical) uses the fact that any SEM is only identifiable if the scale of each latent variable is fixed to an arbitrary value (e.g., 1), introducing new algebraic constraints. These constraints can be exploited to rearrange the model equations in such a way that latent variables can be eliminated.
The need for scale setting is well known in the SEM literature and arises from the following lemma (since we could not find a direct proof in the literature - perhaps due to its simplicity - we give one in the Appendix).
**Lemma 1**.: _(Rescaling of latent variables). Let \(x_{i}\) be a variable in an SEM \((\mathbf{B},\mathbf{\Phi})\), choose a scaling factor \(\alpha\neq 0\), and define \((\mathbf{B}^{\prime},\mathbf{\Phi}^{\prime})\) by changing the coefficients as follows: for every parent \(p\) of \(x_{i}\), \(\beta^{\prime}_{p\to x_{i}}=\alpha^{-1}\,\beta_{p\to x_{i}}\); for every child \(c\) of \(x_{i}\), \(\beta^{\prime}_{x_{i}\to c}=\alpha\,\beta_{x_{i}\to c}\); for every spouse \(s\) of \(x_{i}\), \(\phi^{\prime}_{x_{i}\leftrightarrow s}=\alpha^{-1}\phi_{x_{i}\leftrightarrow s}\); and \(\phi^{\prime}_{x_{i}}=\alpha^{-2}\phi_{x_{i}}\). Then for all \(j,k\neq i\), \(\mathbf{\Sigma}[j,k]=\mathbf{\Sigma}^{\prime}[j,k]\)._
If \(x_{i}\) is a latent variable in an SEM, then Lemma 1 implies that we will get the same implied covariance matrix among the observed variables for all possible scaling factors. In other words, we need to set the scale of \(x_{i}\) to an arbitrary value to identify any parameters in such a model. Common choices are to either fix the error variance of every latent variable such that its total variance is 1, or to choose one indicator per latent and set its path coefficient to 1. The latter method is often preferred because it is simpler to implement. The chosen indicators for each latent are then called the _scaling indicators_. However, note that Lemma 1 tells us that we can convert any fit based on scaling indicators to a fit based on unit latent variance, so this choice does not restrict us in any way.
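Since the proof of Lemma 1 is deferred to the Appendix, a quick numerical check may still be helpful. The numpy sketch below uses a toy model of our own making and the convention \(X=\mathbf{\Lambda}X+E\) with \(\mathbf{\Lambda}[j,i]\) the coefficient of \(x_{i}\) in the equation of \(x_{j}\), so that \(\mathbf{\Sigma}=(I-\mathbf{\Lambda})^{-1}\mathbf{\Phi}(I-\mathbf{\Lambda})^{-\top}\); it rescales one latent variable as in Lemma 1 and confirms that every covariance entry not involving that variable is unchanged.

```python
import numpy as np

def implied_cov(Lam, Phi):
    """Sigma = (I - Lam)^{-1} Phi (I - Lam)^{-T} for X = Lam X + E."""
    n = Lam.shape[0]
    A = np.linalg.inv(np.eye(n) - Lam)
    return A @ Phi @ A.T

# Toy model: x3 -> x0 (latent), x0 -> x1, x2 (indicators).
# Variable order: [x0, x1, x2, x3]; Lam[j, i] = coefficient of x_i in eq. of x_j.
Lam = np.zeros((4, 4))
Lam[0, 3] = 0.7          # x3 -> x0
Lam[1, 0] = 1.0          # x0 -> x1 (scaling indicator)
Lam[2, 0] = 0.5          # x0 -> x2
Phi = np.diag([1.0, 0.3, 0.4, 1.0])
Sigma = implied_cov(Lam, Phi)

# Rescale the latent x0 by alpha as in Lemma 1.
alpha = 2.0
Lam2, Phi2 = Lam.copy(), Phi.copy()
Lam2[0, 3] *= 1 / alpha      # parent coefficients scaled by alpha^{-1}
Lam2[1, 0] *= alpha          # child coefficients scaled by alpha
Lam2[2, 0] *= alpha
Phi2[0, 0] *= 1 / alpha**2   # error variance scaled by alpha^{-2}
Sigma2 = implied_cov(Lam2, Phi2)

idx = [1, 2, 3]              # every variable except the rescaled x0
assert np.allclose(Sigma[np.ix_(idx, idx)], Sigma2[np.ix_(idx, idx)])
```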
### Algebraic L2O Transformation for RAM
The main idea behind algebraic L2O transformation is to replace each of the latent variables in the model equations by an observed expression involving the scaling indicator. As in Bollen (1996), we assume that each of the latent variables in the model has a unique scaling indicator. We show the transformation on a single model equation to simplify the notation. Given an SEM \(\mathcal{G}\) on variables \(X\), we can write the equation of any variable \(x_{i}\in X\) as:
\[x_{i}=\epsilon_{i}+\sum_{x_{j}\in C(x_{i})}\beta_{x_{j}\to x_{i}}x_{j}\]

where \(C(x_{i})=C_{l}(x_{i})\cup C_{o}(x_{i})\) is the set of covariates in the equation for \(x_{i}\), with \(C_{l}(x_{i})\) and \(C_{o}(x_{i})\) the latent and observed covariates, respectively. Since each latent variable \(x_{j}\) has a unique scaling indicator \(x^{s}_{j}\), we can write the latent variable as \(x_{j}=x^{s}_{j}-\epsilon_{x^{s}_{j}}\). Replacing all the latents in the above equation with their scaling indicators, we obtain:

\[x_{i}=\epsilon_{i}+\sum_{x_{j}\in C_{l}(x_{i})}\beta_{x_{j}\to x_{i}}(x^{s}_{j}-\epsilon_{x^{s}_{j}})+\sum_{x_{k}\in C_{o}(x_{i})}\beta_{x_{k}\to x_{i}}x_{k}\]
If \(x_{i}\) is an observed variable, the transformation is complete as the equation only contains observed variables. But if \(x_{i}\) is a latent variable, we can further replace \(x_{i}\) as follows:
\[x_{i}^{s}=\epsilon_{i}+\epsilon_{x_{i}^{s}}+\sum_{x_{j}\in C_{l}(x_{i})}\beta_{x_{j}\to x_{i}}(x_{j}^{s}-\epsilon_{x_{j}^{s}})+\sum_{x_{k}\in C_{o}(x_{i})}\beta_{x_{k}\to x_{i}}x_{k}\]
As the transformed equation now only consists of observed variables, IV-based criteria can be applied to check for identifiability of parameters.
### Algebraic L2O Transformations for Partial Equation Identification
In the previous section, we used the L2O transformation to replace all the latent variables in the equation with their scaling indicators, resulting in an equation with only observed variables. An IV-based estimator applied to these equations would try to estimate all the parameters together. However, there are cases (as shown in Section 6) where not all of the parameters of an equation are identifiable. If we apply L2O transformation to the whole equation, none of the parameters can be estimated.
Here, we outline an alternative, "partial" L2O transformation that replaces only some of the latent variables in the equation. Writing \(C_{l}^{*}(x_{i})\subset C_{l}(x_{i})\) for the set of latent covariates whose parameters we are interested in estimating, we can write the partial L2O transformation as:

\[x_{i}=\epsilon_{i}+\sum_{x_{j}\in C_{l}^{*}(x_{i})}\beta_{x_{j}\to x_{i}}(x_{j}^{s}-\epsilon_{x_{j}^{s}})+\sum_{x_{k}\in C_{l}(x_{i})\setminus C_{l}^{*}(x_{i})}\beta_{x_{k}\to x_{i}}x_{k}+\sum_{x_{m}\in C_{o}(x_{i})}\beta_{x_{m}\to x_{i}}x_{m}\]

Similar to the previous section, we can further apply an L2O transformation for \(x_{i}\) if it is itself a latent variable. As the parameters of interest are now attached to observed covariates in the transformed equation, IV-based criteria can be applied to check their identifiability while treating the variables in \(C_{l}(x_{i})\setminus C_{l}^{*}(x_{i})\) as part of the error term.
## 4 Graphical L2O Transformation
Having shown the algebraic L2O transformation, we now show that these transformations can also be done graphically for path diagrams. An important difference is that the algebraic transformation is applied to all equations in a model simultaneously by replacing all latent variables, whereas we apply the graphical transform only to a single equation at a time (i.e., starting from the original graph for every equation). Applying the graphical transformation to multiple equations simultaneously results in a non-equivalent model with a different implied covariance matrix.
Given an SEM \(\mathcal{G}\), the equation for any variable \(x_{j}\) can be written in terms of its parents in the path diagram as: \(x_{j}=\sum_{x_{k}\in\mathrm{Pa}_{\mathcal{G}}(x_{j})}\beta_{x_{k}\to x_{j}}x_{k}+\epsilon_{x_{j}}\). Using this equation, we can write the relationship between any latent variable \(x_{j}\) and its scaling indicator \(x^{s}_{j}\) as (where \(\beta_{x_{j}\to x_{j}^{s}}\) is fixed to 1):

\[x_{j}=x_{j}^{s}-\epsilon_{x_{j}^{s}}-\sum_{x_{k}\in\mathrm{Pa}_{\mathcal{G}}(x_{j}^{s})\setminus x_{j}}\beta_{x_{k}\to x_{j}^{s}}x_{k} \tag{1}\]
We use this graphical L2O transformation as follows. Our goal is to identify a path coefficient \(\beta_{x_{i}\to x_{j}}\) in a model \(\mathcal{G}\). If both \(x_{i}\) and \(x_{j}\) are observed, we leave the equation untransformed and apply graphical identification criteria (Chen and Pearl, 2014). Otherwise, we apply the graphical L2O transformation to \(\mathcal{G}\) with respect to \(x_{i}\), \(x_{j}\), or both variables - ensuring that the resulting model \(\mathcal{G}^{\prime}\) contains an arrow between two observed variables \(x_{i}^{s}\) and \(x_{j}^{s}\), where the path coefficient \(\beta_{x_{i}^{s}\to x_{j}^{s}}\) in \(\mathcal{G}^{\prime}\) equals \(\beta_{x_{i}\to x_{j}}\) in \(\mathcal{G}\).
We now illustrate this approach on an example for each of the three possible combinations of latent and observed variables.
Figure 2: Example L2O transformations for path coefficients (a) from a latent to an observed variable; (b) from an observed to a latent variable; (c) between two latent variables.

**Latent-to-observed arrow:** Consider the arrow \(l_{1}\to\)
\(y_{3}\) in Figure 2a, and let \(\beta\) be the path coefficient of this arrow. To perform the L2O transformation, we start with the model equation involving \(\beta\):
\[y_{3}=\beta l_{1}+\beta_{y_{5}\to y_{3}}y_{5}+\epsilon_{3}\]
We then use Equation 1 to write the latent variable, \(l_{1}\) in terms of its scaling indicator, \(y_{2}\) as: \(l_{1}=y_{2}-\epsilon_{2}+\beta_{y_{1}\to y_{2}}y_{1}\), and replace it in the above equation to obtain:
\[y_{3}=\beta y_{2}-\beta\beta_{y_{1}\to y_{2}}y_{1}+\beta_{y_{5}\to y_{3}}y_{5}- \beta\epsilon_{2}+\epsilon_{3}\]
The transformation has changed the equation for \(y_{3}\), which now regresses on the observed variables \(y_{2}\), \(y_{1}\), and \(y_{5}\), as well as the errors \(\epsilon_{2}\) and \(\epsilon_{3}\). We make the same changes in the graphical structure by adding the arrows \(y_{2}\to y_{3}\), \(y_{1}\to y_{3}\), \(\epsilon_{2}\to y_{3}\), and removing the arrow \(l_{1}\to y_{3}\).
**Observed-to-latent arrow:** Consider the arrow \(y_{1}\to l_{1}\) in Figure 2b with coefficient \(\beta\). For L2O transformation in this case, we apply Equation 1 to replace \(l_{1}\) in the model equation \(l_{1}=\beta y_{1}+\zeta_{1}\) to obtain:
\[y_{4}=\beta y_{1}+\beta_{y_{3}\to y_{4}}y_{3}+\beta_{y_{2}\to y_{4}}y_{2}+ \zeta_{1}+\epsilon_{4}\]
The equivalent transformation to the path diagram consists of adding the arrows \(y_{1}\to y_{4}\), and \(\zeta_{1}\to y_{4}\), and removing the arrows: \(l_{1}\to y_{4}\) and \(y_{1}\to l_{1}\).
**Latent-to-latent arrow:** Consider the arrow \(l_{1}\to l_{2}\) in Figure 2c with coefficient \(\beta\). In this case, we again apply Equation 1 to replace both \(l_{1}\) and \(l_{2}\) in the model equation for \(l_{2}=\beta l_{1}+\zeta_{2}\). This is equivalent to applying two L2O transformations in sequence and leads to the transformed equation:
\[y_{2}=\beta y_{1}-\beta\epsilon_{1}+\zeta_{2}+\epsilon_{2}\]
Equivalently, we now add the arrows \(y_{1}\to y_{2}\), \(\zeta_{2}\to y_{2}\), and \(\epsilon_{1}\to y_{2}\). We also remove the arrows \(l_{2}\to y_{2}\) and \(l_{1}\to l_{2}\).
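To make the rewiring operational, the following networkx sketch performs the graphical L2O transformation for the latent-to-observed case of Figure 2a. The function name, the explicit error-term nodes, and the `error_of` mapping are our own conventions, and the sketch assumes the scaling indicator has no latent parents other than the latent being replaced.

```python
import networkx as nx

def l2o_latent_to_observed(G, latent, y, scaling_ind, error_of):
    """Graphical L2O transform for the arrow latent -> y (Equation 1):
    replace `latent` in the equation of `y` by its scaling indicator.
    `error_of[v]` names the error node of v; a transformed copy is returned."""
    H = G.copy()
    H.remove_edge(latent, y)
    # latent = scaling_ind - eps(scaling_ind) - sum(other parents of scaling_ind)
    H.add_edge(scaling_ind, y)
    H.add_edge(error_of[scaling_ind], y)
    for p in G.predecessors(scaling_ind):
        if p not in (latent, error_of[scaling_ind]):
            H.add_edge(p, y)      # other parents of the scaling indicator
    return H

# Figure 2a example: l1 -> y3, with scaling indicator y2 and y1 -> y2.
G = nx.DiGraph([("l1", "y2"), ("l1", "y3"), ("y1", "y2"), ("y5", "y3"),
                ("eps2", "y2"), ("eps3", "y3")])
error_of = {"y2": "eps2", "y3": "eps3"}
H = l2o_latent_to_observed(G, "l1", "y3", "y2", error_of)
print(sorted(H.predecessors("y3")))   # ['eps2', 'eps3', 'y1', 'y2', 'y5']
```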
## 5 Model-Implied Instrumental Variables are Equivalent to Instrumental Sets
After applying the L2O transformations from the previous sections, we can use either algebraic or graphical criteria to check whether the path coefficients are identifiable. In this section, we introduce the Instrumental set criterion (Brito and Pearl, 2002) and the MIIV approach from Bollen (1996) that precedes it, and show that they are equivalent. Importantly, even though we refer to the MIIV approach as an algebraic criterion to distinguish it from the graphical criterion, it is not a purely algebraic approach and utilizes the graphical structure of the model to infer correlations with error terms.
We will first focus on the instrumental set criterion proposed by Brito and Pearl (2002). We state the criterion below in a slightly rephrased form that is consistent with our notation in Section 2:
**Definition 1** (Instrumental Sets (Brito and Pearl, 2002)).: _Given an ADMG \(\mathcal{G}\), a variable \(y\), and a subset \(X\) of the parents of \(y\), a set of variables \(I\) fulfills the instrumental set condition if for some permutation \(i_{1}\ldots i_{k}\) of \(I\) and some permutation \(x_{1}\ldots x_{k}\) of \(X\) we have:_
1. _There are no treks from_ \(I\) _to_ \(y\) _in the graph_ \(\mathcal{G}_{X}\) _obtained by removing all arrows between_ \(X\) _and_ \(y\)_._
2. _For each_ \(j\)_,_ \(1\leq j\leq k\)_, there is a trek_ \(\pi_{j}\) _from_ \(i_{j}\) _to_ \(x_{j}\) _such that for all_ \(i<j\)_: (1)_ \(i_{i}\) _does not occur on the trek_ \(\pi_{j}\)_; and (2) all intersections between_ \(\pi_{i}\) _and_ \(\pi_{j}\) _are on the left side of_ \(\pi_{i}\) _and the right side of_ \(\pi_{j}\)_._
Its reliance on permutation makes the instrumental set criterion fairly complex; in particular, it is not obvious how an algorithm to find such sets could be implemented, since enumerating all possible permutations and paths is clearly not a practical option. Fortunately, we can rewrite this criterion into a much simpler form that does not rely on permutations and has an obvious algorithmic solution.
**Definition 2** (Permutation-free Instrumental Sets).: _Given an ADMG \(\mathcal{G}\), a variable \(y\) and a subset \(X\) of the parents of \(y\), a set of variables \(I\) fulfills the permutation-free instrumental set condition if: (1) There are no treks from \(I\) to \(y\) in the graph \(\mathcal{G}_{X}\) obtained by removing all arrows leaving \(X\), and (2) All \(t\)-separators \((L,R)\) of \(I\) and \(X\) have size \(\geq k\)._
**Theorem 2**.: _The instrumental set criterion is equivalent to the permutation-free instrumental set criterion._
Proof.: This is shown by adapting a closely related existing result (van der Zander and Liskiewicz, 2016). See Supplement for details.
**Definition 3** (Algebraic Instrumental Sets (Bollen (1996), Bollen (2012))).: _Given a regression equation \(y=B\cdot X+\epsilon\), where \(X\) possibly correlates with \(\epsilon\), a set of variables \(I\) fulfills the algebraic instrumental set condition if: (1) \(I\perp\!\!\!\perp\epsilon\), (2) \(rk(\mathbf{\Sigma}[I,X])=|X|\), and (3) \(rk(\mathbf{\Sigma}[I])=|I|\)_
Having rephrased the instrumental set criterion without relying on permutations, we can now establish a correspondence to the algebraic condition for instrumental variables - which also serves as an alternative correctness proof for Definition 1 itself. The proof of the Theorem is included in the Supplementary Material.
**Theorem 3**.: _Given an SEM \((\mathbf{B},\mathbf{\Phi})\) with path diagram \(\mathcal{G}=(V,A)\) and a variable \(y\in V\), let \(X\) be a subset of the parents of \(y\) in \(\mathcal{G}\). Then a set of variables \(I\subseteq V\) fulfills the algebraic instrumental set condition with respect to the equation_
\[y=B\cdot X+\epsilon;\text{ where }\epsilon=\sum_{p\in\mathrm{Pa}_{\mathcal{G}}(y)\setminus X}\beta_{p\to y}\,p+\epsilon_{y}\]
_if and only if \(I\) fulfills the instrumental set condition with respect to \(X\) and \(y\) in \(\mathcal{G}\)._
In the R package MIIVsem (Fisher et al., 2019) implementation of MIIV, all parameters in an equation of an SEM are simultaneously identified by (1) applying an L2O transformation to all the latent variables in this equation; (2) identifying the composite error term of the resulting equation; and (3) applying the algebraic instrumental set criterion based on the model matrices initialized with arbitrary parameter values and derived total effect and covariance matrices; see Bollen and Bauer (2004) for details. Theorem 3 implies that the MIIVsem approach is generally equivalent to first applying the graphical L2O transform followed by the instrumental set criterion (Definition 1) using the set of all observed parents of the dependent variable in the equation as \(X\).
## 6 Examples
Having shown that the algebraic instrumental set criterion is equivalent to the graphical instrumental set criterion, we now show some examples of identification using the proposed graphical approach and compare it to the MIIV approach implemented in MIIVsem 4. First, we show an example of a full equation identification where we identify all parameters of an equation altogether. Second, we show an example of partial L2O transformation (as shown in Section 3.3) that allows us to estimate a subset of the parameters of the equation. Third, we show an example where the instrumental set criterion fails to identify any parameters, but the conditional instrumental set criterion (Brito and Pearl, 2002) can still identify some parameters. Finally, we show an example where the parameters are inestimable even though the equation is identified.
Footnote 4: In some examples, a manual implementation of the MIIV approach can permit estimation of models that are not covered by the implementation in MIIVsem
### Identifying Whole Equations
In this section, we show an example of identifying a whole equation using the graphical criterion. Let us consider an SEM adapted from Shen and Takeuchi (2001), as shown in Figure 2(a). We are interested in estimating the equation \(y_{3}\sim l_{1}+l_{2}\), i.e., parameters \(\lambda_{13}\) and \(\lambda_{23}\). Doing a graphical L2O transformation for both these parameters together adds the edges \(y_{2}\to y_{3}\), \(y_{4}\to y_{3}\), \(\epsilon_{2}\to y_{3}\), and \(\epsilon_{4}\to y_{3}\), and removes the edges \(l_{1}\to y_{3}\), and \(l_{2}\to y_{3}\), resulting in the model shown in Figure 2(b). Now, for estimating \(\lambda_{13}\) and \(\lambda_{23}\) we can use the regression equation \(y_{3}\sim y_{2}+y_{4}\), with \(y_{1}\) and \(y_{5}\) as the IVs. As \(y_{1}\) and \(y_{5}\) satisfy Definition 2, both the parameters are identified. Both of these parameters are also identifiable using MIIVsem.
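To illustrate how such estimates could then be computed, the following numpy sketch simulates data from a model with this structure (all numeric values are invented for illustration) and recovers \(\lambda_{13}\) and \(\lambda_{23}\) by two-stage least squares applied to the transformed equation \(y_{3}\sim y_{2}+y_{4}\) with \(y_{1}\) and \(y_{5}\) as instruments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam13, lam23 = 0.8, 0.6            # true coefficients of interest (made up)

# Correlated latents l1, l2 and their indicators (loadings made up;
# y2 and y4 are the scaling indicators, with loadings fixed to 1).
l1 = rng.normal(size=n)
l2 = 0.5 * l1 + rng.normal(size=n)
y1 = 0.9 * l1 + rng.normal(scale=0.5, size=n)
y2 = l1 + rng.normal(scale=0.5, size=n)
y4 = l2 + rng.normal(scale=0.5, size=n)
y5 = 0.7 * l2 + rng.normal(scale=0.5, size=n)
y3 = lam13 * l1 + lam23 * l2 + rng.normal(scale=0.5, size=n)

# Two-stage least squares for y3 ~ y2 + y4 with instruments y1, y5.
X = np.column_stack([np.ones(n), y2, y4])          # endogenous regressors
Z = np.column_stack([np.ones(n), y1, y5])          # instruments
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
beta = np.linalg.lstsq(X_hat, y3, rcond=None)[0]   # second stage
print(beta[1:])   # approximately [lam13, lam23]
```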
### Identifying Partial Equations
For this section, we consider a slightly modified version of the model in the previous section. We have added a correlation between \(\epsilon_{1}\) and \(\epsilon_{2}\), and have allowed the latent variables, \(l_{1}\) and \(l_{2}\) to be uncorrelated, as shown in Figure 3(a). The equation \(y_{3}\sim l_{1}+l_{2}\) is not identified in this case, as \(y_{5}\) is the only available IV (Figure 3(b)). However, using the partial graphical transformation for \(l_{2}\) while treating \(l_{1}\) as an error term (Figure 3(c)), the parameter \(\lambda_{23}\) can be identified by using \(y_{5}\) as the IV. As the R package MIIVsem always tries to identify full equations, it is not able to identify either of the parameters in this case - although this would be easily doable when applying the MIIV approach manually.
### Identification Based on Conditional IVs
So far, we have only considered the instrumental set criterion, but many other identification criteria have
Figure 3: (a) Example model following the structure of Figure 1 with explicit error terms. (b) L2O transformation for the model in (a) for identifying both coefficients of the equation for \(y_{3}\) simultaneously. We end up with the regression equation \(y_{3}\sim y_{2}+y_{4}\) and can identify both coefficients using \(y_{1}\) and \(y_{5}\) as instrumental variables.
been proposed for DAGs. For example, we can generalize the instrumental set criteria to hold _conditionally_ on some set of observed variables (Brito and Pearl, 2002). There can be cases when conditioning on certain variables allows us to use conditional IVs. This scenario might not occur when we have a standard latent and measurement level of variables, but might arise in specific cases; for example, when there are exogenous covariates that can be measured without error (such as the year in longitudinal studies), or interventional variables in experimental settings (such as complete factorial designs) which are uncorrelated, and observed exogenous by definition. Figure 4(a) shows a hypothetical example in which the latent variables \(l_{1}\) and \(l_{2}\) are only correlated through a common cause \(y_{6}\), which could, for instance, represent an experimental intervention. Similar to the previous example, a full identification for \(y_{3}\sim l_{1}+l_{2}+y_{6}\) still does not work. Further, because of the added correlation between \(l_{1}\) and \(l_{2}\), partial identification is not possible either. The added correlation between \(l_{1}\) and \(l_{2}\) opens a path from \(y_{5}\) to \(y_{3}\), resulting in \(y_{5}\) no longer being an IV for \(y_{3}\sim y_{4}\). However, the conditional instrumental set criterion (Brito and Pearl, 2002) can be used here to show that the parameter \(\lambda_{23}\) is identifiable by conditioning on \(y_{6}\) in both stages of the IV regression. In graphical terms, we say that conditioning on \(y_{6}\)\(d\)-separates the path between \(l_{1}\) and \(l_{2}\) (Figure 4(b)), which means that we end up in a similar situation as in Figure 3(c). We can therefore use \(y_{5}\) as an IV for the equation \(y_{3}\sim y_{4}\) once we condition on \(y_{6}\). As the MIIV approach does not consider conditional IVs, it is not able to identify either of the parameters.
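To make the conditional check in this example concrete, the following Python sketch rebuilds the transformed graph as a DAG and tests the relevant d-separation statements with NetworkX. The node names follow the text, but the graph structure is our own reading of the example rather than the authors' code: the \(\epsilon_{1}\)–\(\epsilon_{2}\) correlation is represented by a hypothetical latent common cause `u12`, which does not affect the paths checked here.

```python
import networkx as nx

# Hypothetical reconstruction of the partially L2O-transformed model for lambda_23:
# l2 -> y3 has been replaced by y4 -> y3 and e4 -> y3; y6 is the observed common cause.
G = nx.DiGraph()
G.add_edges_from([
    ("y6", "l1"), ("y6", "l2"), ("y6", "y3"),
    ("l1", "y1"), ("l1", "y2"), ("l1", "y3"),
    ("l2", "y4"), ("l2", "y5"),
    ("y4", "y3"), ("e4", "y3"),          # edges added by the partial L2O transform
    ("e1", "y1"), ("e2", "y2"), ("e3", "y3"),
    ("e4", "y4"), ("e5", "y5"),
    ("u12", "e1"), ("u12", "e2"),        # latent stand-in for the e1-e2 correlation
])

# Exclusion check for y5 as an IV of y4 -> y3: remove the edge of interest and test
# d-separation (the function is called nx.is_d_separator in newer NetworkX releases).
G_removed = G.copy()
G_removed.remove_edge("y4", "y3")
print(nx.d_separated(G_removed, {"y5"}, {"y3"}, set()))    # False: open paths through y6
print(nx.d_separated(G_removed, {"y5"}, {"y3"}, {"y6"}))   # True: conditioning on y6 blocks them
```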
### Inestimable Parameters in Identified Equations
In the previous examples, the L2O transformation creates a new edge in the model between two observed variables that has the same path coefficient that we are interested in estimating. But if the L2O transformation adds a new edge where one already exists, the new path coefficient becomes the sum of the existing coefficient and our coefficient of interest. In such cases, certain parameters can be inestimable even if the transformed equation is identified according to the identification criteria.
In Figure 5(a), we have taken a model about the economic effects of schooling from Griliches (1977). All parameters in the equation of \(y_{4}\) are identifiable by using \(y_{1}\) and \(y_{2}\) as the IVs. However, we get an interesting case if we add two new edges \(y_{1}\to y_{3}\) and \(y_{2}\to y_{4}\) (Figure 5(b)): The L2O transformation for the equation of \(y_{4}\) adds the edges \(e_{3}\to y_{4}\) and \(y_{3}\to y_{4}\), as shown in Figure 5(c). But since the original model already has the edge \(y_{3}\to y_{4}\), the new coefficient for this edge becomes \(\lambda_{14}+\lambda_{34}\). The regression equation for \(y_{4}\) is still: \(y_{4}\sim y_{3}+y_{2}\), and it is identified according to the instrumental set criterion as \(y_{2}\) and \(y_{1}\) are the IVs for the equation. But if we estimate the parameters, we will obtain values for \(\lambda_{24}\) and \(\lambda_{14}+\lambda_{34}\). Therefore, \(\lambda_{24}\) remains identifiable in this more general case, but \(\lambda_{14}\) and \(\lambda_{34}\) are individually not identified. The graphical L2O approach allows us to easily visualize such cases
Figure 4: (a) Adapted SEM from Shen and Takeuchi (2001); modified by making \(l_{1}\) and \(l_{2}\) uncorrelated and \(e_{1}\) and \(e_{2}\) correlated. (b) Transformed model for estimating \(y_{3}\sim l_{1}+l_{2}\). The equation is not identified as \(y_{5}\) is the only IV. (c) With partial L2O transformation, \(\lambda_{23}\) can be estimated using \(y_{5}\) as the IV.
Figure 5: (a) Modified version of the Figure 3(a) model, where \(l_{1}\) and \(l_{2}\) share an observed cause \(y_{6}\). \(\lambda_{13}\) and \(\lambda_{23}\) are still not simultaneously identified as no IVs are available. (b) Even with partial transformation, \(\lambda_{23}\) is no longer identified as \(y_{5}\) is not an IV because of the open paths \(y_{5}\gets l_{2}\gets y_{6}\to y_{3}\) and \(y_{5}\gets l_{2}\gets y_{6}\to l_{1}\to y_{3}\). However, using the conditional instrumental set criterion, we can identify \(\lambda_{23}\) by using \(y_{5}\) as a _conditional IV_ for the equation \(y_{3}\sim y_{4}\), as conditioning on \(y_{6}\) blocks the open paths.
after transformation.
## 7 Discussion
In this paper, we showed the latent-to-observed (L2O) transformation on the RAM notation and how to use it for partial equation identification. We then gave an equivalent graphical L2O transformation which allowed us to apply graphical identification criteria developed in the DAG literature to latent variable parameters in SEMs. Combining this graphical L2O transformation with the graphical criteria for parameter identification, we arrived at a generic approach for parameter identification in SEMs. Specifically, we showed that the instrumental set criterion combined with the graphical L2O transformation is equivalent to the MIIV approach. Therefore, the graphical transformation can be used as an explicit visualization of the L2O transformation or as an alternative way to implement the MIIV approach in computer programs. To illustrate this, we have implemented the MIIV approach in the graphical-based R package _dagitty_(Textor et al., 2017) and the Python package _pgmpy_(Ankan and Panda, 2015).
Our equivalence proof allows users to combine results from two largely disconnected lines of work. By combining the graphical L2O transform with other identification criteria, we obtain novel identification strategies for LVSEMs, as we have illustrated using the conditional instrumental set criterion. Other promising candidates would be auxiliary variables (Chen et al., 2017) and instrumental cutsets (Kumor et al., 2019). Conversely, the SEM literature is more developed than the graphical literature when it comes to non-Gaussian models. For example, MIIV with two-stages least squares estimation is asymptotically distribution-free (Bollen, 1996), and our results imply that normality is not required for applying the instrumental set criterion.
|
2304.11961 | Towards Mode Balancing of Generative Models via Diversity Weights | Large data-driven image models are extensively used to support creative and
artistic work. Under the currently predominant distribution-fitting paradigm, a
dataset is treated as ground truth to be approximated as closely as possible.
Yet, many creative applications demand a diverse range of output, and creators
often strive to actively diverge from a given data distribution. We argue that
an adjustment of modelling objectives, from pure mode coverage towards mode
balancing, is necessary to accommodate the goal of higher output diversity. We
present diversity weights, a training scheme that increases a model's output
diversity by balancing the modes in the training dataset. First experiments in
a controlled setting demonstrate the potential of our method. We discuss
connections of our approach to diversity, equity, and inclusion in generative
machine learning more generally, and computational creativity specifically. An
implementation of our algorithm is available at
https://github.com/sebastianberns/diversity-weights | Sebastian Berns, Simon Colton, Christian Guckelsberger | 2023-04-24T09:55:17Z | http://arxiv.org/abs/2304.11961v3 | # Towards Mode Balancing of Generative Models via Diversity Weights
###### Abstract
Large data-driven image models are extensively used to support creative and artistic work. Under the currently predominant distribution-fitting paradigm, a dataset is treated as ground truth to be approximated as closely as possible. Yet, many creative applications demand a diverse range of output, and creators often strive to actively diverge from a given data distribution. We argue that an adjustment of modelling objectives, from pure mode coverage towards mode balancing, is necessary to accommodate the goal of higher output diversity. We present _diversity weights_, a training scheme that increases a model's output diversity by balancing the modes in the training dataset. First experiments in a controlled setting demonstrate the potential of our method. We discuss connections of our approach to diversity, equity, and inclusion in generative machine learning more generally, and CC specifically. An implementation of our algorithm is available at [https://github.com/sebastianberns/diversity-weights](https://github.com/sebastianberns/diversity-weights)
## Introduction
Large image generation models (LIGMs), in particular as part of text-to-image generation systems [12, 13], have been widely adopted by visual artists to support their creative work in art production, ideation, and visualisation [15, 16]. While providing vast possibility spaces, LIGMs, trained on huge image datasets scraped from the internet, not only adopt but often exacerbate data biases, as observed in word embedding and captioning models [1, 17, 18]. The tendency to emphasise majority features and to primarily reproduce the predominant types of data examples can be limiting for many computational creativity (CC) applications that use machine learning-based generators [10]. Learned models are often used to illuminate a possibility space and to produce artefacts for further design iterations. Examples range from artistic creativity, like the production of video game assets [12, 13], over constrained creativity, e.g. industrial design and architecture [1], to scientific creativity, such as drug discovery [1]. Many of these and similar applications would benefit from higher diversity in model output. Given that novelty, which underlies diversity, is considered one of the essential aspects of creativity [1, 15], we expect that, vice versa, a stronger focus on diversity can also foster creativity (cf. Stanley and Lehman, 2015).
Most common modelling techniques, however, follow a distribution-fitting paradigm and do not accommodate the goal of higher diversity. Within this paradigm, one of the primary generative modelling objectives is _mode coverage_[14], i.e. the capability of a model to generate all prominent types of examples present in a dataset. While such a model can in principle produce many types of artefacts, it does not do so reliably or evenly. A model's probability mass is assigned in accordance to the prevalence of a type of example or feature in a dataset. Common examples or features have higher likelihood under the model than rare ones. As a consequence, samples with minority features are not only less likely to be obtained by randomly sampling a model, they are also of lower fidelity, e.g. in terms of image quality. Related studies on Transformer-based language models [1, 13] have identified a "superlinear" relationship: while training examples with multiple duplicates are generated "dramatically more frequently", examples that only appear once in the dataset are rarely reproduced by the model.
In this work, we argue for an adjustment of modelling techniques from mode coverage to _mode balancing_ to enrich CC with higher output diversity. Our approach allows to train models that cover all types of training examples and can generate them with even probability and fidelity. We present a two-step training scheme designed to reliably increase output diversity. Our technical contributions are:
* _Diversity weights_, a training scheme to increase a generative model's output diversity by taking into account the relative contribution of individual training examples to overall diversity.
* _Weighted Frechet Inception Distance (wFID)_, an adaptation of the FID measure to estimate the distance between a model distribution and a target distribution modified by weights over individual training examples.
* A proof-of-concept study, demonstrating the capacity of our method to increase diversity, examining the trade-off between artefact typicality and diversity.
In the following sections, we first introduce the objective of _mode balancing_ and highlight its importance for CC based on existing frameworks and theories. Then, we provide background information on the techniques relevant for our work. Next, we present our _diversity weights_ method in detail, as well as our formulation of _Weighted FID_. Following this, we present the setup and methodology of our study and evaluate its results. In the discussion section, we contribute to the debate on issues of diversity, equity, and inclusion (DEI) in generative machine learning more generally, and CC specifically, by explaining how our method could be beneficial in addressing data imbalance bias. This is followed by an overview of related work, our conclusions and an outlook on future work.
## Mode Balancing
Generative deep learning models now form an integral part of CC systems [1]. Much of the work on such models is concerned with _mode coverage_: matching a data distribution as closely as possible by accurately modelling all types of examples in a dataset (fig. 1). In the specific case of generative adversarial networks (GANs), great effort is put into preventing _mode collapse_, a training failure state in which a model disregards important modes and is only able to produce a few types of training examples. Mode coverage is captured formally in common evaluation measures such as Frechet Inception Distance (FID) and Precision-Recall (PR). Crucially, this is always done in reference to the training set statistics or data manifold. In this context, the term diversity is often, arguably mistakenly, used to refer to mode coverage. While mode coverage describes the fraction of modes in a dataset that are represented by a model, the diversity of a model's output, if understood more generally and intuitively, can theoretically be higher than that of the dataset.
Mode coverage is conceptually similar to the notion of _typicality_[15]. Defined as the extent to which a produced output is "an example of the artefact class in question", a model which only generates outputs with high typicality, if sampled at random, has to provide most support to those training set examples with the highest density of features characteristic of that artefact, i.e. to maximise mode coverage. Crucially, sampling from the model would resemble going along the most well-trodden paths in the possibility space defined by the dataset and, as Ritchie already suggests, counteract novelty as a core component of creativity [1, 13].
Crucially, mode balancing breaks with the convention of viewing the dataset as 'ground truth'. Instead, we consider the dataset to provide useful domain information and the characteristics of _typical_ examples [15]. But a data distribution does not have to be matched exactly. Particularly in artistic applications, creators often strive to _actively diverge_ from the typical examples in a dataset [1, 1]. To stay with our metaphor, borrowed from Veale, Cardoso, and Perez y Perez (2019), _mode balancing_ allows us to walk more along the less trodden paths and thus especially support exploratory and transformational creativity [1, 1]. In contrast to the mode coverage paradigm, in mode balancing, diversity is measured independently of the training data distribution. In the theoretical case of a balanced dataset of absolutely dissimilar examples, i.e. multiple equally likely modes, our method would assign uniform weights to all examples and thus be identical to standard training schemes with random sampling.
## Background
### Probability-Weighted Vendi Score
We adopt the Vendi Score (VS) as a measure of dataset diversity and employ its probability-weighted formulation in our work [15]. Given a set of artefacts \(x_{1},\dots,x_{n}\), the probability-weighted VS is based on a probability vector \(\mathbf{p}=(p_{1},\dots,p_{n})\) and a similarity matrix \(\mathbf{K}\in\mathbb{R}^{\,n\times n}\) between pairs of artefacts such that \(\mathbf{K}_{ii}=1\). Calculating the VS involves several steps. First, the probability-weighted similarity matrix is defined as \(\mathbf{K}^{\mathbf{p}}=\text{diag}(\sqrt{\mathbf{p}})\,\mathbf{K}\,\text{diag}(\sqrt{\mathbf{p}})\). Its eigenvalues \(\lambda_{1},\dots,\lambda_{n}\) can be obtained via the eigendecomposition \(\mathbf{K}^{\mathbf{p}}=\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^{-1}\), where \(\boldsymbol{\lambda}=\text{diag}(\boldsymbol{\Lambda})\). The probability-weighted Vendi Score (VS) is the exponential of the Shannon entropy of the eigenvalues of the probability-weighted similarity matrix:
\[\text{VS}(\mathbf{K},\mathbf{p})=\exp\Big{(}-\sum^{n}\lambda_{i}\,\log\lambda _{i}\Big{)} \tag{1}\]
Also known as _perplexity_, exponential entropy can be used to measure how well a probability model predicts a sample. Low perplexity indicates good prediction performance. Consequently, the more diverse a sample, the more difficult its prediction, the higher the perplexity and its VS.
#### Illustrative Example
The probability vector \(p\) represents the relative abundances of individual artefacts. Instead of repeating identical artefacts in a set, their prevalence can be expressed with higher probability. For illustration, we present an example of four artefacts, of which three are absolutely similar to each other and one is absolutely dissimilar to all others. All have equal probability.
\[\mathbf{K}^{a}=\left(\begin{smallmatrix}1&0&0&0\\ 0&1&1&1\\ 0&1&1&1\\ 0&1&1&1\end{smallmatrix}\right),\quad\mathbf{p}^{a}=\left(\begin{smallmatrix}0.25\\ 0.25\\ 0.25\\ 0.25\end{smallmatrix}\right) \tag{2}\]
The same information can be reduced to two absolutely dissimilar artefacts and the corresponding probabilities \(\mathbf{p}^{b}\).
\[\mathbf{K}^{b}=(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}),\quad\mathbf{p}^{b}=(\begin{smallmatrix}0.25\\ 0.75\end{smallmatrix}) \tag{3}\]
Figure 1: _Mode collapse_: the model does not cover all modes in the data distribution. _Mode coverage_: the data distribution’s modes are modelled as closely as possible w.r.t. their likelihood. _Mode balancing_: the model covers all modes, but with equal likelihood.
Both representations yield the same VS, which reflects the imbalanced set of two absolutely dissimilar artefacts. \(\text{VS}(\mathbf{K}^{a},\mathbf{p}^{a})=\text{VS}(\mathbf{K}^{b},\mathbf{p}^{b})= 1.755\dots\)
The imbalance of our example set negatively affects its diversity. If all items in the set are given equal importance, one artefact is under-represented. Instead, each of the two absolutely dissimilar artefacts in the set should thus be assigned equal weight \(p=0.5\). In the case of repetitions, this weight has to be divided across the repeated artefacts.
\[\mathbf{K}^{c}=\left(\begin{smallmatrix}1&0&0&0\\ 0&1&1&1\\ 0&1&1&1\\ 0&1&1&1\end{smallmatrix}\right),\quad\mathbf{p}^{c}=\left(\begin{smallmatrix} 0.5\\ 0.166\dots\\ 0.166\dots\\ 0.166\dots\end{smallmatrix}\right) \tag{4}\]
This maximises VS to reflect the effective number of absolutely dissimilar artefacts \(\text{VS}(\mathbf{K}^{c},\mathbf{p}^{c})=2\).
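As a quick sanity check of these numbers, the probability-weighted VS of equation (1) can be computed directly with NumPy. The snippet below is a minimal sketch (not the authors' code); it reproduces the values 1.755 and 2 for the uniform and diversity-maximising weightings of the four-artefact example (note that \(\mathbf{K}^{c}\) is the same matrix as \(\mathbf{K}^{a}\)).

```python
import numpy as np

def vendi_score(K, p, eps=1e-12):
    # Probability-weighted similarity matrix and the entropy of its eigenvalues.
    Kp = np.sqrt(p)[:, None] * K * np.sqrt(p)[None, :]
    lam = np.clip(np.linalg.eigvalsh(Kp), eps, None)
    return float(np.exp(-(lam * np.log(lam)).sum()))

K = np.array([[1, 0, 0, 0],
              [0, 1, 1, 1],
              [0, 1, 1, 1],
              [0, 1, 1, 1]], dtype=float)
p_uniform  = np.full(4, 0.25)
p_balanced = np.array([0.5, 1/6, 1/6, 1/6])

print(vendi_score(K, p_uniform))    # ~1.755: imbalanced set of two distinct artefacts
print(vendi_score(K, p_balanced))   # ~2.0:  diversity-maximising weights
```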
### Importance Sampling
Conventionally, training examples are drawn from a dataset with uniform probability. In importance sampling, instead, examples are chosen according to their contribution to an unknown target distribution. In our case, the importance of training examples is determined by their individual contribution to the overall dataset diversity as quantified by the optimised probability distribution \(p\) (see example above). We aim to increase the output diversity of a model. For this, we replace the basic sampling operation by a diversity-weighted importance sampling scheme.
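In a PyTorch training loop, this replacement amounts to swapping the data loader's uniform sampler for a weighted one. The sketch below assumes the optimised per-example probabilities are available as a tensor `p`, and `dataset`/`batch_size` are placeholders; it only illustrates the idea.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

# p: (n,) tensor of per-example probabilities from the diversity-weight optimisation.
sampler = WeightedRandomSampler(weights=p, num_samples=len(dataset), replacement=True)
loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```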
### Model Evaluation
To assess model performance, we use some common measures for generative models, as well as measures specifically relevant to our method. Inception Score (IS) [11], Frechet Inception Distance (FID) [10], and Precision-Recall (k-NN parameter \(k=3\)) [10] quantify sample fidelity and mode coverage w.r.t. the unbiased training data distribution. We employ our Weighted Frechet Inception Distance (wFID) to account for the change in target distribution, induced by our method through diversity-weighted sampling (see below for details). Diversity is estimated with the Vendi Score (VS) [10].
Note that we follow the recommendations by Barratt and Sharma (2018) and calculate IS over the entire generated set of samples, removing the common split into subsets. We also remove the exponential, such that the score becomes interpretable in terms of mutual information. While not all reported scores are directly comparable to other works, our measurements are internally consistent and reliable.
**Image Embeddings.** Instead of comparing image data on raw pixels, standard evaluation measures of model performance rely on image classification networks used as embedding models for feature extraction. The InceptionV3 model [23] is most commonly used as a representative feature space and has been widely adopted as part of a standard measurement pipeline. Unfortunately, small numerical differences in model weights, implementations and interpolation operations can compound into larger discrepancies. For example, image scaling to match the input size of an embedding model can change the computed features and thus affect the subsequent measurements [13]. Furthermore, embedding models trained on the ImageNet dataset, like InceptionV3, inherit the dataset's biases, which can lead to unreliable measurements that do not agree with human assessment [10]. In this work, we therefore follow the recommendations for anti-aliasing re-scaling and use CLIP ViT-L/14 [12] as the image embedding model in our feature extraction and measurement pipelines (except for IS). Note that, while trained on a much larger (proprietary) dataset and better suited as embedding model, CLIP still has its own biases.
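For reference, extracting CLIP ViT-L/14 features and building the similarity (Gram) matrix used later can be done roughly as follows with OpenAI's `clip` package. The variable `image_paths` is a placeholder, and the anti-aliased re-scaling recommended above is not reproduced here, so this is a sketch of the pipeline rather than the exact measurement code.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

@torch.no_grad()
def embed(paths, batch=64):
    feats = []
    for i in range(0, len(paths), batch):
        ims = torch.stack([preprocess(Image.open(p).convert("RGB"))
                           for p in paths[i:i + batch]]).to(device)
        feats.append(model.encode_image(ims).float())
    X = torch.cat(feats)
    return torch.nn.functional.normalize(X, dim=-1)   # unit-norm rows so that K_ii = 1

X = embed(image_paths)          # (n, 768) for ViT-L/14
K = (X @ X.T).cpu().numpy()     # pairwise cosine similarities (Gram matrix)
```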
## Diversity Weights
If artefacts in a set are repeated, i.e. their relative abundance is increased, their individual contribution to the overall diversity of the set decreases. Yet, with uniform weighting, all artefacts contribute to the model distribution equally (cf. eq. 2). Instead, we aim to adjust the weight of individual artefacts in a set in accordance with their contribution to overall diversity.
We formulate an optimisation problem to find the optimal weight for each artefact in a set, such that its diversity, as measured by VS, is maximised.
\[\max\,\exp\Big{(}-\sum^{n}\lambda_{i}\log\lambda_{i}\Big{)} \tag{5}\] \[\text{s.t.}\,\,\sum^{n}p_{i}=1\qquad 0\leq p_{i}\leq 1\] \[\text{where}\,\,\,\mathbf{p}=(p_{1},\dots,p_{n}),\,\,\,p_{i}\in[0,1]\] \[\mathbf{K}\in\mathbb{R}^{n\times n},\,\,\,\mathbf{K}_{ii}=1\] \[\mathbf{K}^{\mathbf{p}}=\text{diag}(\sqrt{\mathbf{p}})\,\mathbf{K}\,\text{diag}(\sqrt{\mathbf{p}})=\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^{-1}\] \[\boldsymbol{\lambda}=\text{diag}(\boldsymbol{\Lambda})=(\lambda_{1},\dots,\lambda_{n})\]
### Optimisation Algorithm1
Footnote 1: An implementation of the optimisation algorithm is available at [https://github.com/sebastianbers/diversity-weights](https://github.com/sebastianbers/diversity-weights)
We compute an approximate solution to the optimisation problem via gradient descent (algorithm 1). The objective function consists of two terms: diversity loss and entropy loss. The diversity loss is defined as the negative probability-weighted VS of the set of artefacts, given its similarity matrix and the corresponding probability vector (cf. eq. 1). To ensure the optimised artefact probability distribution follows the Kolmogorov (1933) axioms, we make the following adjustments. Instead of optimising the artefact probabilities directly, we optimise a weight vector \(\mathbf{w}\). The probability vector \(\mathbf{p}\) is obtained by dividing the \(\mathbf{w}\) by the sum of its values, which guarantees the second axiom. To satisfy the first axiom, we implement a fully differentiable version of VS in log space. Optimising in log space enforces weights above zero, since the logarithm \(\log x\) is only defined for \(x>0\) and tends to negative infinity as \(x\) approaches zero. However, if the weights have no upper limit, values can grow
unbounded. A heavy-tailed weight distribution negatively affects the importance sampling step of our method during training, as batches can become saturated with the highest-weighted training examples, causing overfitting. We therefore add an entropy loss term \(\operatorname{H}(\mathbf{p})=-\sum p_{i}\log(p_{i})\) to be maximised in conjunction with the diversity loss. The entropy loss acts as a regularisation term over the weight vector, such that its distribution is kept as close to uniform as possible. The emphasis on the two loss terms is balanced by the hyperparameter \(\gamma\in[0,1]\).
\[\mathcal{L}=-\gamma\mathrm{VS}(\mathbf{K},\mathbf{p})-(1-\gamma)\operatorname{H}(\mathbf{p})\text{, }\quad\mathbf{p}=\frac{\mathbf{w}}{||\mathbf{w}||_{1}} \tag{6}\]
Given a normalised data matrix \(X\) where rows are examples and columns are features, we obtain the similarity matrix \(\mathbf{K}\) by computing the Gram matrix \(K=X\cdot X^{\mathsf{T}}\). The weight vector \(\mathbf{w}\) is initialised with uniform weights \(w_{i}=\log(1)=0\). The probability vector \(\mathbf{p}\) is obtained by dividing the weight vector \(\mathbf{w}\) by the sum of its values. We choose the Adam optimiser (Kingma and Ba, 2015) with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). The learning rate decays exponentially every \(5\) iterations by a factor of \(0.99\).
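A minimal PyTorch sketch of this optimisation loop is shown below. It follows the description above (log-space weights initialised at zero, Adam with the stated betas, learning-rate decay every 5 iterations), but it is our own illustration rather than the released implementation: the learning rate is a placeholder, the log-space tricks are only approximated by parameterising the weights as exponentials of unconstrained logits, and gradients through the eigendecomposition can be numerically delicate when eigenvalues coincide.

```python
import torch

def probability_weighted_vs(K, p, eps=1e-12):
    # K: (n, n) similarity matrix with K_ii = 1; p: (n,) probabilities summing to 1.
    Kp = torch.sqrt(p)[:, None] * K * torch.sqrt(p)[None, :]
    lam = torch.linalg.eigvalsh(Kp).clamp_min(eps)       # eigenvalues of the symmetric matrix
    return torch.exp(-(lam * torch.log(lam)).sum())

def optimise_diversity_weights(K, gamma=0.8, iters=100, lr=0.1):
    n = K.shape[0]
    u = torch.zeros(n, requires_grad=True)               # log-weights, log(1) = 0
    opt = torch.optim.Adam([u], lr=lr, betas=(0.9, 0.999))
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.99)
    for _ in range(iters):
        w = torch.exp(u)                                  # strictly positive weights
        p = w / w.sum()                                   # normalised probabilities
        vs = probability_weighted_vs(K, p)
        H = -(p * torch.log(p)).sum()
        loss = -gamma * vs - (1 - gamma) * H              # maximise diversity and entropy jointly
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()
    with torch.no_grad():
        w = torch.exp(u)
        return (w / w.sum()).detach()
```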
### Weighted FID
The performance of generative models, in particular that of implicit models like GANs, is conventionally evaluated with the FID (Heusel et al., 2017). Raw pixel images are embedded into a representation space, typically of an artificial neural network. Assuming multi-variate normality of the embeddings, FID then estimates the distance between the model distribution and the data distributions from their sample means and covariance matrices.
In our proposed method, however, the learned distribution is modelled on a weighted version of the dataset. Moreover, referring to the standard statistics of the original dataset is no longer applicable, as the weighted sampling scheme changes the target distribution. We therefore adjust the measure such that it becomes the Weighted Frechet Inception Distance (wFID), where the standard mean and covariances to calculate the dataset statistics are substituted by the weighted mean \(\mu^{*}=\big{(}\sum w_{i}\mathbf{x}_{i}\big{)}/\sum w_{i}\) and the weighted sample covariance \(\mathbf{C}=\big{(}\sum w_{i}(\mathbf{x}_{i}-\mu^{*})^{\mathsf{T}}(\mathbf{x}_ {i}-\mu^{*})\big{)}/\sum w_{i}\). Note that the statistics of the model distribution need to be calculated without weights as the model should have learned the diversity-weighted target distribution.
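A NumPy sketch of wFID under these definitions might look as follows. The Frechet (2-Wasserstein) distance between the two Gaussians is computed in the usual way; the function names and the feature arrays (`real_feats`, `real_weights`, `fake_feats`) are placeholders rather than the paper's measurement code.

```python
import numpy as np
from scipy.linalg import sqrtm

def weighted_stats(X, w):
    # X: (n, d) embeddings; w: (n,) non-negative weights.
    w = w / w.sum()
    mu = w @ X                                   # weighted mean
    Xc = X - mu
    C = (w[:, None] * Xc).T @ Xc                 # weighted sample covariance
    return mu, C

def frechet_distance(mu1, C1, mu2, C2):
    covmean = sqrtm(C1 @ C2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real                   # discard tiny imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(C1 + C2 - 2 * covmean))

def wfid(real_feats, real_weights, fake_feats):
    mu_r, C_r = weighted_stats(real_feats, real_weights)   # weighted dataset statistics
    mu_f = fake_feats.mean(axis=0)                          # unweighted model statistics
    C_f = np.cov(fake_feats, rowvar=False)
    return frechet_distance(mu_r, C_r, mu_f, C_f)
```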
## Proof-Of-Concept Study on Hand-Written Digits
We show the effect of the proposed method in an illustrative study on pairs of handwritten digits. While not a particularly challenging domain, digit pairs have several benefits over other exemplary datasets. First, the pairings of digits create a controlled setting with two known types of artefacts. Second, hand-written digits present a simple modelling task, in which the quality and diversity of a model's output is easy to visually assess. And third, generating digits is fairly uncontroversial. While, for example, generating human faces is more relevant for the subject of diversity, it is also a highly complex and potentially emotive domain.
### Methodology
For individual pairs of digits, we quantitatively and qualitatively evaluate the results of GAN training with diversity weights and compare it against standard training. Experiments are repeated five times with different random seeds.
**Digit Pairs.** From the ten classes of the MNIST training set, we select three digit pairs: 0-1, 3-8, and 4-9, which represent examples of similar and dissimilar pairings. For example, images of hand-written zeros and ones are easy to distinguish, as they are either written as circles or straight lines. In contrast, threes and eights are both composed of similar circular elements.
**Balanced Datasets.** For each pair of digits, we create five balanced datasets (with different random seeds) of 6,000 samples each. Each dataset consists of 3,000 samples of either digit, randomly selected from the MNIST training set. We compute features by embedding all images using the CLIP ViT-L/14 model. To optimise the corresponding diversity weights, we obtain pairwise similarities between images by calculating the Gram matrix of features.
**Diversity Weights.** For each dataset (5 random draws per digit pair), we optimise the diversity weights for 100 iterations. We fine-tune the loss term balance hyperparameter and determine its optimal value \(\gamma=0.8\), where the weights converge to a stable distribution while reaching a VS as close to the maximum as possible. Without the entropy loss term (\(\gamma=1.0\)), the weights yield the highest VS, but reach both very high and very low values. Large differences in weight values negatively affect the importance sampling step of our method during training, as batches can become saturated with the highest-weighted training examples. In contrast, a bigger emphasis on the entropy loss (\(\gamma=0.6\)) results in the weight distribution being closer to uniform, but does not maximise diversity. The hyperparameter \(\gamma\) provides control over the trade-off between diversity and _typicality_, i.e. the extent to which a generated artefact is a typical training example (Ritchie, 2007). The VS of the digit datasets, measured without and with diversity weights at different loss term balances, are presented in table 1.
The resulting diversity weight for each of the 6,000 samples corresponds to their individual contributions to the overall diversity of the dataset. We give an overview of the highest and lowest weighted data samples in fig. 2. Low-weighted samples are typical examples of the MNIST dataset: e.g. round zeros and simple straight ones, all of similar line width. High-weighted samples show a much greater diversity: thin and thick lines, imperfect circles as zeros, ones with nose and foot line.
**Training.** For each digit dataset, we compare two training schemes: 1) a baseline model with the standard training scheme, and 2) three models trained with our diversity weights (DivW) method and different loss term balances (\(\gamma\)), where training examples are drawn according to the corresponding diversity weights. The compared loss term balances are \(\gamma=0.6\), \(\gamma=0.8\), and \(\gamma=1.0\). All models have identical architectures (Wasserstein GAN with gradient penalty; Gulrajani et al., 2017) and hyperparameters and are optimised for 6,000 steps (see appendix for details).
To allow our method to develop its full potential, we increase the batch size to 6,000 samples, the size of the dataset. Training examples are drawn according to diversity weights _with_ replacement, i.e. the same example can be included in a batch more than once. Small batches in turn would be dominated by the highest-weighted examples, causing overfitting and ultimately mode collapse.
**Evaluation.** We evaluate individual models on six measures: Vendi Score (VS) to quantify output diversity; Inception Score (IS), Frechet Inception Distance (FID) and weighted FID (wFID), as well as Precision-Recall (PR) to estimate sample fidelity and mode coverage. From each model we obtain 6,000 random samples, the same amount as a digit dataset. As described above, for all measures, except IS, we use CLIP as the image embedding model to compute image features. For VS, we obtain pairwise similarities between images by calculating the Gram matrix of features. Our proposed wFID measure accounts for the different target distribution induced by the diversity weights.
These findings support a shift away from the predominant mode coverage paradigm.
Ongoing debates have not yet resulted in a uniformly accepted way of dealing with data bias in generative machine learning more generally, and CC specifically. One way to address data bias is to gather more or better data. But this is not always possible or practical, since collecting, curating and pre-processing new data is notoriously laborious, costly, or subject to limited access. Another way is to instead adjust the methodology of learning from data, such that a known data bias is mitigated. In this work, we focus on the latter and propose the _diversity-weighted sampling_ scheme to address the imbalance of representation between majority and minority features in a dataset.
Diversity weights address the specific bias of _data imbalance_, particularly in unsupervised learning. In contrast to supervised settings, where class labels provide a clear categorisation of training examples, here common features are often shared between various types of examples. This makes it difficult to find an appropriate balance of training examples. Diversity weights give an indication of which type of examples are under-represented from a diversity-maximisation perspective. We draw a connection to issues of DEI as data biases often negatively affect under-represented groups (Bolukbasi et al., 2016; Zhao et al., 2017; Hendricks et al., 2018; Stock and Cisse, 2018).
Combining image generation models with multi-modal embedding models, like CLIP, enables complex text-to-image generative systems which can be doubly affected by data bias through the use of two data-driven models: the image generator and the image-text embedding. The discussion on embedding models, and other methods that can guide the search for artefacts, is beyond the scope of this paper. Our work focuses on the image generators powering these technologies. Yet, a conscious shift to _mode balancing_, in particular for the training of the underlying generative model, could support the mitigation of bias in text-to-image generation models, complementing existing efforts in prompt engineering after training (Colton, 2022).
Figure 3: Performance comparison of our method (DivW) with different loss term balances (\(\gamma\)) against a standard GAN, trained on three digit pair datasets (blue circles: 0-1, green crosses: 3-8, red diamonds: 4-9) with six measures: VS, PR and IS (higher is better), as well as standard FID and weighted FID scores (lower is better). Means and 95 % confidence intervals over five random seeds. Individual datapoints show means over five random sampling repetitions. The hyperparameter \(\gamma\) provides control over the trade-off between diversity and typicality.
It is worth noting that our method also introduces bias, in that it deliberately emphasises under-represented features in the dataset. We do this explicitly and for a specific purpose. Other applications might differ in their perspective and objectives and may consider other biases more or less important, or none at all. As we mentioned above, since a dataset cannot maintain its status of 'ground truth', the responsibility of reviewing and potentially mitigating data biases falls onto researchers and practitioners. We hope our work proves helpful in this task.
## Related Work
Previous work primarily focuses on samples from minority groups and related data biases. Objectives range from mitigating such biases to improving minority coverage, i.e. achieving better image fidelity for underrepresented data examples. Some approaches employ importance weighting where weights are derived from density ratios, either via an approximation based on the discriminator's prediction (Lee et al., 2021) or via an additional probabilistic classifier (Grover et al., 2019). Others propose an implicit maximum likelihood estimation framework to improve the overall mode coverage (Yu et al., 2020). These methods either depend on additional adversarially trained models or on more specific hybrid models. Our approach, instead, has two major benefits over previous work. First, it is model-agnostic and thus potentially applicable to a wide range of network architectures and training schemes. Second, it only adds an offline pre-computation step prior to conventional training procedures and during training solely intervenes at the data sampling stage.
Authors of previous work further argue for increased diversity, but do not evaluate on explicit measures of diversity. Results are reported on the standard metrics IS, as well as FID and PR which rely on the training dataset for reference. Consequently, they can only estimate sample fidelity and mode coverage as present in the data. We, instead, evaluate on measures designed to objectively quantify diversity.
Most importantly, while we argue for an adaptation of modelling techniques to allow for _mode balancing_ to achieve higher output diversity, all related works operate under the _mode coverage_ paradigm. In fact, Lee et al. (2021) include a discriminator rejection sampling step (Azadi et al., 2018) after training to undo the bias introduced by their importance sampling scheme.
Figure 4: Random samples for all digit pairs (top row: 0-1, middle: 3-8, bottom: 4-9) from the standard models (left column) and our DivW models with different loss balances (\(\gamma\)). The hyperparameter \(\gamma\) provides control over the trade-off between diversity and typicality.
## Conclusions
We introduced a method to derive a weight vector over the examples in a training dataset whose entries indicate each example's individual contribution to the dataset's overall diversity. _Diversity weights_ make it possible to train a generative model with importance sampling such that the model's output diversity increases.
Our work is motivated by potential benefits for computational creativity applications which aim to produce a wide range of diverse output for further design iterations, ranging from artistic over constrained to scientific creativity. We also highlight a connection to issues of data bias in generative machine learning, in particular data imbalances and the under-representation of minority features. The impracticality of easily mitigating data imbalances in an unsupervised setting further motivates our work.
In a proof-of-concept study, we demonstrated that our method increases model output diversity when compared to a standard GAN. The results highlight a trade-off between artefact typicality, i.e. the extent to which an artefact is a typical training example, and diversity. Our method provides control over this trade-off via a loss balance hyperparameter.
## Future Work
We plan to build on the present work in several ways. First, by refining our method, in particular the training procedure, to improve overall sample fidelity. For this, a thorough analysis and systematic comparison to related work is needed. The loss balance hyperparameter could further be tuned automatically by including it as a learnable parameter in the optimisation procedure. Apart from our gradient descent approach, there might be alternative exact or approximate methods for the diversity weight optimisation, e.g. constraint optimisation or analytical solutions.
Second, we plan to extend experimentation to other generative models and on bigger and more complex datasets to demonstrate the scalability of our approach. Since our method is architecture-agnostic, there remain many opportunities for future work to understand the effect and potential benefits of our method in other modelling techniques. As GAN training is notoriously unstable and requires careful tuning, other modelling techniques might prove more appropriate. Results on datasets representing humans are needed to demonstrate the capability of our method to mitigate issues of DEI resulting from data imbalances.
Moreover, empirical studies will be necessary to investigate how the shift from mode coverage to mode balancing can support diversity in a large range of CC applications.
## Acknowledgements
Sebastian Berns is funded by the EPSRC Centre for Doctoral Training in Intelligent Games & Games Intelligence (IGGI) [EP/S022325/1]. Experiments were performed on the Queen Mary University of London Apocrita HPC facility, supported by QMUL Research-IT [King, Butcher, and Zalewski, 2017].
## Appendix
The tables below outline the experiments' training hyperparameters and network architectures, which do not include any pooling, batchnorm or dropout layers. He initialisation (Kaiming uniform) is used for convolutional layers (conventional and upsampling) and Glorot initialisation (Xavier uniform) for fully connected (FC) layers. |
2307.08960 | Comparison of HRV Indices of ECG and BCG Signals | Electrocardiography (ECG) plays a significant role in diagnosing
heart-related issues, it provides, accurate, fast, and dependable insights into
crucial parameters like QRS complex duration, the R-R interval, and the
occurrence, amplitude, and duration of P, R, and T waves. However, utilizing
ECG for prolonged monitoring poses challenges as it necessitates connecting
multiple electrodes to the patient's body. This can be discomforting and
disruptive, hampering the attainment of uninterrupted recordings.
Ballistocardiography (BCG) emerges as a promising substitute for ECG,
presenting a non-invasive technique for recording the heart's mechanical
activity. BCG signals can be captured using sensors positioned beneath the bed,
thereby providing enhanced comfort and convenience for long-term monitoring of
the subject. In a recent study, researchers compared the heart rate variability
(HRV) indices derived from simultaneously acquired ECG and BCG signals.
Encouragingly, the BCG signal yielded satisfactory results similar to those
obtained from ECG, implying that BCG holds potential as a viable alternative
for prolonged monitoring. The findings of this study carry substantial
implications for the advancement of innovative, non-invasive methods in
monitoring heart health. BCG showcases the ability to offer a more comfortable
and convenient alternative to ECG while retaining its capacity to deliver
accurate and reliable cardiac information concerning a patient's condition. | Kavya Remesh, Job Chunkath | 2023-07-18T04:05:55Z | http://arxiv.org/abs/2307.08960v2 | # Comparison of HRV Indices of ECG and BCG Signals
###### Abstract
Electrocardiography (ECG) plays a significant role in diagnosing heart-related issues, it provides, accurate, fast, and dependable insights into crucial parameters like QRS complex duration, the R-R interval, and the occurrence, amplitude, and duration of P, R, and T waves. However, utilizing ECG for prolonged monitoring poses challenges as it necessitates connecting multiple electrodes to the patient's body. This can be discomforting and disruptive, hampering the attainment of uninterrupted recordings. Ballistocardiography (BCG) emerges as a promising substitute for ECG, presenting a non-invasive technique for recording the heart's mechanical activity. BCG signals can be captured using sensors positioned beneath the bed, thereby providing enhanced comfort and convenience for long-term monitoring of the subject. In a recent study, researchers compared the heart rate variability (HRV) indices derived from simultaneously acquired ECG and BCG signals. Encouragingly, the BCG signal yielded satisfactory results similar to those obtained from ECG, implying that BCG holds potential as a viable alternative for prolonged monitoring. The findings of this study carry substantial implications for the advancement of innovative, non-invasive methods in monitoring heart health. BCG showcases the ability to offer a more comfortable and convenient alternative to ECG while retaining its capacity to deliver accurate and reliable cardiac information concerning a patient's condition.
Ballistocardiography (BCG), Electrocardiography (ECG), Heart Rate Detection, HRV Indices, Heart Rate Variability (HRV).
## I Introduction
Electrocardiography (ECG) is a well-established technique for diagnosing cardiac function, involving the transcutaneous interpretation of the heart's electrical activity through electrodes placed on the skin and recorded externally [1,2]. A standard ECG waveform consists of five waves, namely P, Q, R, S, and T [3].
Ballistocardiography (BCG) emerges as an unobtrusive alternative to ECG, capturing body movements resulting from shifts in the body's center of mass caused by arterial blood displacement and cardiac contraction [4]. In short, BCG generates a graphical representation of repetitive bodily motion resulting from the forceful ejection of blood into the major vessels with each heartbeat. This vital sign consists of eight fiducial points (G, H, I, J, K, L, M, and N), falling within the frequency range of 1-20 Hz [5,6,7].
Recent advancements in BCG instrumentation have bolstered its viability as an ECG alternative. Cost-effective and compact measurement tools, such as piezoelectric sensors (EMFi), static-charge-sensitive beds, force plates, and modified commercial weighing scales, have been developed for BCG [5, 8]. The primary advantage of BCG over ECG lies in the absence of electrodes, textiles, or similar devices that need to be affixed to the patient's body [9]. Consequently, ballistocardiography systems are well-suited for continuous monitoring of cardiopulmonary activity during sleep and over extended periods [6]. BCG offers superior patient safety and comfort while providing more comprehensive information about the heart and cardiovascular diseases due to its larger number of fiducial points [7].
Heart rate variability (HRV) represents a significant non-invasive tool for assessing cardiac autonomic activity, typically derived from beat-to-beat (RR) interval series extracted from electrocardiograms (ECGs). Accurate HRV analysis relies on high-quality ECG/BCG recordings [10].
This study primarily focuses on comparing HRV indices (time domain: SDNN, RMSSD, pNN50; frequency domain: VLF, LF, and HF) between electrocardiography and ballistocardiography in a group of 20 subjects aged between 20 and 40, encompassing both male and female participants with varying body mass index (BMI) categories, including underweight, normal, and obese individuals.
## II Data Acquisition System
The acquisition of electrocardiogram (ECG) and ballistocardiogram (BCG) signals was performed employing the subsequent techniques:
### Electrocardiogram
For ECG signal acquisition, a 3-lead ECG acquisition system [10] was utilized. This system encompassed a bio-amplifier which pre-amplified the signal to increase signal strength above noise level.
### Ballistocardiogram
Two piezoelectric sensors were connected serially to obtain the BCG signal. These sensors were mounted on a compact single-sided circuit board. The data acquisition (DAQ) card [11, 12, 13] from National Instruments 9201 was employed for signal acquisition. The NI 9201 has a 12-bit resolution and features 8 analog inputs. Additionally, it supports Wi-Fi connectivity. In order to facilitate wireless communication with a laptop, the wireless NI CDAQ 9191 was connected to the DAQ. The establishment of wireless connections between the DAQ and the laptop was achieved using NI Measurement Automation Explorer (MAX).
## III Preprocessing
Subsequently, the acquired ECG and BCG signals were preprocessed using MATLAB. The preprocessing steps involved amplification and denoising.
**ECG:** The signal from the amplifier was passed through a bandpass filter with cut-off frequencies set at 5 Hz and 20 Hz [14]. This step effectively eliminated noise and baseline wandering.
**BCG:** The BCG signal obtained had an amplitude ranging from 30-70 \(\mathrm{mV}\), which was subsequently amplified utilizing an instrumentation amplifier with a gain of 10 [15]. The amplified signal was then subjected to band-pass filtering to acquire a signal within the range of 0.1-30 Hz. This filtering step also eliminated baseline wandering.
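As a rough illustration of these preprocessing steps, a zero-phase Butterworth band-pass filter can be applied in Python with SciPy as sketched below. The sampling rate `fs`, the filter order, and the raw-signal arrays `ecg_raw`/`bcg_raw` are assumptions made for the example, not values from the study (which used MATLAB).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, low_hz, high_hz, fs, order=4):
    # Zero-phase Butterworth band-pass; filtfilt avoids phase distortion
    # that would shift the R/J peak locations.
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=fs)
    return filtfilt(b, a, signal)

fs = 500                                              # assumed sampling rate in Hz
ecg_filtered = bandpass(ecg_raw, 5, 20, fs)           # ECG pass-band used above
bcg_filtered = bandpass(bcg_raw * 10, 0.1, 30, fs)    # gain-10 amplification, then 0.1-30 Hz
```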
## IV Heart Rate Calculation
The heart rate (HR) was calculated using the ECG and BCG signals, employing the following methods:
ECG: The QRS complexes present in the ECG signal were detected through the utilization of the Pan-Tompkins algorithm [14]. The locations of the peaks and their corresponding amplitudes were subsequently determined. Instantaneous RR intervals as well as the mean RR interval were computed. Finally, the HR was determined based on the mean RR interval.
BCG: The peaks in the BCG signal were detected utilizing the BCG Heart Beat Detection Method [15]. The locations of the peaks and their corresponding amplitudes were determined accordingly. Instantaneous JJ intervals and the mean JJ interval were computed. The HR was then calculated based on the mean JJ interval.
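A simplified stand-in for these detection steps, using SciPy's generic peak finder rather than the Pan-Tompkins or BCG heartbeat detection algorithms cited above, is sketched below; the spacing and prominence thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def mean_heart_rate(filtered, fs, max_rate_bpm=180):
    # Enforce a minimum spacing of one cardiac cycle at the fastest plausible rate.
    min_distance = int(fs * 60 / max_rate_bpm)
    peaks, _ = find_peaks(filtered, distance=min_distance, prominence=filtered.std())
    intervals_s = np.diff(peaks) / fs            # RR (ECG) or JJ (BCG) intervals in seconds
    return 60.0 / intervals_s.mean(), intervals_s

hr_ecg, rr = mean_heart_rate(ecg_filtered, fs)   # from the filtered ECG above
hr_bcg, jj = mean_heart_rate(bcg_filtered, fs)   # from the filtered BCG above
```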
## V Heart Rate Variability Analysis
Heart rate variability (HRV) represents an invaluable non-invasive tool for assessing cardiac autonomic activity [2]. Typically, HRV is computed from beat-to-beat (RR) interval series derived from electrocardiograms (ECGs). Optimal quality ECG/BCG recordings serve as the foundation for accurate HRV analysis [16].
In ECG, the RR intervals indicate the variation between consecutive heartbeats. The analysis of heart rate variability (HRV) measurements revolves around the study of these RR intervals and how they change over time. The MATLAB program was used to determine both time domain and frequency domain indices of HRV. In the time domain, values for SDNN, RMSSD, and pNN50 were calculated. In the frequency domain, VLF, LF, and HF were also determined [17, 18].
In BCG, JJ intervals can be used as an alternative to RR intervals in ECG. The time and frequency domain indices of HRV were evaluated in a similar manner to the ECG analysis.
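Given the beat-to-beat interval series from the previous sketch, the time- and frequency-domain indices can be computed as follows. The 4 Hz resampling rate and the VLF/LF/HF band edges (0.003-0.04, 0.04-0.15, 0.15-0.4 Hz) are the commonly used conventions and are assumed here rather than taken from the study.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def time_domain_hrv(intervals_s):
    nn_ms = intervals_s * 1000.0
    d = np.diff(nn_ms)
    return {
        "SDNN_ms": nn_ms.std(ddof=1),
        "RMSSD_ms": np.sqrt(np.mean(d ** 2)),
        "pNN50_pct": 100.0 * np.mean(np.abs(d) > 50.0),
    }

def frequency_domain_hrv(intervals_s, resample_hz=4.0):
    # Interpolate the unevenly sampled tachogram to a uniform grid, then Welch PSD.
    t = np.cumsum(intervals_s)
    grid = np.arange(t[0], t[-1], 1.0 / resample_hz)
    tach = interp1d(t, intervals_s, kind="cubic")(grid)
    f, pxx = welch(tach - tach.mean(), fs=resample_hz, nperseg=min(256, len(grid)))
    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return np.trapz(pxx[m], f[m])
    return {"VLF": band_power(0.003, 0.04),
            "LF": band_power(0.04, 0.15),
            "HF": band_power(0.15, 0.40)}

print(time_domain_hrv(rr), frequency_domain_hrv(rr))   # ECG (RR) series
print(time_domain_hrv(jj), frequency_domain_hrv(jj))   # BCG (JJ) series
```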
## VI Results
### Electrocardiography
1. ECG Signal Preprocessing: The acquired ECG signal underwent a series of preprocessing steps, including bandpass filtering and averaging using a moving window integrator. The derivative of the integrated signal was then computed, followed by squaring the resultant signal. The outcomes of these preprocessing steps are depicted in Figure 1.
2. QRS Complex Detection: QRS complexes were detected utilizing the bandpass-filtered and integrated signal. This process is illustrated in Figure 2.
3. Heart Rate: The variations in heart rate are displayed in Figure 3. The time domain HRV indices are as follows,
Figure 3: Heart Rate variation
Figure 2: Detection of QRS Complexes
Figure 4: RR Intervals
Figure 5: Power Spectral Density of the BCG and ECG Signals
### Ballistocardiography
1. BCG Signal Preprocessing: Similar to ECG, the acquired BCG signal was preprocessed through bandpass filtering and averaging using a moving window integrator. The derivative of the integrated signal was computed, and subsequently, the square of the derivative was taken. The outcomes of these preprocessing steps are presented in Figure 6.
2. **J Peak Detection:** The J peaks were detected using the bandpass-filtered and integrated BCG signal, as illustrated in Figure 7.
3. **Heart Rate:** The variations in heart rate are shown in Figure 7. The Time Domain HRV Indices are as follows: Mean Heart Rate (in bpm): 74.673 Standard Deviation of all normal JJ Intervals (SDNN in ms): 132.568 Root Mean Square Successive Differences (RMSSD in ms): 40.19 Percentage of differences between successive NN intervals that are greater than 50ms (percentage units): 9.756
4. **JJ Intervals:** The JJ intervals are depicted in Figure 7.
5. **Frequency Domain HRV Indices:** The Power Spectral Density of the BCG signal is displayed in Figure 5.
The close agreement between the ECG- and BCG-derived HRV indices extends to the frequency domain as well. This finding was consistent across all 20 subjects, leading us to conclude that long-term monitoring with BCG is indeed a feasible alternative.
## 7 Conclusion
The goal of this paper was to compare the Heart Rate Variability (HRV) indices of the Electrocardiogram (ECG) and the Ballistocardiogram (BCG). After observing the HRV indices of ECG and BCG in 20 subjects, we concluded that there is a high correlation between the HRV (time domain and frequency domain) indices of ECG and BCG. We also found that BCG can be more comfortable for the subject than ECG during data acquisition. BCG uses only sensors that can be placed under the bed for data collection, whereas ECG requires electrodes attached to the subject's body, which can restrict the subject's freedom of movement in long-term monitoring. Since BCG acquisition gives the subject more freedom of movement and since the HRV indices derived from ECG and BCG agree closely, long-term monitoring with BCG is indeed a feasible alternative that delivers good performance.
|
2308.09223 | DMCVR: Morphology-Guided Diffusion Model for 3D Cardiac Volume
Reconstruction | Accurate 3D cardiac reconstruction from cine magnetic resonance imaging
(cMRI) is crucial for improved cardiovascular disease diagnosis and
understanding of the heart's motion. However, current cardiac MRI-based
reconstruction technology used in clinical settings is 2D with limited
through-plane resolution, resulting in low-quality reconstructed cardiac
volumes. To better reconstruct 3D cardiac volumes from sparse 2D image stacks,
we propose a morphology-guided diffusion model for 3D cardiac volume
reconstruction, DMCVR, that synthesizes high-resolution 2D images and
corresponding 3D reconstructed volumes. Our method outperforms previous
approaches by conditioning the cardiac morphology on the generative model,
eliminating the time-consuming iterative optimization process of the latent
code, and improving generation quality. The learned latent spaces provide
global semantics, local cardiac morphology and details of each 2D cMRI slice
with highly interpretable value to reconstruct 3D cardiac shape. Our
experiments show that DMCVR is highly effective in several aspects, such as 2D
generation and 3D reconstruction performance. With DMCVR, we can produce
high-resolution 3D cardiac MRI reconstructions, surpassing current techniques.
Our proposed framework has great potential for improving the accuracy of
cardiac disease diagnosis and treatment planning. Code can be accessed at
https://github.com/hexiaoxiao-cs/DMCVR. | Xiaoxiao He, Chaowei Tan, Ligong Han, Bo Liu, Leon Axel, Kang Li, Dimitris N. Metaxas | 2023-08-18T00:48:30Z | http://arxiv.org/abs/2308.09223v1 | # DMCVR: Morphology-Guided Diffusion Model for 3D Cardiac Volume Reconstruction
###### Abstract
Accurate 3D cardiac reconstruction from cine magnetic resonance imaging (cMRI) is crucial for improved cardiovascular disease diagnosis and understanding of the heart's motion. However, current cardiac MRI-based reconstruction technology used in clinical settings is 2D with limited through-plane resolution, resulting in low-quality reconstructed cardiac volumes. To better reconstruct 3D cardiac volumes from sparse 2D image stacks, we propose a morphology-guided diffusion model for 3D cardiac volume reconstruction, DMCVR, that synthesizes high-resolution 2D images and corresponding 3D reconstructed volumes. Our method outperforms previous approaches by conditioning the cardiac morphology on the generative model, eliminating the time-consuming iterative optimization process of the latent code, and improving generation quality. The learned latent spaces provide global semantics, local cardiac morphology and details of each 2D cMRI slice with highly interpretable value to reconstruct 3D cardiac shape. Our experiments show that DMCVR is highly effective in several aspects, such as 2D generation and 3D reconstruction performance. With DMCVR, we can produce high-resolution 3D cardiac MRI reconstructions, surpassing current techniques. Our proposed framework has great potential for improving the accuracy of cardiac disease diagnosis and treatment planning. Code can be accessed at [https://github.com/hexiaoxiao-cs/DMCVR](https://github.com/hexiaoxiao-cs/DMCVR).
Keywords:Diffusion model 3D Reconstruction Generative model
## 1 Introduction
Medical imaging technology has revolutionized the field of cardiac disease diagnosis, enabling the assessment of both cardiac anatomical structures and motion, including the creation of 3D models of the heart [5]. Cardiac cine magnetic resonance imaging (cMRI) [16, 20] is widely used in clinical diagnosis [14], allowing for non-invasive visualization of the heart in motion with detailed information
on cardiac function and anatomy [17]. While cMRI has great potential in helping doctors understand and analyze cardiac function [9, 15], the imaging technique has certain drawbacks including low through-plane resolution to accommodate for the limited scanning time, as visualized in Fig. 1. Recently, researchers have approached the problem of cardiac volume reconstruction with learning-based generative models [2]. However, most of the methods suffer from low generation quality, missing key cardiac structures and long generation times. This paper focuses on improving the cardiac model generation quality, while reducing the generation time, aiming to better reconstruct the missing structure of the cardiac model from low through-plane resolution cMRI.
Conventional 3D cardiac modeling [12] consists of 2D cardiac image segmentation followed by 3D cardiac volume reconstruction. Recent advances in deep learning methods have shown great success in medical image segmentation [4, 6, 11, 23]. After obtaining 2D labels, the neighboring labels are stacked to reconstruct the 3D model. Nevertheless, due to the low inter-slice spatial cMRI resolution, a significant amount of structural information is lost in the resulting
Figure 1: (a) demonstrates the limitations of cardiac cMRI. The white line in the short axis (SAX) image is the location of 2 chamber (2ch) long axis (LAX) image slice and vice versa. The grey images indicate the missing slices which are not captured during the MRI scan. (b) is an overview of our DMCVR architecture. The SAX images \(x_{0}\) are first encoded to global semantic \(\ell_{sem}\), regional morphology \(\ell_{mor}\) and stochastic latent codes \(x_{T}\), followed by interpolation in their respective latent space. The reconstructed images are sampled from a forward denoising diffusion implicit model (DDIM) process conditioned on the three latent codes. Finally, the 3D cardiac model is reconstructed via stacking the labels. The red, green, and blue regions represent the left ventricle cavity (LVC), left ventricle myocardium (LVM), and right ventricle cavity (RVC), respectively.
3D volume. Thus, interpolation between cMRI slices is necessary. Traditional intensity-based interpolation methods often yield blurring effects and unrealistic results. Conventional deformable model-based methods [13] do not need consistency of the corresponding cardiac structures across images, but require image-based structure segmentation, which is nontrivial and hinders their ability to generalize. To overcome these limitations, an end-to-end pipeline based on generative adversarial networks (GANs), DeepRecon, was recently proposed in [2] that utilizes the latent space to interpolate the missing information between adjacent 2D slices. The generative network is first trained and a semantic image embedding in the \(\mathcal{W}^{+}\) space [1] is computed. Evidently, the acquired semantic latent code is not optimal and needs iterative optimization with segmentation information to improve image quality. However, even with the optimization step, the generated images still miss details in the cardiac region, which indicates that the \(\mathcal{W}^{+}\) space DeepRecon found does not represent the heart accurately.
In order to eliminate the step of optimizing the latent code and improve the image generation quality, we propose a morphology-guided diffusion-based 3D cardiac volume reconstruction method that improves the axial resolution of 2D cMRIs through global semantic and regional morphology latent code interpolation, as indicated in Fig. 1. Inspired by [19], we utilize the global semantic latent code to encode the image into a high-level, meaningful representation. To improve the cardiac volume reconstruction, our approach needs to focus on the cardiac region. Therefore, we introduce the regional morphology latent code, which represents the shapes and locations of LVC, LVM and RVC and helps generate the cardiac region. The method consists of three parts: an implicit diffusion model, a global semantic encoder and a segmentation network that encodes an image into regional morphology embeddings. The proposed method does not require iteratively fine-tuning the latent codes. Our contributions are: 1) the first diffusion-based method for 3D cardiac volume reconstruction, 2) introducing the local morphology-based latent code for improved conditioning of the image generation process, 3) 8% improvement of left ventricle myocardium (LVM) segmentation accuracy and 35% improvement of the structural similarity index compared to previous methods, and 4) improved efficiency by eliminating the iterative step for optimizing the latent code.
## 2 Methods
Fig. 2 demonstrates the structure of our DMCVR approach that learns the global semantic, regional morphology, and stochastic latent spaces from MR images to yield a broad range of outcomes, including generation of high-quality 2D image and high-resolution 3D reconstructed volume. In this section, we will first describe the architecture of our DMCVR method and then elaborate on the latent space-based 3D volume generation which enables 3D volume reconstruction.
### DMCVR Architecture
Our DMCVR is composed of a global semantic encoder \(E_{sem}\), a regional morphology network (\(E_{mor},D_{mor}\)) and a diffusion-based generator \(G\). The generating process \(G\) is defined as follows: given input \(x_{T},\ell_{sem},\ell_{mor}\), which are the stochastic, global semantic and regional morphology latent codes, we want to reconstruct the image \(x_{0}\) recursively as follows:
\[x_{t-1}=\sqrt{\alpha_{t-1}}f_{\theta}(x_{t},t,\ell_{sem},\ell_{mor})+\sqrt{1- \alpha_{t-1}}\epsilon_{\theta}(x_{t},t,\ell_{sem},\ell_{mor}), \tag{1}\]
where \(\epsilon_{\theta}(x_{t},t,\ell_{sem},\ell_{mor})\) is the noise prediction network and \(f_{\theta}\), which removes the predicted noise from \(x_{t}\) (Tweedie's formula [3]), is defined as:
\[f_{\theta}(x_{t},t,\ell_{sem},\ell_{mor})=\frac{1}{\sqrt{\alpha_{t}}}(x_{t}- \sqrt{1-\alpha_{t}}\epsilon_{\theta}(x_{t},t,\ell_{sem},\ell_{mor})) \tag{2}\]
Here, the term \(\alpha_{t}\) is a function of \(t\) affecting the sampling quality.
The forward diffusion process takes the noise \(x_{T}\) as input and produces \(x_{0}\) the target image. Since the change in \(x_{T}\) will affect the details of the output images, we can treat \(x_{T}\) as the stochastic latent code. Therefore, finding the correct stochastic latent code is crucial for generating image details. Thanks to DDIM proposed by Song _et al._[21], it is possible to get \(x_{T}\) in a deterministic fashion by running the generative process backwards to obtain the stochastic latent code \(x_{T}\) for a given image \(x_{0}\). This process is viewed as a stochastic encoder \(x_{T}=E_{sto}(x_{0},\ell_{sem},\ell_{mor})\), which is conditioned on \(\ell_{sem}\) and \(\ell_{mor}\). This conditioning helps us to remove the iterative optimization step used by previous method. We formulate the inversion process from \(x_{0}\) to \(x_{T}\) as follows:
\[x_{t+1}=\sqrt{\alpha_{t+1}}f_{\theta}(x_{t},t,\ell_{sem},\ell_{mor})+\sqrt{1- \alpha_{t+1}}\epsilon_{\theta}(x_{t},t,\ell_{sem},\ell_{mor}) \tag{3}\]
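To make these updates concrete, the sketch below implements the deterministic DDIM step shared by Eqs. (1)–(3): one function covers both the generative step (\(t\to t-1\)) and the inversion step (\(t\to t+1\)) that acts as the stochastic encoder \(E_{sto}\). The names `eps_model` (the conditional noise-prediction network), `alphas` (the cumulative noise schedule) and the latent-code arguments are placeholders and not taken from the released code.

```python
import torch

def ddim_step(x_t, t, t_next, eps_model, alphas, l_sem, l_mor):
    """Deterministic DDIM update between timesteps t and t_next.

    t_next = t - 1 reproduces the generative step of Eq. (1);
    t_next = t + 1 reproduces the inversion of Eq. (3), i.e. the
    stochastic encoder that recovers x_T from x_0.
    """
    eps = eps_model(x_t, t, l_sem, l_mor)  # conditional noise prediction
    # Tweedie's formula (Eq. 2): estimate of the clean image from x_t.
    x0_hat = (x_t - torch.sqrt(1.0 - alphas[t]) * eps) / torch.sqrt(alphas[t])
    return torch.sqrt(alphas[t_next]) * x0_hat + torch.sqrt(1.0 - alphas[t_next]) * eps
```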
Figure 2: On the left side, we demonstrate the network structure of DMCVR, which consists of a global semantic encoder, a regional morphology encoder/decoder and a conditional DDIM. The right side shows the visualization of the stochastic latent space sampled from a high-dimensional Gaussian distribution \(\mathcal{N}(0,I)\).
Although using the stochastic latent variables we are able to reconstruct the image accurately, the stochastic latent space does not contain interpolatable high-level semantics. Here we utilize a semantic encoder proposed by Preechakul _et al_. [19] to encode the global high-level semantics into a descriptive vector for conditioning the diffusion process, similar to the style vector in StyleGAN [10]. The global semantic encoder utilizes the first half of the UNet, and is trained end-to-end with the conditional diffusion model.
One drawback of the global semantic encoder is that it encodes the general high-level features, but tends to pay little attention to the cardiac region. This is due to the relatively small area of LVC, LVM and RVC in the cMRI slice. However, the generation accuracy of the cardiac region is crucial for the cardiac reconstruction task. For this reason, we introduce the regional morphology encoder \(E_{mor}\) that embeds the image into the latent space containing necessary information to produce the segmentation map of the target cardiac tissues. With this extra morphology information, we are able to guide the generative model to focus on the boundary of the ventricular cavity and myocardium region, which will produce increased image accuracy in the cardiac region and the downstream segmentation task. Here, we do not assume any particular architecture for the segmentation network. However, in our experiments, we utilize the segmentation network MedFormer proposed by Gao _et al_. [4] for its excellent performance.
The training of DMCVR contains the training of the segmentation network and the training of the generative model. We first train the segmentation model with summation of focal loss and dice loss [4]. We utilize the simple loss introduced in [7] for training the conditional diffusion implicit model, where
\[L_{gen}(x)=\mathbb{E}_{t\sim\mathrm{Unif}(1,T),\epsilon\sim\mathcal{N}(0,I)}|| \epsilon_{\theta}(x_{t},t,E_{sem}(x_{0}),E_{mor}(x_{0}))-\epsilon||_{2}^{2}. \tag{4}\]
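For concreteness, a minimal training-step sketch of the loss in Eq. (4) is given below, assuming the same cumulative schedule `alphas` as above, NCHW image batches, and `E_sem`, `E_mor` as the global semantic and regional morphology encoders; these names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def training_loss(x0, eps_model, E_sem, E_mor, alphas, T):
    """Simple denoising loss of Eq. (4) for the morphology-conditioned DDIM."""
    t = torch.randint(1, T, (x0.shape[0],), device=x0.device)   # t ~ Unif(1, T)
    eps = torch.randn_like(x0)                                   # eps ~ N(0, I)
    a_t = alphas[t].view(-1, 1, 1, 1)                            # cumulative schedule
    x_t = torch.sqrt(a_t) * x0 + torch.sqrt(1.0 - a_t) * eps     # noised input
    l_sem, l_mor = E_sem(x0), E_mor(x0)                          # conditioning codes
    return F.mse_loss(eps_model(x_t, t, l_sem, l_mor), eps)
```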
### 3D volume reconstruction and latent-space-based interpolation
Due to various limitations, the gap between consecutive cardiac slices in cMRI is large, which results in an under-sampled 3D model. In order to output a smooth super-resolution cine image volume, we generate the missing slices by using the interpolated global semantic, regional morphology and stochastic latent codes. For global semantic and regional morphology latent code \(\ell\), since it is similar to the idea of latent code in StyleGAN, we utilize the same interpolation strategies as in the original paper between adjacent slices. Assume that \(k<j-i,i<j\),
\[\ell^{i+k}=(1-\frac{k}{j-i})\ell^{i}+\frac{k}{j-i}\ell^{j}. \tag{5}\]
For interpolating the stochastic latent variable, it is important to consider that the distribution of stochastic noise is high-dimensional Gaussian, as shown in Eq. (4). Thus, our stochastic embedding is positioned on a sphere shown in Fig. 2. Using linear interpolation on the stochastic noise deviates from the underlying distribution assumption and causes the diffusion model to generate unrealistic images. Hence, to preserve the Gaussian property of the stochastic
latent space, we interpolate the stochastic latent codes over a unit sphere, which can be written as follows: Let \(k<j-i,i<j\) and \(x_{T}^{i}\cdot x_{T}^{j}=\cos\theta\),
\[x_{T}^{i+k}=\frac{\sin((1-\frac{k}{j-i})\theta)}{\sin(\theta)}x_{T}^{i}+\frac{ \sin(\frac{k}{j-i}\theta)}{\sin(\theta)}x_{T}^{j}. \tag{6}\]
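The two interpolation rules of Eqs. (5) and (6) can be sketched as follows; the dot product is normalized here purely as a numerical safeguard, and the function names are illustrative.

```python
import numpy as np

def lerp_latent(l_i, l_j, k, gap):
    # Linear interpolation of semantic / morphology codes (Eq. 5), 0 < k < gap = j - i.
    w = k / gap
    return (1.0 - w) * l_i + w * l_j

def slerp_noise(xT_i, xT_j, k, gap):
    # Spherical interpolation of stochastic codes (Eq. 6), keeping the
    # interpolant on the Gaussian shell.
    w = k / gap
    cos_theta = np.dot(xT_i.ravel(), xT_j.ravel()) / (
        np.linalg.norm(xT_i) * np.linalg.norm(xT_j))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if np.isclose(np.sin(theta), 0.0):
        return lerp_latent(xT_i, xT_j, k, gap)  # (nearly) parallel codes
    return (np.sin((1.0 - w) * theta) * xT_i + np.sin(w * theta) * xT_j) / np.sin(theta)
```

Interpolating on the sphere keeps the norm of the interpolated noise typical of samples from \(\mathcal{N}(0,I)\), which the diffusion model expects.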
## 3 Experiments
### Experimental Settings
In this study we use data from the publicly available UK Biobank cardiac MRI data [18], which contains SAX and LAX cine CMR images of normal subjects. LVC, LVM and RVC are manually annotated on SAX images at the end-diastolic (ED) and end-systolic (ES) cardiac phases. We use 808 cases containing 484,800 2D SAX MR slices for training and 200 cases containing 120,000 2D images for testing. To evaluate the 3D volume reconstruction performance, we randomly choose 50 testing 2D LAX cases to evaluate the 3D volume reconstruction task. All models are implemented on PyTorch 1.13 and trained with 4\(\times\)RTX8000.
### Evaluation of the 2D slice generation quality
We provide peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) [8] to evaluate the similarity between the generated images and
| Cardiac Region | Method | DICE \(\uparrow\) | VOE \(\downarrow\) | ASD \(\downarrow\) | HD \(\downarrow\) | ASSD \(\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| All labels | Original | _0.943_ | _10.730_ | _0.229_ | _4.056_ | _0.229_ |
| All labels | DeepRecon\({}_{1k}\) | 0.914 | 15.179 | 0.367 | 5.879 | 0.397 |
| All labels | DiffAE | 0.919 | 14.913 | 0.322 | 4.654 | 0.326 |
| All labels | DMCVR | **0.935** | **12.153** | **0.261** | **4.093** | **0.266** |
| LVC | Original | _0.937_ | _11.579_ | _0.221_ | _3.156_ | _0.224_ |
| LVC | DeepRecon\({}_{1k}\) | 0.928 | 12.955 | 0.336 | 4.299 | 0.328 |
| LVC | DiffAE | 0.910 | 16.049 | 0.330 | 3.710 | 0.320 |
| LVC | DMCVR | **0.929** | **12.940** | **0.250** | **3.236** | **0.254** |
| LVM | Original | _0.875_ | _22.082_ | _0.226_ | _3.140_ | _0.237_ |
| LVM | DeepRecon\({}_{1k}\) | 0.796 | 33.382 | 0.390 | 5.730 | 0.389 |
| LVM | DiffAE | 0.825 | 29.333 | 0.351 | 4.032 | 0.338 |
| LVM | DMCVR | **0.865** | **23.636** | **0.282** | **3.519** | **0.267** |
| RVC | Original | _0.898_ | _18.187_ | _0.273_ | _4.458_ | _0.267_ |
| RVC | DeepRecon\({}_{1k}\) | 0.858 | 23.662 | 0.381 | 6.304 | 0.473 |
| RVC | DiffAE | 0.857 | 24.518 | 0.346 | 5.217 | 0.382 |
| RVC | DMCVR | **0.884** | **20.467** | **0.273** | **4.460** | **0.308** |

Table 1: Quantitative comparison among the segmentation results of the original image (Original), DeepRecon with 1k optimization steps (DeepRecon\({}_{1k}\)), Diffusion AutoEncoder [19] (DiffAE) and our DMCVR. Italics denote the original-image results; bold marks the best among the reconstruction methods. We use a pretrained segmentation model on images generated by different methods. All metrics are evaluated against the ground truth based on 3D SAX images.
the original images. In addition to image quality assessment, we consider the segmentation performance on the generated images: a segmentation network trained on the real training data is used as the evaluator to segment the testing images generated by DeepRecon\({}_{1k}\), DiffAE (which only uses the global semantic latent code as the condition for the DDIM model), and our DMCVR method. The segmentation accuracy of the evaluator on the generated images can be viewed as a quantitative metric of how closely the generated data matches the cMRI data. We compare the segmentations obtained from the three methods against the ground truth on the SAX images in Tab. 1. The Dice coefficient (DICE), volumetric overlap error (VOE), average surface distance (ASD), Hausdorff distance (HD) and average symmetric surface distance (ASSD) [22] are reported for comparison.
Our method achieves a PSNR score of **30.504** and SSIM score of **0.982**, which is a significant improvement (35% increase in SSIM) compared to DeepRecon (PSNR: **27.684**, SSIM: **0.724**) with 1k optimization steps. This indicates that our method generates more realistic images compared to DeepRecon. The segmentation results on the original images in Tab. 1 provide an upper bound for other results. DMCVR outperforms all other methods in every metric, with an 8% increase in LVM segmentation compared to DeepRecon\({}_{1k}\). Moreover, by comparing DiffAE and DMCVR, the introduction of the regional morphology latent code drastically improves the generation results due to the extra information on the shape of LVC, LVM, and RVC. Fig. 3 demonstrates the original image and corresponding synthetic images. The white arrow points towards the presence of cardiac papillary muscles. As indicated in the images, DeepRecon\({}_{1k}\) (b) cannot effectively recover the information of the papillary muscles from the latent space. However, both diffusion-based (c,d) methods accurately synthesize the information. Our method (d) generates a cleaner image with fewer artifacts than (c), especially around the LV and RV regions. By comparing the yellow circled area, our method produces images closer to the ground truth compared to
Figure 3: 2D and 3D visualization results of the generated images and segmentation. (a,e) original image, (b,f) DeepRecon\({}_{1k}\), (c,g) DiffAE [19], (d,h) our proposed DMCVR.
DeepRecon\({}_{1k}\). Also, the white circle in Fig. 3 demonstrates the benefits of incorporating regional morphology information. In addition, the generative model used in DeepRecon\({}_{1k}\) needs to be trained for 14 days, with additional time to iteratively optimize the latent code for each slice, whereas our method requires 4.8 days for training. Since DDIM inversion does not require test-time optimization as DeepRecon does, DMCVR also generates images faster than DeepRecon.
### Evaluation of the 3D volume reconstruction quality through latent space interpolation
In this section, we exploit the relationship between SAX and LAX images and leverage the LAX label to evaluate the volume reconstruction quality. In cardiac MRI, long axis (LAX) slices typically comprise 2-chamber (2ch), 3-chamber (3ch), and 4-chamber (4ch) views. To evaluate the performance of different interpolation methods on LAX slices, we conducted the following experiments: 1)
| Method | Average DICE | 2ch DICE | 3ch DICE | 4ch DICE |
| --- | --- | --- | --- | --- |
| Nearest Neighbor | 0.780 (0.111) | 0.787 (0.091) | 0.793 (0.105) | 0.766 (0.128) |
| Linear Interpolation | 0.781 (0.080) | 0.797 (0.051) | 0.773 (0.070) | 0.768 (0.102) |
| DeepRecon\({}_{1k}\) | 0.817 (0.097) | **0.848** (0.056) | 0.802 (0.141) | 0.797 (0.091) |
| DMCVR | **0.836 (0.052)** | 0.841 **(0.042)** | **0.809 (0.069)** | **0.854 (0.043)** |

Table 2: Evaluation of 3D volumetric reconstruction from the DICE score of the intersection on each LAX plane against ground truth based on 2D LAX sampled images: mean (standard deviation). Nearest Neighbor, Image-based Linear Interpolation, DeepRecon\({}_{1k}\) and our DMCVR method are compared.
Figure 4: Visual comparison of 3D volumetric reconstruction from SAX images to LAX. The rows, from top to bottom, show 2ch, 3ch and 4ch images. The columns, from left to right, show: resampled original images using nearest neighbour (NN), resampled original labels using NN, resampled DMCVR images, resampled DMCVR labels and the corresponding LAX images.
Nearest Neighbor resampling of the short-axis (SAX) volume to each LAX view, 2) Image-based Linear Interpolation, 3) DeepRecon\({}_{1k}\), and 4) our DMCVR. Tab. 2 shows the 2D DICE score computed between the annotations of the different LAX views and the intersection of the corresponding LAX plane with the 3D reconstructed volume. Our method outperforms the other methods in three of the four categories and shows less than 1% performance degradation compared to DeepRecon\({}_{1k}\) in the remaining one, while being more stable (lower standard deviation). Fig. 4 presents three examples for each LAX view, showing better reconstructed LAX results compared to the original images.
## 4 Conclusion
Integrated analysis of cMRI holds significant clinical importance in understanding and evaluating cardiac function. We propose a diffusion-model-based volume reconstruction method. Our findings show that, through an interpolatable latent space, we are able to improve the spatial resolution and produce meaningful MR images. In the future, we will consider incorporating LAX slices as part of the generation process to help refine the latent space.
**Acknowledgement** This research has been partially funded by research grants to D. Metaxas through NSF: IUCRC CARTA 1747778, 2235405, 2212301, 1951890, 2003874, and NIH-5R01HL127661.
|
2310.13031 | A Use Case: Reformulating Query Rewriting as a Statistical Machine
Translation Problem | One of the most important challenges for modern search engines is to retrieve
relevant web content based on user queries. In order to achieve this challenge,
search engines have a module to rewrite user queries. That is why modern web
search engines utilize some statistical and neural models used in the natural
language processing domain. Statistical machine translation is a well-known NLP
method among them. The paper proposes a query rewriting pipeline based on a
monolingual machine translation model that learns to rewrite Arabic user search
queries. This paper also describes preprocessing steps to create a mapping
between user queries and web page titles. | Abdullah Can Algan, Emre Yürekli, Aykut Çayır | 2023-10-19T11:37:14Z | http://arxiv.org/abs/2310.13031v1 | # A Use Case: Reformulating Query Rewriting as a Statistical Machine Translation Problem
###### Abstract
One of the most important challenges for modern search engines is to retrieve relevant web content based on user queries. In order to achieve this challenge, search engines have a module to rewrite user queries. That is why modern web search engines utilize some statistical and neural models used in the natural language processing domain. Statistical machine translation is a well-known NLP method among them. The paper proposes a query rewriting pipeline based on a monolingual machine translation model that learns to rewrite Arabic user search queries. This paper also describes preprocessing steps to create a mapping between user queries and web page titles.
search engine, query rewriting, statistical machine translation
## I Introduction
Modern search engines are among the most essential data sources created by world-wide users. From this perspective, a modern search engine can also be considered a huge database in which users can search for and find content. Roughly speaking, a search engine therefore has two important tasks: indexing and retrieving the data or web page content. Data indexing is out of the scope of this paper.
Search engines focus on retrieving relevant content as accurately and as quickly as possible. To do that, most well-known search engines leverage improvements in natural language processing models. The latest developments in machine and deep learning approaches allow users to search for and find relevant web page content in milliseconds. To improve the search experience of users, there are several important components such as query correction, query rewriting and named entity recognition. This paper describes a use case that reformulates query rewriting as a statistical machine translation problem.
Our main contributions in the paper can be summarized as follows:
* To the best of our knowledge, we introduce the first statistical machine translation model to rewrite Arabic user queries using query-title pairs crawled from web search results.
* We present an Arabic-language-specific data pre-processing pipeline that addresses the problems caused by queries being free-text inputs.
The paper is organized into four main sections. Section II presents a literature review of query rewriting methods. Section III describes how we reformulate query rewriting as a machine translation problem. We briefly discuss the advantages of the proposed approach and its limitations in Section IV. Section V concludes the paper.
## II Related Work
Query rewriting is one of the most common techniques to improve search results in databases. Calvenese et al. define query rewriting for database management systems [1]. Similar to database management systems, search engines need to rewrite user queries to retrieve more content aware results as fast as possible. In order to do that, search engines use some learning models that can find a semantic mapping between web documents and user queries [2]. Grbovic et al. propose a content and context aware query rewriting module for search engines based on a novel query embedding algorithm [3].
Wu et al. introduce an interesting example of a query rewriting model based on reinforcement learning [4]. They compare their reinforcement model with a T5QR model, which is based on a seq2seq transformer architecture. Text generation is a common task in the NLP domain. With the breakthrough of the transformer architecture, query rewriting has also become a natural candidate for seq2seq models based on pre-trained transformers. Lin et al. [5] present a generative query rewriting model using the pre-trained T5 architecture. Another method is the few-shot generative approach for query rewriting. Yu et al. [6] develop two methods, based on rules and self-supervised learning, to generate weak-supervision data and fine-tune a GPT-2 model with it. GPT-2 is a transformer-based large language model which can perform zero-shot inference. Large language models can also handle context-independent queries effectively.
However, seq2seq models and large language models have latency issues while performing inference on a search engine. Latency is one of the main challenges while working on search engines. Statistical models have the advantage over neural models considering latency.
Mandal et al. [7] use synonyms for query rewriting. They take user queries and generate synonym candidates for the words in the query. They use a classifier to choose a synonym
of the word from the candidates. Riezler and Liu [8] have used the user query and document space to train an SMT model. Jun-Wei et al. [9] have used user queries and query-related document keywords to train an SMT model. Riezler and Liu [8] and Jun-Wei et al. [9] have used similar methods for generating rewritten queries with different word alignment techniques. In this work, the SMT model is trained on user queries and document titles, which constitutes yet another word alignment approach.
## III Methodology
### _Problem Reformulation_
Generally speaking, a machine translation task involves two languages: a source language and a target language. Let \(M\) denote the machine translation model, \(S\) the source language and \(T\) the target language in a classical machine translation application. A machine translation model learns a mapping between a sentence \(s_{i}\) in \(S\) and a sentence \(t_{i}\) in \(T\), as shown in equation (1). In this setting, \(S\) and \(T\) are different natural languages.
\[s\in S \tag{1}\] \[t\in T\] \[M(s)\to t\]
\[q\in Q \tag{2}\] \[\tau\in\mathcal{T}\] \[M(q)\rightarrow\tau\]
In order to develop a rewriting model based on machine translation settings, equation (1) can be reformulated as a monolingual setting for source and target sentences, similarly to [8, 10]. In setting (2), \(Q\) and \(\mathcal{T}\) belong to the same language. In this way, model \(M\) learns how to rewrite a sentence or query \(q\) as a sentence \(\tau\).
### _Statistical Machine Translation (SMT)_
Statistical Machine Translation treats translation as a machine learning problem. The process relies on learning automatic translation from a parallel corpus. SMT is one of the key applications in the field of NLP. Core ideas of automatic machine translation go back decades. It has been studied widely and interacts with many different areas such as statistics, linguistics, computer science, and optimization techniques. We will specify our system architecture and design choices but discuss them at a high level. A good historical overview of the SMT approach is given in several survey papers [11, 12].
As described in problem reformulation (Section III-A), SMT models are used to translate from a source language to a target language. Traditionally, the source and the target languages are different. However, in our system, we develop a monolingual SMT model. Considering our goal is being a query rewriting ability, we do not need two different languages. Therefore, both the source and target corpus are in Arabic.
We employ an open-source toolkit called Moses [13] for Statistical Machine Translation. As the authors claimed, it is a complete toolkit for handling various translation tasks. From preprocessing the corpus to training translation systems, the tools included in Moses are capable enough. Important steps for SMT training are described as follows:
**Data preparation:** This step has two main objectives. One of them is to clean the parallel corpus by removing duplicate spaces and empty lines. Too short or too long sentences are also deleted. Since parallel corpus contains two different files, removing a sentence requires removing the corresponding line as well. The other objective is to lower the data sparsity which is done by lowercasing and tokenization.
**Word Alignment:** MGIZA [14] is used for the word alignment between the source and the target languages. MGIZA is the multi-threaded implementation of GIZA++ [15].
**Language modeling:** Language models are used for predicting the probability of a word based on the previous context. We trained a trigram language model and the standard probability formula of a sentence is shown in equation 3.
\[P(W)=P(w_{1},w_{2},w_{3}...w_{N})\approx\prod_{i=1}^{N}p(w_{i}|w_{i-2}w_{i-1}) \tag{3}\]
Language model implementation is based on KenLM [16].
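As a toy illustration of the trigram factorization in Eq. (3), the sketch below estimates sentence probabilities from maximum-likelihood counts. The actual system relies on KenLM, which additionally applies smoothing and back-off; this simplified version simply returns zero probability for unseen contexts, and all names are illustrative.

```python
from collections import Counter

def train_trigram_counts(corpus):
    # corpus: list of tokenized sentences (lists of words).
    tri, bi = Counter(), Counter()
    for sent in corpus:
        padded = ["<s>", "<s>"] + sent + ["</s>"]
        for a, b, c in zip(padded, padded[1:], padded[2:]):
            tri[(a, b, c)] += 1
            bi[(a, b)] += 1
    return tri, bi

def sentence_probability(sent, tri, bi):
    # P(W) ~ prod_i p(w_i | w_{i-2}, w_{i-1}), maximum-likelihood estimate (Eq. 3).
    padded = ["<s>", "<s>"] + sent + ["</s>"]
    p = 1.0
    for a, b, c in zip(padded, padded[1:], padded[2:]):
        if bi[(a, b)] == 0:
            return 0.0  # unseen context; KenLM would back off instead
        p *= tri[(a, b, c)] / bi[(a, b)]
    return p
```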
**Tuning:** After completing the training phase, the model is tuned using Minimum Error Rate Tuning (MERT) [17] method. 100K query-title pairs are reserved for the tuning phase.
**Pruning the phrase table:** Our translation model is intended to be used in the real-time search pipeline. Therefore, our biggest constraint is latency. We have pruned the phrase
Fig. 1: Preprocessing diagram
table to lower the inference time. The phrase table contains Arabic phrase pairs and their co-occurrence frequencies. We have removed phrase pairs whose frequency is less than or equal to 3. In addition to performance gain, pruning is beneficial to increase phrase table quality because phrase tables have a considerable amount of noise due to alignment errors and noisy training data. Therefore, pruning allows us to eliminate these noisy phrase pairs.
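Schematically, the frequency-based pruning step can be expressed as below, assuming the phrase pairs and their co-occurrence counts have already been loaded into a dictionary; the on-disk Moses phrase-table format is not reproduced here.

```python
def prune_phrase_table(phrase_counts, min_count=3):
    """Drop phrase pairs whose co-occurrence frequency is <= min_count.

    phrase_counts: dict mapping (source_phrase, target_phrase) -> frequency.
    Returns the pruned dictionary, which shrinks the search space and the
    memory footprint at inference time.
    """
    return {pair: c for pair, c in phrase_counts.items() if c > min_count}
```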
**Inference:** After training, the SMT model is tested on unseen data. The model tends to produce the same output as the input for frequent phrases. We have implemented an additional step to increase the output variation. For this purpose, we do not rely on a single prediction from the model; instead, we gather the top-5 outputs for each test sentence and use them iteratively. Starting from the first output, we test whether a prediction is exactly the same as the input; if so, we check the next prediction. If all the outputs are the same, we finally use the fifth-best prediction.
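The candidate-selection logic described above can be sketched as follows, where `candidates` holds the top-5 SMT outputs ordered from best to worst; the function name is illustrative.

```python
def select_rewrite(query, candidates):
    """Pick the first of the top-5 SMT outputs that differs from the input query.

    If every candidate equals the input, the fifth-best (last) one is returned.
    """
    normalized_query = query.strip()
    for cand in candidates:
        if cand.strip() != normalized_query:
            return cand
    return candidates[-1]
```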
### _Dataset_
From search engines, \(\sim\)28M query-title pairs have been collected for the Arabic language. After the preprocessing steps were applied to this raw data, 6,350,718 query-title pairs remained.
**Preprocessing steps:** The thresholds used in the following steps were determined by native Arabic annotators by analyzing random sub-samples of query-title pairs.
Punctuation, URLs and website names have been replaced with white space in queries and titles wherever they occur, since these elements do not contribute to this work. Titles, which come from search results, may include website names. Because user queries and rewritten queries affect the search results, website names have also been replaced with white space in the titles, to avoid boosting any particular website and to protect the user's intention.
We have observed an improvement in the end results. Nevertheless, our system design has some limitations. A purely statistical approach depends on the training corpus, meaning that larger corpora might enable better results. However, it is inevitable that the data in the production system will at some point differ from the training data. There are several reasons for this difference. Firstly, the language itself is constantly evolving and new terms are introduced in daily life. Also, foreign words are being adopted for new concepts. In a nutshell, the ratio of out-of-vocabulary words should be monitored and system updates must be implemented accordingly.
At the end of this work, native Arabic annotators evaluated the SMT model outputs on the test set as "Good" or "Bad". The test set was generated randomly from the search queries. Good examples can be seen in Table II and bad examples, with their error types, in Table I.
The main annotation criterion is preserving user intention. If the rewritten query changed the user intention or the meaning of the user query, it was annotated as bad. In addition, if the rewritten query was grammatically incorrect, it was also annotated as bad.
In the good cases, the rewritten queries preserve the user intention and are grammatically correct. Further analysis shows that the SMT model tends to rewrite queries to be more general, e.g., by deleting the years in Table II (Example 1), to be more specific, e.g., by adding words in Table II (Examples 3, 4 and 5), or to change the location, as in Table II (Examples 2 and 6). Moreover, the SMT model rewrites queries by replacing words with their synonyms or alternatives.
For the bad cases, the annotation team analyzed the error types, which were categorized into six groups. Changing the query intention or meaning, changing the numbers or the location, and deleting words are sub-groups of the first annotation criterion. The other error groups, adding redundant words or adjectives and normalization problems, are sub-groups of the second annotation criterion.
## V Conclusion and Future Work
In this study, query rewriting is treated as a machine translation task. Traditional translation tasks use bilingual data, aiming at translation between different language pairs. However, we use a monolingual translation model to rewrite user-generated queries. There are many alternative methods to develop a translation system; we have chosen the Statistical Machine Translation approach, considering aspects including the training data size, output quality, and inference speed.
Low inference time is one of our most important requirements. Latency issues are handled with some optimization
techniques. Firstly, trigram language models are converted to binary format, which speeds up the initial loading time significantly. Pruning the phrase table is very beneficial since it reduces the search space. Also, the pruned phrase table requires much less memory for storage.
Although we have observed a significant performance increase, some future work remains. As a data-driven method, SMT could take advantage of more training data. One of our plans is to gather larger monolingual parallel corpora. Also, our dataset is created automatically, so there might be noisy query-title pairs. We plan to eliminate such pairs with automatic evaluation metrics to increase the quality of the training data. These future directions are directly related to our current approach, in the sense that statistical methods are limited by their training data. Since we are trying to rewrite a query while preserving its intention, we need different variations. Encoder-decoder based Neural Machine Translation (NMT) methods might be useful because they make use of neural embeddings.
|
2310.10538 | Efficient Quantum Circuits based on the Quantum Natural Gradient | Efficient preparation of arbitrary entangled quantum states is crucial for
quantum computation. This is particularly important for noisy intermediate
scale quantum simulators relying on variational hybrid quantum-classical
algorithms. To that end, we propose symmetry-conserving modified quantum
approximate optimization algorithm (SCom-QAOA) circuits. The depths of these
circuits depend not only on the desired fidelity to the target state, but also
on the amount of entanglement the state contains. The parameters of the
SCom-QAOA circuits are optimized using the quantum natural gradient method
based on the Fubini-Study metric. The SCom-QAOA circuit transforms an
unentangled state into a ground state of a gapped one-dimensional Hamiltonian
with a circuit-depth that depends not on the system-size, but rather on the
finite correlation length. In contrast, the circuit depth grows proportionally
to the system size for preparing low-lying states of critical one-dimensional
systems. Even in the latter case, SCom-QAOA circuits with depth less than the
system-size were sufficient to generate states with fidelity in excess of 99\%,
which is relevant for near-term applications. The proposed scheme enlarges the
set of the initial states accessible for variational quantum algorithms and
widens the scope of investigation of non-equilibrium phenomena in quantum
simulators. | Ananda Roy, Sameer Erramilli, Robert M. Konik | 2023-10-16T16:08:57Z | http://arxiv.org/abs/2310.10538v1 | # Efficient Quantum Circuits based on the Quantum Natural Gradient
###### Abstract
Efficient preparation of arbitrary entangled quantum states is crucial for quantum computation. This is particularly important for noisy intermediate scale quantum simulators relying on variational hybrid quantum-classical algorithms. To that end, we propose symmetry-conserving modified quantum approximate optimization algorithm (SCom-QAOA) circuits. The depths of these circuits depend not only on the desired fidelity to the target state, but also on the amount of entanglement the state contains. The parameters of the SCom-QAOA circuits are optimized using the quantum natural gradient method based on the Fubini-Study metric. The SCom-QAOA circuit transforms an unentangled state into a ground state of a gapped one-dimensional Hamiltonian with a circuit-depth that depends not on the system-size, but rather on the finite correlation length. In contrast, the circuit depth grows proportionally to the system size for preparing low-lying states of critical one-dimensional systems. Even in the latter case, SCom-QAOA circuits with depth less than the system-size were sufficient to generate states with fidelity in excess of 99%, which is relevant for near-term applications. The proposed scheme enlarges the set of the initial states accessible for variational quantum algorithms and widens the scope of investigation of non-equilibrium phenomena in quantum simulators.
## I Introduction
Quantum computation can be used to investigate strongly interacting quantum many body problems that lie beyond the reach of the classical computing paradigm [1; 2]. In this context, the power of a quantum computer can be attributed to its ability to efficiently store and manipulate highly entangled states [3]. The latter, in contrast to states with low entanglement, cannot be efficiently represented using conventional tensor-network-based approaches [4; 5; 6; 7]. These highly-entangled states of \(n\) qubits can, in principle, be prepared using a quantum circuit of \(\mathcal{O}(2^{n})\) depth [8; 9; 10]. Intuitively, this can be understood as a consequence of separately encoding the information associated with all the possible amplitudes in quantum gates [11]. The exponentially large circuit-depth can be reduced to \(\mathcal{O}(n^{2})\), but at a cost of increasing the circuit-width to \(\mathcal{O}(2^{n})\) qubits [12]. In the noisy intermediate scale quantum era (NISQ), where quantum simulators typically host \(n\sim 10^{2}\) qubits with modest coherence properties, it is desirable to do away with the exponential in both the circuit width and depth. To that end, several approaches have been suggested based on low-rank state-preparation [13] as well as protocols for creation of matrix product states on quantum circuits [14; 15; 16].
An alternative to the aforementioned is a variational approach based on hybrid quantum-classical optimization which lies at the heart of a large number of NISQ-era algorithms such as the variational quantum eigensolver [17; 18] and the quantum approximate optimization algorithm [19; 20; 21]. In this approach, a parameterized quantum circuit (pQC) built out of a pool of unitary rotations is used as an ansatz to create the target quantum state. The parameters of this quantum circuit are subject to an optimization procedure that minimizes a cost function. The latter can be, for instance, the expectation value of a target Hamiltonian or negative overlap to the target state. While the application of the unitary rotation as well as measurement of the relevant cost function are implemented on a quantum simulator, the optimization is performed on a classical computer. The result of the classical optimization is subsequently used as an updated guess for the parameters of the quantum circuit. The process is repeated until the desired accuracy of the state preparation is reached. This variational approach has been successfully used to generate the ground states of a number of quantum Hamiltonians [22; 23].
Naturally, the variational quantum-classical approach relies on: i) the choice of the ansatz quantum circuit and ii) the efficacy of the classical optimization procedure. For a target ground state of Hamiltonian \(H_{T}\), the ansatz quantum circuit can be built of a fixed number (say, \(p\)) of alternating applications of \(\exp(-\mathrm{i}\theta_{j}H_{T})\) and \(\exp(-\mathrm{i}\phi_{j}H_{M})\), where \(H_{M}\) is a mixing Hamiltonian [24; 19; 25]. The \(2p\) parameters \(\{\theta_{j},\phi_{j}\}\) are subject to the classical optimization procedure [26]. The latter can be any standard optimization routine such as a first-order gradient descent [27], a quasi-second order method like BFGS [28] or the quantum natural gradient (QNG) [29; 30]. In contrast to the first two, the QNG method finds an optimization path along the steepest gradient direction, but takes into account the geometry of the manifold of quantum states [31; 32; 33]. The QNG
method has been shown to be more robust than the others for finding the optimal set of parameters for the quantum circuit [23] and is used in the current work.
In this work, we propose a general symmetry-conserving quantum circuit ansatz based on the modification [22] of the well-known quantum approximate optimization algorithm (QAOA) ansatz [19]. The main characteristics of the SCom-QAOA circuits are summarized below. First, these circuits allow efficient preparation of arbitrary quantum states, including non-translation-invariant states despite starting from a translation-invariant initial state. In fact, for translation-invariant states, our quantum circuit ansatz is similar to that in Ref. [22]. Second, the pQC ansatz is chosen while conserving the symmetries of the model, when possible. This restricts the pool of allowed unitary rotations. Third, the depth of the circuit is determined by not only the desired fidelity, but also the amount of entanglement contained in the target state. In this work, we focus exclusively on pure target states and thus, von Neumann entropy of a subsystem serves as a useful quantitative measure of the amount of the entanglement. For a given desired fidelity, ground states of gapped one-dimensional Hamiltonians are generated by a circuit whose depth that depends not on the system-size, but rather on the correlation-length of the system. The bounded correlation length of these systems leads to a bounded entanglement entropy for the ground states of these systems [4], which in turn leads to a bounded circuit depth. The depth for realization of critical one-dimensional system scales with system-size. Fourth, the proposed approach generates eigenstates of lattice Hamiltonians that are integrable as well as non-integrable without any discernible difference in performance, indicating the robustness of this approach. This is shown by creating low-lying eigenstates of non-integrable perturbed Ising and tricritical Ising lattice models. Finally, the proposed scheme can be directly implemented for creation of eigenstates of generic two-dimensional quantum Hamiltonians - a key set of problems where NISQ simulators have the potential to outperform classical computers.
The proposed pQC ansatz is demonstrated by maximizing the square of the overlap to a target state obtained independently using density matrix renormalization group (DMRG) technique on matrix product states. The evolution of the quantum circuit is performed using the time-evolved block-decimation (TEBD) algorithm. As such, the simulations performed use perfect circuits and qubits without accounting for finite gate-errors or lifetimes of the qubits. This can be remedied by using a noisy quantum circuit simulator such as Qiskit [34].
The proposed QNG optimization of the SCom-QAOA circuits is a modified version of the general problem of determining quantum circuit complexity [35; 36; 37] in terms of a geometric control theory [38]. For the general case, the relevant unitary operator that takes the initial state to a given target state can be shown to be approximated by \(\mathcal{O}(n^{6}D^{3})\) single and two-qubit gates [36]. Here, \(D\) is the distance between the identity and the desired unitary operator [35]. In this work, the optimization is performed while first fixing the set of unitary rotations that are used in the pQC and subjecting the angles of these unitary rotations to the optimization routine. Therefore, the optimization occurs on a sub-manifold of the entire space of unitary rotations. A broader optimization in the space of unitary rotations, as proposed in Ref. [36], is likely to yield more versatile quantum circuits. In fact, some progress has already been achieved to efficiently perform this optimization [39].
The article is organized as follows. In Sec. II, the QNG-based optimization scheme is described, including details on the pQC ansatz and the necessary details about the Fubini-Study metric. Sec. III presents results of numerical simulations for the perturbed Ising and tricritical Ising models. Sec. IV provides a concluding summary and outlook.
## II QNG optimization scheme for SCom-QAOA circuits
Here, the details of the ansatz for the quantum circuit and the QNG optimization method are presented. The case of open boundary conditions is described below. The periodic case can be analyzed analogously.
### SCom-QAOA Quantum Circuits
Consider the general problem of preparation of arbitrary eigenstates of target quantum Hamiltonians that can be decomposed as:
\[H_{T}=H_{\text{ons}}+H_{\text{nn}}+H_{\text{nnn}}+\ldots, \tag{1}\]
where ons, nn, nnn denote the terms of the Hamiltonian that involve one site, nearest-neighbors, next-nearest-neighbors and so on. Each of the aforementioned kinds of interaction terms can be divided into groups, labeled by \(s_{\alpha}\), \(\alpha=\text{ons}\), nn, nnn...:
\[H_{\alpha}=\sum_{s_{\alpha}}K^{s_{\alpha}}=\sum_{s_{\alpha}}\sum_{j}\lambda _{j}^{s_{\alpha}}K^{s_{\alpha}}_{j}, \tag{2}\]
where \(K^{s_{\alpha}}_{j}\) is the corresponding operator in the Hamiltonian and \(\lambda_{j}^{s_{\alpha}}\) is the coupling at the \(j^{\text{th}}\)-site. For example, the Hamiltonian for the one-dimensional transverse field Ising model in the presence of a longitudinal field is
\[H^{\text{I}}=-\lambda^{X}\sum_{j=1}^{L}X_{j}-\lambda^{Z}\sum_{j=1}^{L}Z_{j}- \sum_{j=1}^{L-1}X_{j}X_{j+1}, \tag{3}\]
where \(\lambda^{X,Z}\) are strengths of the longitudinal and transverse fields respectively. Then, the decomposition of Eq. (2) yields \(s_{\text{ons}}\in\{X,Z\}\) and \(s_{\text{nn}}\in\{XX\}\). Then,
\[K^{X}_{j}=-X_{j},K^{Z}_{j}=-Z_{j},K^{XX}_{j}=-X_{j}X_{j+1}. \tag{4}\]
While for the Ising case, the grouping described in Eq. (2) is unique, this is not true in general. For instance, consider the XXZ spin-chain Hamiltonian:
\[H^{\rm XXZ}=-\sum_{j=1}^{L-1}(X_{j}X_{j+1}+Y_{j}Y_{j+1}+\cos\gamma Z_{j}Z_{j+1}), \tag{5}\]
where \(\gamma\) is the anisotropy parameter. One possible decomposition would be \(s_{\rm nn}\in\{XX,YY,ZZ\}\) with
\[K_{j}^{AA}=-A_{j}A_{j+1},\ A=X,Y,Z. \tag{6}\]
An equally valid decomposition would be with \(s_{\rm nn}\in\{XX+YY,ZZ\}\) with
\[K_{j}^{XX+YY} = -(X_{j}X_{j+1}+Y_{j}Y_{j+1}),\] \[K_{j}^{ZZ} = -Z_{j}Z_{j+1}. \tag{7}\]
The two groupings for the XXZ chain lead to two different SCom-QAOA circuits that conserve different symmetries of the XXZ model. This is because the operators \(K_{j}^{s_{\alpha}}\)-s will be used to generate the unitary rotations of the different layers of the SCom-QAOA circuit. The relevant unitary operators and the corresponding symmetries are described next.
The pQC ansatz that performs the desired unitary rotation from the initial state to the target state is given by
\[U=\prod_{l=1}^{N}U^{l}, \tag{8}\]
where \(N\) the total number of layers, \(U^{l}\) is the unitary rotation at the \(l^{\rm th}\) layer and it is understood that \(l=1\) acts first. The unitary \(U^{l}\) is decomposed further into unitary rotations of sublayers as:
\[U^{l}=\ldots U^{l}_{\rm nn}U^{l}_{\rm on}U^{l}_{\rm nn}, \tag{9}\]
where the dots indicate unitary operators generated by Hamiltonian terms which are beyond next-nearest neighbor. The unitary operator \(U^{l}_{\alpha}\) for the sublayer \(\alpha\) is defined as:
\[U^{l}_{\alpha}=\exp\bigl{(}-{\rm i}\sum_{s_{\alpha}}\sum_{j}\theta^{s_{\alpha }}_{l,j}K^{s_{\alpha}}_{j}\bigr{)}, \tag{10}\]
where \(\alpha=\rm{ons}\), \(\rm{nn}\), \(\rm{nnn}\),...and \(s_{\alpha},K^{s_{\alpha}}_{j}\)-s have been defined in Eq. (2). The angles \(\theta^{s_{\alpha}}_{l,j}\) are the parameters that are subject to the QNG-optimization process. Note that although the angles \(\{\theta^{s_{\alpha}}_{l,j}\}\) change from layer to layer, the operators \(K^{s_{\alpha}}_{j}\) generating the unitary rotation remain the same [40]. The operator \(U^{l}_{\alpha}\) is decomposed into product over unitary operators corresponding to the different groupings:
\[\tilde{U}^{l}_{\alpha}=\prod_{s_{\alpha}}\exp\bigl{(}-{\rm i}\sum_{j}\theta^{ s_{\alpha}}_{l,j}K^{s_{\alpha}}_{j}\bigr{)}. \tag{11}\]
Clearly, there are different possible \(\tilde{U}^{l}_{\alpha}\)-s starting with the same \(U^{l}_{\alpha}\) since the different terms in the summation over \(s_{\alpha}\) in Eq. (10) do not necessarily commute with each other. In the models analyzed in this work, the different possible choices led to minor quantitative changes in the results. For instance, in the case of the Ising Hamiltonian [Eq. (3)], the order of the \(R_{X}\) and \(R_{Z}\) unitary operators [Fig. 1(b)] did not make any qualitative difference in the performance of the SCom-QAOA circuit. Finally, a further simplification arises when the terms at different sites within each group commute:
\[\bigl{[}K^{s_{\alpha}}_{j},K^{s_{\beta}}_{k}\bigr{]}=0,\text{if }s_{\alpha}=s_{\beta} \tag{12}\]
for all choices of \(j,k\). In this case, a further simplification occurs and the sublayer unitary operator reduces to:
\[\tilde{U}^{l}_{\alpha}=\prod_{s_{\alpha}}\prod_{j}\exp\bigl{(}-{\rm i}\theta^{ s_{\alpha}}_{l,j}K^{s_{\alpha}}_{j}\bigr{)}. \tag{13}\]
Evidently, Eq. (13) does not hold for all groupings of generic models. Consider, for example, for the XXZ chain. For the grouping of Eq. (6) [see Fig. 1(d)], Eq. (12) is valid, but not for the decomposition of Eq. (7) [see
Figure 1: (a) Scheme for SCom-QAOA circuits. For definiteness, the initial state is shown to be the product state \(|\uparrow\rangle^{\otimes L}\). The blue boxes represent the unitary rotations at different layers [Eqs. (8, 9)]. (b) The unitary operator, \(U_{l}\), for the Ising model with the longitudinal field [Eq. (3)]. Note that the angles for the different unitary operators are, in general, site-dependent [Eq. (10)]. (c,d) The unitary operator, \(U_{l}\) for the XXZ spin-chain. Note that while the target state can be reached using either unitary, the symmetry conserved by the quantum circuit is different [Eqs. (6, 7)]. (e) The unitary operator, \(U_{l}\) for the spin-chain Hamiltonian of Eq. (23), which for certain choices of parameters, gives rise to the tricritical Ising model (see Sec. III.3 for more details). This quantum circuit conserves the \(\mathbb{Z}_{2}\) charge conserved by the Hamiltonian of Eq. (23). Note that in panels (c,e), only the leading order terms of \(\tilde{U}^{l}_{\alpha}\) [Eq. (11)] are shown for the sake of brevity.
Fig. 1(c) for the leading order contribution]. In the latter case, a Suzuki-Trotter decomposition of a suitable order can be used to implement Eq. (11). In all cases considered in this work, the leading order term presented in Eq. (13) was sufficient to reach the desired fidelity.
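As an illustration of Eqs. (10) and (13), the brute-force statevector sketch below applies one SCom-QAOA layer for the Ising chain of Eqs. (3)-(4) with site-dependent angles on a small number of qubits. This is only a pedagogical sketch (the simulations in this work use the TEBD algorithm of TeNPy); the chosen ordering of the on-site and nearest-neighbour sublayers is one possible choice, and all function and variable names are illustrative.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(ops_by_site, L):
    # Tensor product placing the given single-site operators at their sites,
    # with identities elsewhere (site 0 is the leftmost factor).
    return reduce(np.kron, [ops_by_site.get(j, I2) for j in range(L)])

def scom_qaoa_layer(psi, theta, L):
    # One layer of Eq. (9) for the Ising chain: site-dependent on-site rotations
    # followed by nearest-neighbour XX rotations [cf. Fig. 1(b)].
    # Since K^X_j = -X_j etc. in Eq. (4), exp(-i theta K) becomes exp(+i theta X).
    for j in range(L):
        psi = expm(1j * theta["X"][j] * embed({j: X}, L)) @ psi
        psi = expm(1j * theta["Z"][j] * embed({j: Z}, L)) @ psi
    for j in range(L - 1):
        psi = expm(1j * theta["XX"][j] * embed({j: X, j + 1: X}, L)) @ psi
    return psi

# Example: L = 4 qubits, |up>^L initial state, small uniform starting angles.
L = 4
psi = np.zeros(2**L, dtype=complex)
psi[0] = 1.0
theta = {"X": 0.01 * np.ones(L), "Z": 0.01 * np.ones(L), "XX": 0.01 * np.ones(L - 1)}
psi = scom_qaoa_layer(psi, theta, L)
```

For larger chains the dense \(2^{L}\times 2^{L}\) matrices become impractical, which is why the actual computations rely on TEBD.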
There are two important features of the ansatz pQC. First, the ansatz pQC can be chosen to conserve a given symmetry of a model. For the pQC to conserve the charge \(Q\), where \([Q,H_{T}]=0\), in general,
\[\left[Q,\exp\bigl{(}-\mathrm{i}\sum_{j}\theta_{l,j}^{s_{\alpha}}K_{j}^{s_{ \alpha}}\bigr{)}\right]=0, \tag{14}\]
which is satisfied for \([Q,K_{j}^{s_{\alpha}}]=0\) for each \(s_{\alpha}\). This restricts possible decompositions of Eq. (2). Consider the two possible decompositions of the XXZ chain Hamiltonian in Eqs. (6, 7). If the pQC has to conserve the U(1) symmetry associated with \(Q_{1}=\sum_{j}Z_{j}\), only Eq. (7) is permitted. On the other hand, for the pQC to conserve the \(\mathbb{Z}_{2}\)-symmetry associated with \(Q_{2}=\prod_{j}Z_{j}\), either choice is allowed. Note that at the isotropic point, the XXZ chain has \(SU(2)\)-symmetry. A SCom-QAOA circuit can be constructed while conserving this non-abelian symmetry as well. The simplest case, where all the \(\theta_{l,j}^{s_{\alpha}}\)-s are site-independent, has been analyzed in Ref. [22]. Second, the angles \(\theta_{l,j}^{s_{\alpha}}\) are chosen to be site-dependent. This is crucial to reach non-translation-invariant target states starting with translation-invariant initial states. However, if the goal is to find a translation-invariant eigenstate of a translation-invariant Hamiltonian, the SCom-QAOA circuit ansatz can be simplified and it is sufficient to choose \(\theta_{l,j}^{s_{\alpha}}=\theta_{l}^{s_{\alpha}}\). This leads to straightforward generalization of the ansatz of Ref. [22], albeit with a different optimization procedure based on the QNG. The latter is explained below.
### Quantum Natural Gradient Optimization
The QNG-optimization procedure is described for the parameters \(\vec{\Theta}\equiv\{\theta_{l,j}^{s_{\alpha}}\}\). In contrast to conventional optimization routines like BFGS, the QNG ensures that the optimization follows the steepest descent while taking into account the geometry of the space of wavefunctions [29; 31] and can be viewed as the quantum analog of the approach based on information geometry [41]. In the quantum context, this has been considerably successful in preparing certain many-body states [23; 29]. This work generalizes the previous approaches to realize arbitrary excited states of interacting lattice models and in particular, demonstrates the qualitatively different behavior of the QNG-optimizer for gapped vs gapless systems. The relevant cost function \(\mathcal{L}(\vec{\Theta})\) to be minimized can be the expectation value of some target Hamiltonian \(H\), denoted by \(\mathcal{L}_{H}\), or the negative of the square of the overlap to a target state \(\psi_{T}\), denoted by \(\mathcal{L}_{O}\):
\[\mathcal{L}_{H}(\vec{\Theta}) =\bigl{\langle}\psi(\vec{\Theta})\bigr{|}H\bigl{|}\psi(\vec{ \Theta})\bigr{\rangle}, \tag{15}\] \[\mathcal{L}_{O}(\vec{\Theta}) =-\bigl{|}\bigl{\langle}\psi_{T}\bigl{|}\psi(\vec{\Theta})\bigr{ \rangle}\bigr{|}^{2}. \tag{16}\]
The parameters \(\vec{\Theta}\) can be iteratively obtained from the equation [29]:
\[\Theta_{p}(t+1)=\Theta_{p}(t)-\eta\sum_{q}g(\vec{\Theta})_{pq}^{-1}\cdot\frac{ \partial\mathcal{L}(\vec{\Theta})}{\partial\Theta_{q}(t)}, \tag{17}\]
where \(t\) is the iteration index and \(\eta\) is the learning rate, typically a real number between \(0\) and \(1\). The indices \(p,q\) denote the indices of the reshaped vector \(\vec{\Theta}\). Finally, the matrix \(g(\vec{\Theta})\) is the Fubini-Study metric tensor. The latter incorporates the geometry of the manifold of wavefunctions while computing the steepest gradient. This is then used during the iterations parameterized by \(\vec{\Theta}(t+1)\) and \(\vec{\Theta}(t)\)[42]. The Fubini-Study metric tensor is the real part of the quantum geometric tensor, \(G(\vec{\Theta})\):
\[G_{pq}(\vec{\Theta}) =\Bigl{\langle}\frac{\partial\psi(\vec{\Theta})}{\partial\Theta_{ p}}\Bigr{|}\frac{\partial\psi(\vec{\Theta})}{\partial\Theta_{q}}\Bigr{\rangle}\] \[-\Bigl{\langle}\frac{\partial\psi(\vec{\Theta})}{\partial\Theta _{p}}\Bigr{|}\psi(\vec{\Theta})\Bigr{\rangle}\Bigl{\langle}\psi(\vec{\Theta}) \Bigr{|}\frac{\partial\psi(\vec{\Theta})}{\partial\Theta_{q}}\Bigr{\rangle}, \tag{18}\] \[g_{pq}(\vec{\Theta}) \equiv\Re\bigl{[}G_{pq}(\vec{\Theta})\bigr{]}. \tag{19}\]
In practical computations, often the metric tensor \(g_{pq}\) is singular. In these cases, it is convenient to introduce a Tikhonov regularization parameter \(\delta\)[23]: \(g\to g+\delta\mathbb{I}\). This regularized metric tensor can then be inverted in computing \(\vec{\Theta}\)-s. Thus, the computation of \(G,\mathcal{L}\) corresponds to evaluation of multi-point correlation-functions of suitable operators for the states generated by the SCom-QAOA circuit. Explicitly,
\[G_{pq} =\bigl{\langle}\psi_{p}\bigr{|}K_{p}(U_{p}^{>})^{\dagger}U_{q}^{>}K _{q}\bigl{|}\psi_{q}\bigr{\rangle}\] \[-\bigl{\langle}\psi_{p}\bigr{|}K_{p}\bigl{|}\psi_{p}\bigr{\rangle} \bigl{\langle}\psi_{q}\bigr{|}K_{q}\bigl{|}\psi_{q}\bigr{\rangle}. \tag{20}\]
Here, \(|\psi_{p}\rangle\) is the state generated at the \(p^{\mathrm{th}}\) step. For instance, if \(\Theta_{p}=\theta_{l,j}^{s_{\alpha}}\), the state \(|\psi_{p}\rangle\) is generated by applying the quantum circuit up to the sublayer \(s_{\alpha}\) of the layer \(l\). The operator \(U_{p}^{>}\) stands for the remaining unitary rotations that evolve the state \(|\psi_{p}\rangle\) for the remaining layers and sublayers of the quantum circuit. It is easy to explicitly verify that Eq. (20) implies that the metric tensor is symmetric, as expected for a metric governing the distance between two quantum states:
\[G=G^{\dagger}\Rightarrow g=g^{T}, \tag{21}\]
where T indicates transposition. In actual computations, it is often advantageous to compute only the upper triangular and diagonal parts of \(G\), with the rest of the tensor obtained using Eq. (21). The gradient of the overlap and
the Hamiltonian cost functions are given by:
\[\frac{\partial\mathcal{L}_{H}}{\partial\Theta_{p}} =2\Re\big{\langle}\psi_{p}\big{|}iK_{p}(U_{p}^{>})^{\dagger}H\big{|} \psi(\vec{\Theta})\big{\rangle},\] \[\frac{\partial\mathcal{L}_{O}}{\partial\Theta_{p}} =-2\Re\Big{[}\big{\langle}\psi_{p}\big{|}iK_{p}(U_{p}^{>})^{ \dagger}\big{|}\psi_{T}\big{\rangle}\big{\langle}\psi_{T}\big{|}\psi_{p}\big{ \rangle}\Big{]}. \tag{22}\]
The computation of the metric tensor and the relevant cost functions can be straightforwardly implemented using a time-evolution algorithm present in a tensor network package. This is done here using the TEBD algorithm of the TeNPy package [43].
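For orientation, the QNG update of Eqs. (17)-(19) can also be prototyped directly on exact state vectors for small systems, without a tensor-network package. The sketch below is a minimal, library-agnostic illustration (not the TEBD/TeNPy implementation used here): `prepare_state` is a placeholder for the SCom-QAOA circuit acting on the initial state, the quantum geometric tensor is approximated by finite differences, and one QNG step is taken on the overlap cost of Eq. (16).

```python
# Minimal QNG step on exact state vectors (illustration only; the actual computations use TEBD).
import numpy as np

def qgt_and_grad(prepare_state, theta, psi_target, eps=1e-5):
    """Finite-difference quantum geometric tensor [Eq. (18)] and overlap-cost gradient."""
    psi0 = prepare_state(theta)
    n = len(theta)
    dpsi = []
    for p in range(n):
        tp = theta.copy(); tp[p] += eps
        dpsi.append((prepare_state(tp) - psi0) / eps)   # approximate |d psi / d theta_p>
    G = np.zeros((n, n), dtype=complex)
    for p in range(n):
        for q in range(n):
            G[p, q] = (np.vdot(dpsi[p], dpsi[q])
                       - np.vdot(dpsi[p], psi0) * np.vdot(psi0, dpsi[q]))
    ov = np.vdot(psi_target, psi0)                      # <psi_T | psi(Theta)>
    grad = np.array([-2.0 * np.real(np.vdot(psi_target, dpsi[p]) * np.conj(ov))
                     for p in range(n)])                # gradient of L_O = -|<psi_T|psi>|^2
    return np.real(G), grad                             # g = Re[G], Eq. (19)

def qng_step(theta, g, grad, eta=0.25, delta=0.01):
    """One iteration of Eq. (17) with the Tikhonov-regularized metric."""
    g_reg = g + delta * np.eye(len(theta))
    return theta - eta * np.linalg.solve(g_reg, grad)
```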
## III Results
Here, it is shown that the QNG-based optimizer for the SCom-QAOA circuits enables the generation of ground states of gapped 1D Hamiltonians with a circuit-depth that is proportional to the correlation length, while the same is proportional to the system-size for the gapless case. The two cases are demonstrated by investigating the critical Ising and tricritical Ising chains as well as the magnetic perturbation of the Ising model. While the results are presented for open boundary conditions, similar results were obtained for periodic boundary conditions which are not shown for brevity.
For the numerical computations, the overlap cost-function [Eq. (16)] was used since this was found to be more stable for targeting excited states of the different models, with the target state obtained using the DMRG technique. For both the DMRG and the TEBD computations, the maximum bond-dimension was chosen to be always higher than what was reached during any of the TEBD evolutions of the quantum circuit. This was to ensure that TEBD would faithfully capture the dynamics of a quantum simulator keeping singular values up to \(10^{-12}\). The cut-off for the desired fidelity (defined as the absolute value of the overlap) to the target state, \(|\psi_{T}\rangle\), is set to 99% in this work. This is chosen since the typical two-qubit gate-error in current noisy simulators is of that order. Similar results were obtained for higher cut-offs. The learning rate \(\eta\) was set to 0.25 and a regularization parameter \(\delta=0.01\) was used while inverting the metric \(g\) in Eq. (17). The initial choice for the angles \(\theta_{l,j}^{s_{\alpha}}\) was taken to be 0.01, but the QNG-based optimization procedure was mostly insensitive to the initial choice of angles [44].
The qubits of the circuit were initialized in the perfectly-polarized state \(|\uparrow\rangle^{\otimes L}\). Note that any other product state can be related to this state by a circuit of unit depth and, as such, modifies our results only by a depth of \(\mathcal{O}(1)\). Also, the so-called 'block-diagonal approximation' of the Fubini-Study metric \(g\) was not successful in generating the many-body states investigated in this work. This is indicative of correlations between different sublayers and different layers in the quantum circuit evolution, which is not surprising and matches the findings in Ref. [23].
### The critical Ising chain
Here, the results for the critical transverse field Ising model are shown. The relevant Hamiltonian is given in Eq. (3) with \(\lambda^{Z}=1\), \(\lambda^{X}=0\). The corresponding layer unitary \(U^{l}\) [Eq. (9)] is built out of a layer of \(R_{\text{XX}}\)-s followed by a layer of \(R_{\text{Z}}\)-s (Fig. 1, top right panel, without the layer of \(R_{\text{X}}\) rotations). For circuit parameters targeting the \(Q=\prod_{j}Z_{j}=+1\) symmetry sector, the initial state was chosen to be: \(|\psi_{0}\rangle=|\uparrow\rangle^{\otimes L}\) (\(L\) equals the system-size). When searching for the parameters to realize states in the \(Q=-1\) sector, the initial state was chosen to be \(|\psi_{0}\rangle=X_{j}|\uparrow\rangle^{\otimes L}\), where \(j\) is an integer close to \(L/2\).
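To make the layer structure concrete, the sketch below applies one such layer (a sublayer of \(R_{\rm XX}\) followed by a sublayer of \(R_{\rm Z}\), with site-dependent angles) to a small chain using exact state vectors. The rotation convention \(R_{A}(\theta)=e^{-i\theta A/2}\) and the helper names are assumptions for illustration; the convention actually used is fixed by Eq. (9).

```python
# One SCom-QAOA layer for the critical Ising chain (statevector sketch for small L).
# Assumed convention: R_A(theta) = exp(-i * theta * A / 2); the paper's convention is set by Eq. (9).
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def embed(site_ops, L):
    mats = [site_ops.get(j, I2) for j in range(L)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_layer(psi, thetas_xx, thetas_z, L):
    """Apply one layer: a sublayer of R_XX gates, then a sublayer of R_Z gates."""
    for j in range(L - 1):                                 # open boundary conditions
        K = embed({j: X, j + 1: X}, L)
        psi = expm(-0.5j * thetas_xx[j] * K) @ psi
    for j in range(L):
        K = embed({j: Z}, L)
        psi = expm(-0.5j * thetas_z[j] * K) @ psi
    return psi

L = 6
psi = np.zeros(2**L, dtype=complex)
psi[0] = 1.0                                               # |up>^{tensor L} in the Z basis
psi = ising_layer(psi, thetas_xx=0.01 * np.ones(L - 1), thetas_z=0.01 * np.ones(L), L=L)
```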
Fig. 2 shows the optimization results for the SCom-QAOA circuits for different system sizes as the number of layers is varied for the ground state. The latter occurs in the \(Q=+1\) sector. The top left panel shows the final overlap to the target state, \(O_{N}(l=N)=|\langle\psi_{T}|U|\psi_{0}\rangle|\), at the end of the \(N\) applied layers [see Eq. (8) for the definition of \(U\)]. For each choice of \(L\), the desired fidelity was reached by \(N_{c}=L/2\) layers. This is shown in the inset of the top-left panel and is reminiscent of the results for the periodic Ising chain obtained in Ref. [22]. However, note that the depth of the SCom-QAOA circuit optimized based on QNG does, in fact, depend on the desired fidelity to the target state and the spectral gap of the model, as described below.
Figure 2: Results for the case when the target state is the ground state of Eq. (3). The black dashed lines correspond to the desired fidelity of 0.99. (Top left) Variation of the fidelity to the target state \(O_{N}(l=N)=|\langle\psi_{T}|U|\psi_{0}\rangle|\) at the end of the \(N\)-layers of the SCom-QAOA circuit. The number of layers required to reach the desired fidelity grows with the system size \(L\). (Top right) Variation of \(O_{N}(l)\) as the circuit evolves through the different layers for \(N=L/2\). The overlap changes non-monotonically, as is the case for the expectation value of the Hamiltonian \(E_{N}(l)\) (bottom left) and the half-chain entanglement entropy \(S_{N}(l)\) (bottom right). The non-monotonic behavior is compatible with the number of required layers to reach desired fidelity proportional to the system-size (see discussion in the main text). For the bottom two panels, the results obtained using the SCom-QAOA circuits (circles) are compared with the DMRG results for the target state (crosses).
The top-right, bottom-left and the bottom-right panels show the behavior of the overlap to the target \(O_{N}(l)\), the expectation value of the Hamiltonian \(E_{N}(l)\), and the half-chain entanglement entropy \(S_{N}(l)\) respectively as a function of the layer index \(l\). In these three plots, \(N=L/2\). The behavior of \(O_{N}(l)\), \(E_{N}(l)\), and \(S_{N}(l)\) is non-monotonic, with the non-monotonicity being more severe for larger system-sizes. This is likely a consequence of the fact that the energy levels of a critical chain are split (to leading order) as \(1/L\). With increasing system-size, the QNG-optimization scheme, which maximizes the overlap to the target state, has a tougher job finding the optimal next iteration step since there are many nearby states. A related phenomenon occurs while searching for a gapped ground state, see Sec. III.2. The non-monotonicity of the overlap explains why the number of layers required to attain the desired fidelity scales proportionally with \(L\). The quantum circuit evolves the product state with a time-dependent inhomogeneous quench Hamiltonian. In contrast to the typical quench behavior, where \(S_{N}(l)\) between two parts of the system grows linearly and saturates to an extensive value [45; 46], here the entropy first grows to an extensive value before diminishing to \(S_{N}(l)\propto\ln L\) for the critical Ising ground state. The values of \(E_{N}(l=N)\) and \(S_{N}(l=N)\) (in circles) are compared with the results obtained using DMRG (crosses).
Fig. 3 shows the results for the seven lowest excited states for the critical Ising chain. Three of the seven are in the \(Q=+1\) sector, while the remaining are in the \(Q=-1\) sector. By choosing the initial state to be in the suitable symmetry sector, the target states were realized. This is because the relevant symmetry is conserved throughout the quantum circuit evolution. Note that the excited states have higher entropy than the ground state [47] and in general, require circuits which are deeper than the one realizing the ground state. For \(L=12\), this is noticeable for the excited states in the Q = -1 sector [indexed by \(i=2,3\), Fig. 3 (right panel)].
### The Ising chain in longitudinal field
Here, the results are shown for the critical Ising chain perturbed by a longitudinal field. The corresponding Hamiltonian is given in Eq. (3), with \(\lambda^{Z}=1,\lambda^{X}\neq 0\). In the scaling limit, the lattice model is described by the integrable \(E_{8}\) field theory [48]. However, since our interest is in finding the circuit parameters for transforming a high-energy state to the ground state of the non-integrable lattice model, the integrable characteristic of the continuum model is not relevant. The corresponding layer unitary \(U^{l}\) [Eq. (9)] is built out of a layer of \(R_{\text{XX}}\)-s followed by a layer of \(R_{\text{Z}}\) and a layer of \(R_{\text{X}}\)-s [Fig. 1(top right)].
Fig. 4 shows the results for generating the ground state of the model for \(\lambda^{Z}=1.0\), \(\lambda^{X}=0.06\). In contrast to the critical Ising chain (Fig. 2), the number of layers required to reach the desired fidelity to the target state is independent of the system-size (top left panel). The corresponding overlap \(O_{N}(l)\), energy \(E_{N}(l)\) and the entanglement entropy \(S_{N}(l)\) are shown in the top right, bottom left and the bottom right panels. In contrast to the gapless case, the overlap and energy are monotonic with the layer index \(l\). This is likely a consequence of the spectral gap separating the ground state from the rest of the spectrum. The aforementioned gap enables the QNG-optimization to find a more optimal path towards the target state compared to the gapless case (see also Fig. 5). Note that for small system-sizes, the variations are 'less monotonic'. This is to be expected since the true gapped behavior is only apparent when the system-sizes are large enough compared to the correlation length (see top right panel of Fig. 5 for values for the latter).
Fig. 5 shows the variation of the required number of layers with the parameter \(\lambda^{X}\). As the latter is decreased, the energy gap between the ground and the first excited states diminishes (recall that \(\lambda^{X}=0\) is the critical point). This leads to an increase in the required number of layers to reach the desired fidelity to the target state. The top panels show the overlaps \(O_{N}(l=N)\) and the number of layers, \(N_{c}\), required to reach the desired fidelity. The latter quantity (top right panel) is plotted as a function of the correlation length \(\xi\), which is obtained using infinite DMRG computations. The top right panel suggests a dependence \(N_{c}\propto\xi\), which is reminiscent of the gapless case result with the system-size replaced by the correlation length. Note that for \(0.04\leq\lambda^{X}\leq 0.08\), the correlation lengths are too close to each other to cause a change in the value of \(N_{c}\). Recall that the latter is integer-valued, unlike \(\xi\). The overlaps \(O_{N}(l)\) and the energies \(E_{N}(l)\) are shown in the bottom panels as a function of the layer index \(l\) for \(N=N_{c}\). As explained earlier, for smaller values of \(\lambda^{X}\), which are associated with a smaller spectral gap, \(O_{N}(l)\) and \(E_{N}(l)\) start deviating from monotonic behavior.
Figure 3: Overlap to the target state as a function of the number of layers in the \(Q=+1\) (left) and the \(Q=-1\) (right) sectors. The system size was chosen to be \(L=12\). The results are shown for the first seven excited states. The index \(i\) labels the state in a given symmetry sector, with \(Q=1,i=0\) corresponding to the ground state (see Fig. 2). The black dashed lines correspond to the desired fidelity of \(0.99\).
### The Tricritical Ising model
Next, results are presented for the ground state of a perturbed Ising model which, for a certain choice of parameters, realizes the tricritical Ising model [49]. The Hamiltonian is given by
\[H^{\rm TCI} =-\lambda^{Z}\sum_{j=1}^{L}Z_{j}-\sum_{j=1}^{L-1}X_{j}X_{j+1} \tag{23}\] \[+\lambda^{\rm ZXX}\sum_{j=1}^{L-2}\big{(}X_{j}X_{j+1}Z_{j+2}+Z_{ j}X_{j+1}X_{j+2}\big{)}. \tag{24}\]
While for \(\lambda^{\rm ZXX}=0\), \(H^{\rm TCI}\) reduces to the Hamiltonian of the ordinary transverse-field Ising model, for \(\lambda^{\rm ZXX}\approx 0.428,\lambda^{Z}=1.0\), the long-wavelength properties of this model are described by the tricritical Ising model [49]. Below, the SCom-QAOA circuits for the realization of the tricritical Ising ground state are described.
Figure 4: Optimization results for the realization of the ground state of the (gapped) Ising model with longitudinal field. The parameters chosen were: \(\lambda^{Z}=1.0,\lambda^{X}=0.06\) [Eq. (3)]. Variation of the overlap to the target state as a function of the number of layers is shown in the top left panel for different system sizes. In contrast to the critical chain, the number of layers \(N_{c}\) required to reach 99% fidelity (black dotted lines in the top panels) is independent of the system size. The variations of the overlap \(O_{N}(l)\), energy \(E_{N}(l)\) and the half-chain entanglement entropy \(S_{N}(l)\) with the number of layers are shown in the top right, bottom left and the bottom right panels respectively. The variation of the latter three quantities is much smoother and approaches a monotonic behavior as the parameter \(\lambda^{X}\) increases. This can be understood as a consequence of the spectral gap separating the target ground state from the remaining eigenstates (see Fig. 5 and the main text for discussion). Note that the half-chain entropy differs from the results obtained from DMRG computations. This is because even though the desired fidelity is reached, the entanglement entropy still differs from the actual values. We checked that increasing the cut-off for fidelity closer to 1 reduces the discrepancy between the DMRG and the SCom-QAOA computations.
Figure 5: Optimization results for \(L=20\). (Top left) Variation of the overlap as a function of the number of layers for different choices of \(\lambda^{X}\) (recall \(\lambda^{X}=0\) corresponds to the critical Ising chain). With the increase of \(\lambda^{X}\), the number of layers required to reach the target state diminishes. (Top right) Variation of the number of layers needed to reach the desired fidelity \(N_{c}\) with the correlation length \(\xi\) in the ground state is shown. The correlation length plays the role of the effective system size in the gapped case. Its values are obtained using infinite DMRG computations. Clearly, \(N_{c}\propto\xi\), barring the values \(0.04\leq\lambda^{X}\leq 0.08\), where the correlation lengths are too close together to cause a change in \(N_{c}\), which is integer-valued. (Bottom panels) Variation of the overlap \(O_{N}(l)\) (left) and energy \(E_{N}(l)\) (right) as a function of the layer index \(l\) for the different choices of \(\lambda^{X}\) when \(N=N_{c}\). As the latter increases, the variation approaches a monotonic behavior, which can be attributed to the increased spectral gap (see main text for discussion).
Following the scheme described in Sec. II, each layer unitary \(U^{l}\) is built out of a sublayer of \(R_{\rm XX}\), followed by a sublayer of \(R_{Z}\) and subsequently, a layer of the three-body interacting term \(R_{\rm ZXX}\) [see Fig. 1(e)]. Note that the different terms in the summation of Eq. (24) do not commute amongst themselves. The second-order Suzuki-Trotter decomposition was used for the sublayer of \(R_{\rm ZXX}\) operators [50]. As explained in Sec. II, the angles for the different unitary operators constituting \(U^{l}\) are site-dependent. In an actual quantum simulator, the three-body rotation would be further decomposed into one- and two-qubit gates using standard techniques (see, for example, Chap. 4 of Ref. [51]). The TEBD computations were done with a three-site unitary operator. Notice that the quantum circuit evolution conserves the \(\mathbb{Z}_{2}\) symmetry of \(H^{\rm TCI}\) associated with the operator \(\prod_{j}Z_{j}\).
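The symmetry itself is straightforward to verify at the level of the Hamiltonian. The sketch below assembles \(H^{\rm TCI}\) of Eqs. (23, 24) as a dense matrix for a small open chain and checks that it commutes with \(\prod_{j}Z_{j}\); it is a minimal NumPy illustration, not the DMRG/TEBD machinery used for the actual results.

```python
# Build H^TCI [Eqs. (23, 24)] for a small open chain and check its Z_2 symmetry.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def embed(site_ops, L):
    mats = [site_ops.get(j, I2) for j in range(L)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def h_tci(L, lam_z=1.0, lam_zxx=0.428):
    H = np.zeros((2**L, 2**L), dtype=complex)
    for j in range(L):
        H -= lam_z * embed({j: Z}, L)                          # -lambda^Z sum_j Z_j
    for j in range(L - 1):
        H -= embed({j: X, j + 1: X}, L)                        # -sum_j X_j X_{j+1}
    for j in range(L - 2):
        H += lam_zxx * (embed({j: X, j + 1: X, j + 2: Z}, L)   # +lambda^ZXX (X X Z + Z X X)
                        + embed({j: Z, j + 1: X, j + 2: X}, L))
    return H

L = 8
H = h_tci(L)
P = embed({j: Z for j in range(L)}, L)                         # Z_2 operator prod_j Z_j
print(np.allclose(H @ P - P @ H, 0.0))                         # True: H^TCI conserves the parity
```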
Fig. 6 shows the results for the realization of the ground state of \(H^{\rm TCI}\) for \(\lambda^{Z}=1,\lambda^{\rm ZXX}=0.428\). Notice again the growth of the number of required layers to attain the \(99\%\) fidelity cut-off with system-size. However, the growth is slower than in the Ising case (inset of the top left panel), with the desired fidelity achieved in \(\sim L/4\) layers. Intuitively, this can be understood as a consequence of the rapid build-up of entanglement due to the three-qubit gates, in contrast to the Ising case which had only two-qubit gates. It is possible that, in general, for critical models, the SCom-QAOA circuits lead to \(N_{c}\approx\beta L\). Here, \(\beta\) is a function of not only the desired fidelity and the critical model being investigated, but also other non-universal factors, and we leave it as a future problem to find out the full functional dependence. The variation of the overlap \(O_{N}(l)\), energy \(E_{N}(l)\) and the entanglement entropy \(S_{N}(l)\) with the layer index \(l\) are shown in the top right, bottom left and the bottom right panels. Their dependence remains non-monotonic, as for the critical Ising chain, with the non-monotonicity more severe as the system-size increases (see Sec. III.1 for discussion). The lowest few excited states in the \(Q=\pm 1\) sectors were obtained. The results are qualitatively similar to those for the critical Ising case and are not shown for brevity.
## IV Summary and outlook
To summarize, this work proposes a parametrized quantum circuit ansatz for efficient preparation of arbitrary entangled eigenstates of quantum many-body Hamiltonians. The proposed SCom-QAOA circuits are a symmetry-conserving modification of the QAOA operator ansatz. The parameters of the proposed quantum circuit are determined using the steepest gradient descent algorithm based on the Quantum Natural Gradient. The latter takes into account the curvature of the manifold of quantum states while performing gradient descent. Explicit demonstrations are provided for the critical Ising chain, the Ising model in a longitudinal field, and the tricritical Ising model. The circuit depth was found to scale linearly with the length of the chain for ground states of critical models. The same was found to be independent of the system-size and determined entirely by the finite correlation length for gapped ground states. While the results were presented for desired fidelity of \(99\%\), the SCom-QAOA schemes allow efficient preparation of higher-fidelity states as well. This was verified explicitly for a fidelity cut-off of \(99.9\%\), which led to only quantitative changes in the results. Finally, the SCom-QAOA circuits can be straightforwardly generalized to target different states without conserving any symmetry. While this did not lead to a qualitative change of the scaling of the circuit depth for the critical Ising model, this can happen for other models [52].
The proposed efficient preparation of entangled many-body states could be useful for a wide range of practical quantum algorithms. First, it could be used to probe a wider family of quench dynamics than what has been done on digital quantum simulators [53; 54; 55; 56]. Second, an approximate ground state computed using classical methods could be a starting point for a more accurate determination of the actual ground state using hybrid quantum algorithms such as the variational quantum eigensolver [17], the variational quantum simulator [57] and the eigenstate witnessing approach [58].
Figure 6: Optimization results for the ground state of \(H^{\rm TCI}\) at the tricritical Ising point [Eq. (23)]. (Top left) Variation of the overlap to the target state as a function of the number of layers. Note that the desired fidelity is reached faster (\(N_{c}\sim L/4\), see inset) than the Ising case (see Fig. 2). Intuitively, this can be understood as a consequence of the rapid build-up of entanglement resulting from the three-site unitary operations \(R_{\rm ZXX}\). The top right, bottom left and bottom right panels show the variation of the overlap \(O_{N}(l)\), energy \(E_{N}(l)\) and the half-chain entanglement entropy \(S_{N}(l)\) with the layer index \(l\). In the last three plots, \(N\) is chosen to be the number of layers which is sufficient to reach the desired fidelity. The variation of the latter three quantities remain non-monotonic as was true for the critical Ising chain (see Fig. 2 and discussion in Sec. III.1).
Before concluding, we outline two future research directions. First, the current work used the overlap cost function [Eq. (16)] to determine the circuit parameters. Naturally, this assumes that the target state is known from other techniques. Generalization to the case when the target state is unknown is straightforward using the Hamiltonian cost function [Eq. (15)]. This would be particularly useful for models which lie beyond the reach of conventional numerical techniques like the density-matrix renormalization group. A digital quantum simulator, coupled to classical optimization routines, could be used to investigate these models. In this case, the computation of the _entire_ Fubini-Study metric as well as the gradients of the cost-functions would have to be done using measurements on digital quantum simulators. Some schemes have been proposed to achieve this goal [30]. Note that the SCom-QAOA circuits can be readily implemented for the investigation of two-dimensional quantum Hamiltonians, critical or otherwise. We will report our findings on this topic in a separate work.
Second, the QNG-based optimization problem of the SCom-QAOA circuits can be viewed as a more restrictive case of the general problem of determining the circuit complexity of quantum many-body Hamiltonian ground states [35; 36; 37]. A more general solution, which would enable determining the actual circuit complexity, would involve optimization in the space of all possible unitary rotations and minimizing the distance between the desired unitary rotation and the identity [35]. While the QNG-based optimization finds an optimal path for the specific choice of the SCom-QAOA ansatz with the given cost-function, it does not rule out 'a more direct' path to the target state. The non-monotonic growth of entanglement entropy and overlap with the target state (see, for example, Fig. 2) likely indicates the existence of a more optimal circuit. A more quantitative investigation of generalizations of the SCom-QAOA circuits and comparison with Nielsen's bound could potentially help construct the optimal circuits for given target eigenstates of generic strongly-interacting Hamiltonians [59; 60; 61].
## Acknowledgements
Discussions with Sergei Lukyanov, Frank Pollmann, and Yicheng Tang are gratefully acknowledged. This work was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-SC0012704.
|
2302.07255 | Chemo-Dynamical Evolution of Galaxies | Stars are fossils that retain the history of their host galaxies. Elements
heavier than helium are created inside stars and are ejected when they die.
From the spatial distribution of elements in galaxies, it is therefore possible
to constrain the physical processes during galaxy formation and evolution. This
approach, Galactic archaeology, has been popularly used for our Milky Way
Galaxy with a vast amount of data from Gaia satellite and multi-object
spectrographs to understand the origins of sub-structures of the Milky Way.
Thanks to integral field units, this approach can also be applied to external
galaxies from nearby to distant universe with the James Webb Space Telescope.
In order to interpret these observational data, it is necessary to compare with
theoretical predictions, namely chemodynamical simulations of galaxies, which
include detailed chemical enrichment into hydrodynamical simulations from
cosmological initial conditions. These simulations can predict the evolution of
internal structures (e.g., metallicity radial gradients) as well as that of
scaling relations (e.g., the mass-metallicity relations). After explaining the
formula and assumptions, we will show some example results, and discuss future
prospects. | Chiaki Kobayashi, Philip Taylor | 2023-02-14T18:52:09Z | http://arxiv.org/abs/2302.07255v2 | # Chemo-Dynamical Evolution of Galaxies
###### Abstract
Stars are fossils that retain the history of their host galaxies. Elements heavier than helium are created inside stars and are ejected when they die. From the spatial distribution of elements in galaxies, it is therefore possible to constrain the physical processes during galaxy formation and evolution. This approach, Galactic archaeology, has been popularly used for our Milky Way Galaxy with a vast amount of data from Gaia satellite and multi-object spectrographs to understand the origins of sub-structures of the Milky Way. Thanks to integral field units, this approach can also be applied to external galaxies from nearby to distant universe with the James Webb Space Telescope. In order to interpret these observational data, it is necessary to compare with theoretical predictions, namely chemodynamical simulations of galaxies, which include detailed chemical enrichment into hydrodynamical simulations from cosmological initial conditions. These simulations can predict the evolution of internal structures (e.g., metallicity radial gradients) as well as that of scaling relations (e.g., the mass-metallicity relations). After explaining the formula and assumptions, we will show some example results, and discuss future prospects.
## Introduction
Explaining the origin of the elements is one of the scientific triumphs linking nuclear physics with astrophysics. As Fred Hoyle predicted, carbon and heavier ele |
2302.10729 | Understanding Dynamic Human-Robot Proxemics in the Case of Four-Legged
Canine-Inspired Robots | Recently, quadruped robots have been well developed with potential
applications in different areas, such as care homes, hospitals, and other
social areas. To ensure their integration in such social contexts, it is
essential to understand people's proxemic preferences around such robots. In
this paper, we designed a human-quadruped-robot interaction study (N = 32) to
investigate the effect of 1) different facing orientations and 2) the gaze of a
moving robot on human proxemic distance. Our work covers both static and
dynamic interaction scenarios. We found a statistically significant effect of
both the robot's facing direction and its gaze on preferred personal distances.
The distances humans established towards certain robot behavioral modes reflect
their attitudes, thereby guiding the design of future autonomous robots. | Xiangmin Xu, Emma Liying Li, Mohamed Khamis, Guodong Zhao, Robin Bretin | 2023-02-21T15:26:36Z | http://arxiv.org/abs/2302.10729v3 | # Understanding Dynamic Human-Robot Proxemics in the Case of Four-Legged Canine-Inspired Robots
###### Abstract
Recently, quadruped robots have been well developed with potential applications in different areas, such as care homes, hospitals and other social areas. To ensure their integration in such social contexts, it is essential to understand people's proxemic preferences around such robots. In this paper, we designed a human-quadruped-robot interaction study (N = 32) to investigate the effect of 1) different facing orientations and 2) the gaze of a moving robot on human proxemic distance. Our work covers both static and dynamic interaction scenarios. We found a statistically significant effect of both the robot's facing direction and its gaze on preferred personal distances. The distances humans established towards certain robot behavioral modes reflect their attitudes, thereby guiding the design of future autonomous robots.
Keywords: Human-Robot Interaction, Quadruped Robots, Proxemics.
## 1 Introduction
Humanoid and animal-shaped robots are approaching robust deployment in real-world practice. Unlike their predecessors, robotic arms and Unmanned Guided Vehicles (UGVs), which have already taken irreplaceable places in manufacturing, medical surgery, and delivery, these robots find their application in more particular scenarios such as health care [22], multi-terrain operation [1], and psychotherapy [41]. Some of these scenarios require a highly human-like or animal-like figure to imitate real-world interaction experience, while others have a strong need for robotic arms and legs to achieve multi-terrain mobility. In most of the above use cases, the robot has to co-exist or cooperate with humans in the same scenario. To seamlessly align these autonomous robots' behaviors with their human teammates or integrate them into human environments [36], robots must adopt appropriate proxemic behaviors around people.
At a minimum, robots should maintain a safe working distance from nearby humans, and people should know to collaborate with robots from outside these areas. These are hard limits for robots to work in any environment. Especially for robots which may severely harm people, such as robotic arms or autonomous vehicles, researchers such as Safeea [40] and Camara [9] have proposed methods to keep this baseline secured during interaction. Although physical safety is required, it is not enough for their integration into social spaces. People might perceive robots as threatening and undisciplined figures, or even obstacles, if they do not abide by the physical and psychological distancing rules of humans [36]. The space people maintain between themselves (interpersonal distance) is an important social cue. It reflects a certain degree of desired intimacy or dislike. The proxemic distance from the robot will therefore carry social information. For robots to remain friendly and reliable figures, robot designers should follow the societal psychological and physical norms of humans.
Distance has been the main focus of proxemic research in both human-human and human-robot scenarios. Takayama [42] investigated the static distance humans establish to infer people's preferred distance in face-to-face human-robot interaction. Koay [26] looked into the likability of the robot's orientation and small signaling behaviors during home service. Hirota examined participants' emotions when controlling robots at different speed levels and with different moves [20], to check the proxemic effects brought by the robot's mobility. Bretin [8] recruited participants to walk past a flying drone in VR to check the personal distances they kept from drones at different heights.
Proxemics around traditional types of robots has been investigated extensively, but there is little research on quadruped robots. Quadruped robots are a representative kind that has better vertical mobility than wheeled robots and a more living-thing-like appearance than humanoid robots or robotic arms. In this paper, we present a proxemic user study (N=32) of our canine robot in a motion capture environment. We designed a dynamic scenario in which participants pass by the moving canine robot while doing an unrelated task, and their positions over time are recorded by motion cameras for feature extraction. The goal of this experiment is to investigate the effect of different interaction postures on participants' proxemic behavior.
Figure 1: A participant interacting with our canine robot, trajectory recorded by motion capture system
## 2 Related Works
### Proxemics
#### 2.1.1 Human Proxemics
The proxemic relationship between humans has been studied for years as research into humans' mutual spatial behavior [18, 32]. In 1966, Hall [16] introduced the concept of proxemics between humans, in which the four stages of personal space zones were first established. The first purpose of maintaining a certain proxemic distance is to present different levels of relationship or intimacy during communication; according to the space people hold, these zones are described in 4 categories:
* Intimate: 6-18 inches, closer relationship.
* Personal: 1.5-4 feet, family members or close friends.
* Social: 4-12 feet, acquaintances.
* Public: 12-25 feet, public speaking situations.
Proxemics can also be read the other way: Aiello [2] later proposed hypotheses that proxemics exists because people **(1)** intend to suppress the strong arousal induced by other people's proximity [4], or **(2)** prefer to maintain more space so that they can handle potential threats [8, 11]. People's proxemic distance varies when interacting with different objects in diverse scenarios. The interpersonal distance established spontaneously and subconsciously reflects people's inner attitude towards objects in their proxemic social space, which in turn influences their behavior. In modern proxemic research, these objects often refer to robots [42, 43], which are already an irreplaceable part of the manufacturing and service [37] industries. Besides the opportunities these autonomous figures bring to modern human lives, people are also afraid of their potential threats, such as privacy issues [24] or aggressive behaviors [43] from autonomous robots.
#### 2.1.2 Human Robot Proxemics
The ever wider and deeper integration of robots into human lives has brought them into more private spaces. There are already strong needs for, and solid integration of, robots in home services [28, 44]. It is necessary for robots to understand and use human proxemic rules so that they can be well-situated and maintain a friendly figure. Mumm et al. [36] suggest that robots must be designed to follow societal norms of distancing both physically and psychologically to perform better around humans. The successful manipulation of the robot gaze acts as a key part of psychological proxemics. The static orientation, distance, and signaling of robots are also considered important perspectives [26]. The main intention of human-robot proxemics is to inform the design of socially acceptable robots: Eresha et al. [14] found that unwanted gaze in human-robot conversation distracts the human talker or puts the talker into an uncomfortable zone. These studies show that gaze is as important as interpersonal distance in human-robot proxemic research.
### Movement Capture in Proxemics
Maintained distance is treated as an important index in interaction, used to analyze how people perceive the encountered object and to investigate people's overall sensory experience of each interaction [31, 17]. It is a widely accepted method to use the minimum time-aligned distance between humans and robots as their personal distance [21, 8, 3].
Several methods already exist in indoor proxemic studies to position desired targets, such as Radio-frequency identification (RFID) [10], Virtual Reality (VR) [8], or other wearable devices [35]. Despite the convenience and mobility of the devices above, none of them can compare to the positioning precision of motion capture systems. Motion capture systems have been used in the creation of state-of-the-art virtual reality and other 3D movies due to their high precision, quick response, and high sampling rate [7, 12, 34]. Existing proxemic research with motion capture utilized the system's capability of analyzing participants' movement at all body key points [27] to extract the desired features of emotion or perception. Mead et al. [32] used a Hidden Markov Model to recognize participants' behavior based on motion data collected from a Microsoft Kinect camera. More commonly, Jakobsen et al. [23] put only position/velocity and orientation into consideration to find an efficient method of information visualization in the logic of proxemics.
In our research, the motion capture system only tracks the position and orientation of objects instead of their full-body motions. As a result, marker rigid bodies take the place of the marker skeletons that are more common in motion capturing. As shown in Figure 2, rigid bodies are formed by 4 or more markers on the same plane, with a clear pre-set pivot to label their orientation. The position and orientation information is captured at a sampling rate of 100 Hz for later trajectory reconstruction and distance extraction.
Figure 2: Rigid body markers on Spot and participants
## 3 Methodology
This experiment aims to investigate the influence of the canine robot's orientation and gaze on participants' proxemic behavior in a dynamic passing-by interaction scenario in the real world (as opposed to VR methods). A fully still condition is added as a control condition. We use an OptiTrack [38] motion capture system to record the movement trajectories of both the robot and the participant with minimum interference. Bretin et al. [8] argue that distancing is not a thoughtful decision, but rather an intuitive and straightforward process. The stop-distance method [15, 29, 42] is inappropriate in our case because its interaction pattern requires participants to think before they stop. Instead of letting participants subjectively decide which distance they want to maintain with the canine robot, we integrate the process in a more subconscious way: the participants are asked to perform an unrelated task, in the middle of which the canine robot moves and passes by the participants. The personal distance is still the minimum mutual distance between humans and robots, which can be calculated from the time-series trajectories. This motion capture method has been successfully implemented by Marquardt et al. [30] in another proxemic study.
Figure 3: Design of the experiment: 1) OptiTrack Cameras, 2) Spot Trajectory. The Spot will always move from B to A if not still. Without gaze, in forward conditions, the Spot is facing A point; in sideways conditions, the Spot is vertical to line AB, facing the participant side, and in backward conditions, the Spot is facing B point. The Spot will always face the participant in gaze conditions. 3) Participants’ original orientation. The participants will always face the dotted line direction at the beginning and move from C to D, and can choose their paths at will afterward.
### Apparatus
The experiment took place in our lab as illustrated in Figure 3.
#### 3.1.1 The Motion Capture System
The lab is equipped with an OptiTrack motion capture system, which works on the outside-in [39] tracking principle. Six motion cameras are mounted around the experiment zone to take time-aligned 2D pictures of passive markers on objects; the positions of the retroreflective markers in the 2D frames are used to calculate the real-world 3D marker positions. The Motive software turns certain shapes formed by markers into a rigid body; the markers were installed asymmetrically so that the orientation can be identified, as in Figure 2. The rigid body coordinate system is left-handed, the same as the world coordinate system. The rigid body parameters are stored in OptiTrack configurations to make them recognizable in every experiment setup. With a sufficient frame rate, the system can capture the in-situ position of the marker rigid bodies in sight. The rigid bodies' position and orientation information is sampled at a rate of 100 Hz. The position information is then multi-casted in a local network with the Robot Operating System (ROS) Virtual-Reality Peripheral Network (VRPN) communication toolkit using the UDP protocol to guarantee communication speed.
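For reference, a minimal ROS node that logs the broadcast rigid-body poses might look like the sketch below. The topic names follow the common `vrpn_client_ros` convention and are assumptions for illustration, not the exact configuration used in this study.

```python
#!/usr/bin/env python
# Minimal rospy logger for rigid-body poses broadcast over VRPN (illustrative sketch only).
# The topic names assume the usual vrpn_client_ros layout and are not verified against this setup.
import rospy
from geometry_msgs.msg import PoseStamped

poses = {"spot": [], "hat": []}

def make_callback(name):
    def cb(msg):
        t = msg.header.stamp.to_sec()
        p = msg.pose.position
        poses[name].append((t, p.x, p.y, p.z))      # store timestamped positions for later analysis
    return cb

if __name__ == "__main__":
    rospy.init_node("proxemics_logger")
    rospy.Subscriber("/vrpn_client_node/Spot/pose", PoseStamped, make_callback("spot"))
    rospy.Subscriber("/vrpn_client_node/Hat/pose", PoseStamped, make_callback("hat"))
    rospy.spin()
```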
#### 3.1.2 The Canine Robot
Our robot in this experiment is the Spot from Boston Dynamics. It comes with built-in autonomous walking, inspecting, and avoidance functions. Compared to the general model, this one is mounted with a robotic arm on its back, which gives it the appearance of a dog with the gripper as its head (see Figure 1). There is also an RGBD camera installed on the gripper for autonomous picking functions, which can also be used by the Spot to assist in vision tasks. Using the official SDK functions, the Spot gripper can point at a chosen point in space while moving, which looks like gazing since the gripper is considered the head of the dog; the canine robot thus has a semi-autonomous (manually configured) gaze at objects in space. The Spot also has 3 speed-regulation levels for walking; it is capable of moving in any direction at a fixed pace if there is no terrain obstacle in front. One of the rigid bodies created from the OptiTrack markers is fixed on the back of the Spot, as shown in Figure 2. This allows the position and orientation of the Spot to be collected at the same time. In the stationary condition, the Spot robot was positioned at equal distances from points A and B in Figure 4. In the other conditions, the Spot robot moved from B to A in Figure 4.
Figure 4: The setup in the lab corresponding to the design.
### Experiment Design
In our work, we expect the quadruped robot to move at maximum mobility where it can walk in forward, backward, and sideways orientations to accomplish different tasks. The forward and static modes are quite general in all use cases, but backward and sideways moving orientations are also essential for certain scenarios. For example, when the Spot opens a door with handle, it must drag the door back to open it on a certain side of the door [6], this is when backward moving mobility plays its role. And on a factory inspection site, when Spot is facing and scanning pipes or electric wires on a wide wall, it needs to traverse all devices on the wall by sideways moving [5]. We would also like to investigate if the gaze from the robot side can affect the dynamic proxemic distance from human side.
The study follows a 2x4 within-subjects design, in which every participant performed all conditions. There were two independent variables:
**Movement** with four conditions:
1. Moving forward (Baseline): Spot is moving from point B to point A while facing the participant,
2. Moving sideways: Spot is moving sideways from point B to point A with its head facing the pathway of the participant,
3. Moving backward: Spot is moving from point B to point A with its back facing the participant.
4. Stationary: Spot is stationary at equal distances from A and B.
**Gaze** with two conditions: a) Gaze on: Spot is gazing at the participant, and
b) Gaze off: it does not gaze, but its head is rather aligned with its body.
Combining these two variables, each participant went through 2\(\times\)4 = 8 different conditions, and the order of conditions was counterbalanced using an 8\(\times\)8 Latin square (a generic construction is sketched below). The participants were not informed of their sequence of conditions.
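For reference, a balanced Latin square of this size can be generated with the standard construction (first row 1, 2, n, 3, n-1, ..., later rows shifted by one). The snippet below is a generic illustration and not necessarily the exact square used in the study.

```python
# Generate an n x n balanced Latin square (n even), used to counterbalance condition order.
# Standard construction; the exact 8x8 square used in this study is not specified in the paper.
def balanced_latin_square(n):
    first = [0]
    k = 1
    while len(first) < n:
        first.append(k)
        if len(first) < n:
            first.append(n - k)
        k += 1
    return [[(c + r) % n for c in first] for r in range(n)]

orders = balanced_latin_square(8)   # orders[p] is the condition sequence for participant p % 8
```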
### Participants
We recruited 32 participants (17 males, 15 females) via mailing lists and word of mouth, who are mainly from STEM (Science, Technology, Engineering, and Mathematics) fields and business schools, between 19 to 37 years old (\(M=26,SD=3.57\)) with differed experience on robots. After the experiment, the participants were given a questionnaire to collect their demographics, and experience with robots.
### Procedure
At the setup stage, six OptiTrack motion cameras were mounted around the experiment zone to capture the in-situ position of the marker rigid bodies in sight. The position and orientation information of the rigid bodies were multi-casted in a local network with ROS built-in UDP communication. The origin of the OptiTrack 3D space coordinate was fixed at the floor center of the lab, then the whole equipment was set up and well-calibrated.
After testing the performance of the motion capture system, we welcomed the participants to our lab, explained the study, and asked them to read and sign the participant info sheet with the consent form. The participants later picked up and put on a rigid body hat marker from table C, as the trajectory sampler. Afterward, they were asked to retrieve a cup from the same table and to get prepared for their starting position. Meanwhile, the experiment sequence in the Latin Square is followed according to the participant number. The participants were informed that their main task is to bring the cup to table D. The Spot's movement alongside their task was also updated according to the condition.
Figure 5: Trajectories of a single participant. The orange fixed trajectory is for Spot, the other colored trajectories are for humans, and the time-synchronized minimum distances are labeled as dotted lines in the plot.
The participant should start from position C and move towards the table at position D to deliver the yellow cup, during which a non-contact interaction between the participant and the Spot was recorded by the motion capture cameras. In the non-stationary conditions, the Spot robot started moving from B to A the moment the participant started moving from C to D. Participants were told to feel free about their walking speed and choice of paths. The walking process of the robot is fully autonomous, and the participants were informed of this before the experiment. Additionally, the robot's obstacle avoidance is disabled so that Spot won't go off track. The position and orientation information from the OptiTrack system was recorded as soon as the Spot started to move. After 8 repeats, the whole trajectory data of one participant can be reconstructed in 2D as shown in Figure 5. Because of the unpredictable participant height and the fluctuation of height during walking, the Z-axis coordinate is not considered when calculating personal distance. Because the Spot is a legged canine robot with no wheels, it is unlikely to walk repeatedly along precisely the same straight route, as a result of which there are small vertical offsets in the Spot's trajectories.
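As a concrete illustration of this distance extraction, the sketch below computes the time-synchronized minimum 2D distance between the two recorded trajectories. The array layout (N\(\times\)2 arrays of x, y positions sampled at 100 Hz) is an assumption for illustration.

```python
# Personal distance as the time-synchronized minimum 2D distance (Z axis ignored).
# Assumed layout: spot_xy and person_xy are (N, 2) arrays of x, y positions sampled at 100 Hz.
import numpy as np

def personal_distance(spot_xy, person_xy, rate_hz=100.0):
    n = min(len(spot_xy), len(person_xy))       # align the two recordings frame by frame
    diff = spot_xy[:n] - person_xy[:n]
    dist = np.linalg.norm(diff, axis=1)         # frame-wise Euclidean distance in the plane
    idx = int(np.argmin(dist))
    return dist[idx], idx / rate_hz             # minimum distance and the time (s) it occurred
```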
After going through all conditions, participants filled out a final questionnaire. Besides the orientations and gaze as variables, we also included the familiarity of the participant with robots. This extra information was collected using the questionnaires. The study took 5-10 minutes and the participant was compensated with a PS for their time.
This study was approved by our ethics committee (ethics application #XXXXX3).
Footnote 3: Redacted for anonymous review
Figure 6: Distance in bar plot corresponding to Figure 5
## 4 Hypotheses
**Comparing the orientations**. The sideways orientation is the most special of the three moving conditions because it gives participants the sense that the robot is moving perpendicular to its facing direction rather than along its original path. The head of the Spot points toward the participant's trajectory, so the Spot moves orthogonally to its own orientation. People tend to maintain longer distances to reflect their distrust of the robot's potential for off-track movement. So the personal distances in sideways scenarios are expected to be longer than those of forward cases. This can also be observed from bar plots like Figure 6. Similarly, we expect the backward condition to give participants a feeling of uncertainty about a potential change of direction. The other important factor about the sideways orientation is that, in humans' daily lives, it is common for vehicles to drive forward or backward, but not usual for any robot or vehicle to move sideways, except for UGVs equipped with Mecanum Wheels [19]. So we present our first two hypotheses:
* **H1**_The personal distance in the sideways condition is higher than that in the forward condition._
* **H2**_The personal distance in the backward condition is higher than that in the forward condition._
**Comparing gaze**. According to Mumm et al. [36], gaze has a significant effect on the personal physical distance that humans maintain. The distance increases when more gaze is involved between the robot and the participant. They also found high consistency between humans' distances from the robot and their likability toward the robot: participants who disliked the robot distanced themselves significantly further when the robot increased the amount of gaze, which fits the interpersonal distancing model from Kaplan et al. [25]. This motivated the following hypothesis:
* **H3**_The personal distance will increase when gazing is induced._
**Comparing participants' experience**. Takayama et al. [42] in 2009 concluded that people with robotics experience or pet-ownership experience would position themselves closer to robots. Mumm et al. [36] also pointed out that there is a difference between males and females in physical and psychological distancing with robots. To summarize, some of the participants' personal characteristics influence the robotic proxemic distance they maintain. We also noticed that participants with experience in robot operation or development would behave differently, as they have a considerable amount of knowledge of either Spot or other autonomous robots.
* **H4**_The participants' robot experience will change their personal distances._
## 5 Results
This paper introduced motion capture devices into dynamic proxemic research to investigate the effects of the canine robot's orientation and gaze on participants' personal distances. Our research checks how the personal distances are affected by the different factors in the hypotheses using t-tests. Because of the small sample size, we performed t-tests on the different data groups. 7 out of 8 data groups passed the Shapiro-Wilk test (\(p>0.05\)), except the gaze-still condition with p = 0.0005, which will be discussed in the Discussion section.
### Gaze
For the influence of gazing in the interaction, we chose the forward movement as the target condition because the distances in forward-moving scenarios are on average the shortest, possibly indicating that humans move most at ease. Also, distances in the forward condition have relatively small standard deviations, so it is reasonable to say humans kept their movements more consistent in this scenario. In forward movement, the no-gaze condition has \(M=1.13,SD=0.31\) and the gaze condition \(M=1.32\), \(SD=0.32\). In Figure 7 (a), t-test results suggest that in the forward condition, gaze might increase the personal distance (\(N=32,statistic=-2.33,p>0.05\)).
As we have discussed in hypothesis 4, according to Mumm et al., the personal distance to robots can be user-dependent, which means every user, due to their different personality and experience with robots, would maintain different absolute distances throughout the whole experiment. To counterbalance this effect and make the measure user-independent, we normalized the data within every participant using a min-max scaling method, so that the distances can be compared on the same scale for every participant. This approach has been used in [13, 33]. The normalized result can be found in Figure 7 (b), with the no-gaze condition \(M=0.25,SD=0.30\) and the gaze condition \(M=0.54,SD=0.35\). A t-test between the two sets of normalized data also confirmed our **H3** that gaze increases the personal distance (\(statistic=-3.58,p<0.001\)).
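A minimal sketch of this per-participant min-max normalization and the subsequent comparison is given below. The data layout and the use of a paired test (`ttest_rel`) are assumptions for illustration; the paper only states that t-tests and the Shapiro-Wilk test were applied to the within-subjects data.

```python
# Per-participant min-max normalization of distances, followed by normality and t-tests.
# Assumed layout: `distances` has shape (n_participants, n_conditions); column order is hypothetical.
import numpy as np
from scipy import stats

distances = np.load("distances.npy")                 # hypothetical file, shape (32, 8)

def minmax_per_participant(d):
    lo = d.min(axis=1, keepdims=True)
    hi = d.max(axis=1, keepdims=True)
    return (d - lo) / (hi - lo)

norm = minmax_per_participant(distances)
no_gaze, gaze = norm[:, 0], norm[:, 1]               # assumed columns: forward no-gaze vs. forward gaze
print(stats.shapiro(no_gaze))                        # Shapiro-Wilk normality check
print(stats.ttest_rel(no_gaze, gaze))                # paired comparison across the 32 participants
```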
### Orientation
To verify **H1** and **H2**, we also investigated the results for the robot's moving orientations (with no gaze). In Figure 7 (c), it is noticeable from the box plot that the sideways-movement condition (\(M=1.30,SD=0.25\)) has a significantly different distribution compared to forward movement (\(M=1.13,SD=0.31\)); a t-test confirmed the hypothesis that sideways movement results in longer personal distances (\(statistic=-2.36,p<0.05\)). In terms of **H2**, the distances in the backward-movement condition (\(M=1.13,SD=0.27\)) do not present different results in the t-test (\(statistic=0.001,p>0.5\)), so no statistical conclusion can support an effect of the backward orientation on the personal distances.
Using the same method as in the gazing analysis, we normalized the distance data participant-wise in Figure 7 (d). Now we can see a more significant difference in the sideways-movement condition compared to the forward (\(statistic=-3.57,p<0.001\)) and backward (\(statistic=2.98,p<0.01\)) conditions; both results confirm **H1** that sideways moving increases personal distances. For **H2**, the t-test on the normalized groups still rejects the hypothesis (\(statistic=-0.5,p>0.5\)), from which backward moving is shown to have no significant increase or decrease in personal distance.
Figure 7: Results on the effects of gaze, orientation, and robotic experience on distance. **(a)** the personal distance in the Spot forward-facing condition, with no gaze or with gaze. **(b)** participant-wise normalized personal distance in the Spot forward-facing condition, with no gaze or with gaze. **(c)** the personal distance in 3 moving orientations and stationary, with no gaze. **(d)** participant-wise normalized personal distance in 3 moving orientations and stationary, with no gaze.
### Experience
In our questionnaire, among the 32 participants of the experiment, 12 (8 males, 4 females) are marked as having skill or experience with robots; they either have knowledge of state-of-the-art autonomous robots or working/studying experience with robotic arms, UGVs, or legged robots. All the other 20 participants are assigned to the inexperienced group. The experienced group (\(N=12,M=1.28,SD=0.33\)) has a slightly higher average distance than the inexperienced one (\(N=20,M=1.06,SD=0.28\)); the inexperienced group also showed more consistent proxemic behavior, with a much lower standard deviation. The t-test rejects our hypothesis with the p-value slightly exceeding \(0.05\) (\(statistic=2.03,p=0.0502\)), which indicates there is no significant difference in robotic proxemic distance between the two groups. But this could be because of the small sample size of experienced participants (\(N=12\)). Interestingly, in Figure 7 (e), the potential conclusion is the opposite of our hypothesis: the experienced group, on the contrary, maintained longer distances, at an average of \(1.28m\) compared to \(1.06m\) for the other group.
## 6 Discussion
### The stationary position
Before designing the experiment, our initial thought was that all moving conditions should result in longer personal distances than stationary cases. But in Figure 7 (c) we found that the stationary condition (\(M=1.33,SD=0.36\)) has an even higher average personal distance, with a smaller standard deviation, compared to the forward-moving condition (\(M=1.13,SD=0.31\)); a t-test also confirmed this conclusion (\(statistic=2.42,p<0.05\)). According to the replies in our post-experiment questionnaire, many participants reported that they found the Spot standing still in the experiment zone very suspicious and worried about its sudden movement. Since the sequence of conditions is randomized by the Latin square, most participants did not encounter the stationary condition as their first contact with the Spot, and it felt very strange for such an agile and sporty figure to stay entirely still. The only experiment data group that is not normally distributed is also the still-gaze scenario (Shapiro-Wilk test \(p<0.01\)), suggesting participants' reaction toward the uncommonly still robot is quite random. There is another major difference between canine robots and traditional wheeled UGV robots: wheeled robots look the same whether on or off as long as they are not moving, but canine robots sit on the ground when their battery is off and stand when they are on and ready. The standing posture of canine robots signals the potential to move, and this uncertainty alerts participants to stay further away. Consequently, the forward-moving condition, in which the robot's moving status is most certain, results in the smallest personal distance of all.
### The robotic experience
We made a similar assumption that more experience in robotics would change a participant's distance from the canine robot. In our experiments, although not confirmed by the statistical analysis, the average values suggest that robot experience increased the personal distances between the participants and the robot. From the questionnaire reports, we learned that experienced participants kept relatively longer distances because they were subconsciously afraid of potential bugs or failures of the autonomous algorithm, preserving more space in case of collision. Widening this distance gap further, most of the participants with no robotics experience had not seen such advanced robotic devices before; their curiosity and the friendly framing of the Spot might have drawn them closer, but the effect of the Spot's appearance is unclear.
### Limitations
Although gaze is considered a major factor in our work, only the robot's gaze is involved in the current version of the experiment. This work did not measure how much attention humans paid to the robot. It was noticeable during the experiments that some participants did not look at the Spot robot in some setups, especially when they passed each other at the midpoint of the track. Also, if the human is fully adapted to co-existing with the Spot, only very limited attention would be paid to the robot, or, even worse, the participant would start to ignore it. Part of this effect is neutralized by the Latin Square in the experiment design, but we still observed such a decline in attention at the end of some participants' experiment cycles.
The other potential factor that this work did not consider, but that could affect the proxemic distance, is environmental sound. The Spot's drive system needs cooling fans at its joints and for its processor to keep running, so even when standing the Spot makes a lot of fan noise, and the motor noise while moving only makes it worse. Participants could hear the Spot clearly from 3-5 meters away. According to some participants, the noise made the Spot, to some extent, a more intimidating figure.
## 7 Conclusion and Future Work
This research designed a dynamic human-robot proxemic scenario with a four-legged, canine-inspired robot to examine the effects of moving orientation, gaze, and personal robotics experience on the distances people maintain from the robot. We conclude that in subconscious interactions, when participants pass by a robot coming from the opposite direction, the shortest distances are maintained when the robot faces its moving direction without gazing at the passing participants. The mutual space increased when the robot was turned sideways from its moving path or gazed at the participant, because these conditions made participants insecure and uncertain about the robot's further behavior. Experience in robotics also appeared to increase
personal distances, since it added to the perceived uncertainty of the robot's status. These conclusions suggest that, within human proxemic social space, robots that move in their facing direction and gaze less can be integrated as less threatening and considerably friendlier.
Beyond the statistical results, we also formed assumptions about other potential factors in human-robot proxemics. The distinctive 'standing' posture of canine robots could enlarge personal space, as this body language expresses a ready-to-move framing and implies a chance of sudden movement. It also remains unclear how the sound and noise from the robot affected the proxemic relations in our research, and the amount of attention participants paid to the canine robot was not examined; these could be directions for future work.
|
2304.05914 | A survey on Zariski cancellation problems for noncommutative and Poisson
algebras | In this article, we discuss some recent developments of the Zariski
Cancellation Problem in the setting of noncommutative algebras and Poisson
algebras. | Hongdi Huang, Xin Tang, Xingting Wang | 2023-04-12T15:32:16Z | http://arxiv.org/abs/2304.05914v2 | # A survey on Zariski cancellation problems
###### Abstract.
In this article, we discuss some recent developments of the Zariski Cancellation Problem in the setting of noncommutative algebras and Poisson algebras.
2010 Mathematics Subject Classification: Primary 16P99, 16W99
## 1. Introduction
As remarked by Kraft in his 1995 survey [32]: "..._there is no doubt that complex affine \(n\)-space \(\mathbb{A}^{n}=\mathbb{A}^{n}_{\mathbb{C}}\) is one of the basic objects in algebraic geometry. It is therefore surprising how little is known about its geometry and its symmetries. Although there has been some remarkable progress in the last few years, many basic problems remain open._"
Among these open questions, features the "Zariski Cancellation Problem". A birational version of the cancellation problem was first raised by Zariski during the 1949 Paris Colloquium on Algebra and Theory of Numbers [54]. Throughout this paper, we work over a base field \(\Bbbk\). All the algebras are over \(\Bbbk\) and we reserve \(\otimes\) to mean \(\otimes_{\Bbbk}\) unless stated otherwise. Let \(K\) and \(K^{\prime}\) be two finitely generated fields over \(\Bbbk\). Denote by \(K(x)\) and \(K^{\prime}(x)\) the simple transcendental extensions of \(K\) and \(K^{\prime}\), respectively. Zariski asked whether the isomorphism \(K(x)\cong K^{\prime}(x)\) implies \(K\cong K^{\prime}\)? Over the years, a 'biregular' version of the cancellation problem has attracted a great deal of research attention. More precisely, let \(\mathbb{A}^{n}=\Bbbk^{n}\) be the affine space. One has the following "Zariski Cancellation Problem" (abbreviated as **ZCP**).
**Question 1.1** (Zariski Cancellation Problem).: Does an isomorphism
\[Y\times\mathbb{A}^{1}\ \cong\ \mathbb{A}^{n+1}\]
imply an isomorphism \(Y\ \cong\ \mathbb{A}^{n}\), for any affine variety \(Y\)?
Algebraically, the Zariski Cancellation Problem is asking whether an algebra isomorphism \(A[x_{1}]\cong\Bbbk[x_{1},\cdots,x_{n+1}]\) implies that \(A\) is isomorphic to \(\Bbbk[x_{1},\cdots,x_{n}]\) as \(\Bbbk\)-algebras. The Zariski Cancellation Problem has been solved for \(A=\Bbbk[x]\) in [1] and for \(A=\Bbbk[x,y]\) in [21, 48, 51]. Recently, the Zariski Cancellation Problem has been settled negatively in [27, 28] for \(\Bbbk[x_{1},\cdots,x_{n}]\) where \(n\geq 3\) and \(\operatorname{char}(\Bbbk)>0\). The Zariski Cancellation Problem remains open for \(A=\Bbbk[x_{1},\cdots,x_{n}]\) for \(n\geq 3\) in the zero characteristic case. The Zariski Cancellation Problem and its analogues have been further studied for commutative domains, see [16, 17, 18, 20, 21, 31, 45] for a sample of references. We refer the reader to the excellent survey papers [29, 30] for a complete account in the setting of commutative domains.
In this survey article, we would like to discuss some recent developments in the Zariski Cancellation Problem and its generalizations in the setting of noncommutative algebras and Poisson algebras. Note that the study of cancellation properties for noncommutative rings or algebras dates back to the early 1970s, see for example [1, 4, 10, 15, 19]. There is a vast literature on the cancellation problem for noncommutative algebras and Poisson algebras; see for instance the incomplete list [5, 8, 7, 9, 23, 24, 37, 38, 56, 57].
In general, it is natural to ask which noncommutative algebras \(A\) enjoy a similar cancellation property. If \(A\) is a finitely generated \(\Bbbk\)-algebra, then we will simply call \(A\) an _affine algebra_ over \(\Bbbk\). We denote by \(Z(A)\) the center of \(A\). The following notions are formulated in [8].
**Definition 1.2**.: [8, Definition 1.1] Let \(A\) be a \(\Bbbk\)-algebra.
1. \(A\) is said to be _cancellative_ if any \(\Bbbk\)-algebra isomorphism \(A[t]\cong B[t]\) for some \(\Bbbk\)-algebra \(B\) implies that \(A\cong B\).
2. \(A\) is said to be _strongly cancellative_ if, for each \(n\geq 1\), \(A[t_{1},\cdots,t_{n}]\cong B[t_{1},\cdots,t_{n}]\) for some algebra \(B\) implies that \(A\cong B\).
3. \(A\) is said to be _universally cancellative_ if, for every affine commutative \(\Bbbk\)-domain \(R\) such that the natural map \(\Bbbk\to R\to R/I\) is an isomorphism for some ideal \(I\subset R\) and every \(\Bbbk\)-algebra \(B\), \(A\otimes R\cong B\otimes R\) implies that \(A\cong B\).
It follows from the definition that "\(A\) is universally cancellative" implies "\(A\) is strongly cancellative" and "\(A\) is strongly cancellative" implies "\(A\) is cancellative". Moreover, the size of \(Z(A)\) seems to play a significant role in the cancellation problem as shown below.
**Theorem 1.3**.: _[_8_, Proposition 1.3]_ _Let \(A\) be a \(\Bbbk\)-algebra with center being \(\Bbbk\). Then \(A\) is universally cancellative._
This is a useful result since many noncommutative algebras have their centers equal to the base field \(\Bbbk\) and are thus universally cancellative. We have the following typical example from [8]. We refer the reader to [36] for the calculation of centers for many classes of noncommutative algebras.
**Example 1.4**.: [8, Example 1.4] (1) Let \(\Bbbk\) be a field of characteristic zero and \(A_{n}\) be the \(n\)th Weyl algebra over \(\Bbbk\). Then \(Z(A_{n})=\Bbbk\). So \(A_{n}\) is universally cancellative.
(2) Let \(\Bbbk\) be a field and \(q\in\Bbbk^{\times}\). Let \(A=\Bbbk_{q}[x_{1},\cdots,x_{n}]\) be the skew polynomial ring generated by \(x_{1},\cdots,x_{n}\) subject to the relations \(x_{j}x_{i}=qx_{i}x_{j}\) for all \(1\leq i<j\leq n\). If \(n\geq 2\) and \(q\) is not a root of unity, then \(Z(A)=\Bbbk\). So \(A\) is universally cancellative.
On the other hand, the centers of many PI noncommutative algebras are strictly larger than the base field \(\Bbbk\). Thus novel ideas are needed to address the cancellation problem for noncommutative algebras. We will discuss more about the cancellation problem for noncommutative algebras, with a focus on many important families of Artin-Schelter regular algebras in Section 2, where we will also address the cancellation problem for non-domains. In Section 3, we will discuss certain generalized cancellation problems such as Morita and skew cancellations. Section 4 is devoted to the Zariski Cancellation Problem for Poisson algebras. We will employ the notion of the _Gelfand-Kirillov (GK) dimension_ instead of the classical Krull dimension. We refer the reader to [33] for the definition and basic properties of the
GK-dimension. It is well known that the GK-dimension of any affine commutative \(\Bbbk\)-algebra is always equal to its Krull dimension.
### Acknowledgements
Some part of this paper was formulated at the conference "Recent Advances and New Directions in the Interplay of Noncommutative Algebra and Geometry" in June 2022 and at the BIRS Workshop on "Noncommutative Geometry and Noncommutative Invariant Theory" in September 2022. The authors want to thank the University of Washington and the Banff International Research Station for their hospitality and support. Wang was partially supported by Simons collaboration grant #688403 and AFOSR grant FA9550-22-1-0272.
## 2. Zariski Cancellation for Noncommutative Algebras
In this section, we will discuss the cancellative properties for noncommutative algebras. Among them is an important family called Artin-Schelter (AS) regular algebras, which are considered as noncommutative analogues of polynomial algebras. We refer the reader to [3] and the references therein for the definition and basic properties of AS-regular algebras. AS-regular algebras should serve as important testing examples for the Zariski Cancellation Problem.
There are two crucial ingredients in the treatment of ZCP for noncommutative algebras, namely the _Makar-Limanov invariant_ and the _discriminant_. The Makar-Limanov invariant was originally introduced by Makar-Limanov [45] and has become a very useful invariant in commutative algebra. It was first used in [8] to solve rigidity, automorphism, isomorphism, and embedding problems for several classes of noncommutative algebras. In the following three subsections, we state the cancellativity results according to the GK-dimension of an algebra.
### Definitions
In this subsection, we list some basic definitions and terminologies. We recall the definition of the Makar-Limanov invariant and its variations from [8], see also [46]. Let \(A\) be a \(\Bbbk\)-algebra and \(\operatorname{Der}(A)\) be the collection of all \(\Bbbk\)-linear derivations of \(A\). Denote by \(\operatorname{LND}(A)=\{\delta\in\operatorname{Der}(A)\mid\delta\text{ is locally nilpotent}\}\). When \(\operatorname{char}(\Bbbk)>0\), one needs to use higher derivations. We denote by \(\operatorname{LND}^{H}(A)\) the set of all locally nilpotent higher derivations of \(A\) and \(\operatorname{LND}^{I}(A)\) the set of locally nilpotent iterative higher derivations of \(A\). We refer the reader to [8, 52, 34] for more details on higher derivations. We have the following definition of rigidity in terms of derivations. Let \({}^{*}\) be either blank, \({}^{H}\), or \({}^{I}\).
**Definition 2.1**.: [8, Definition 2.3] Let \(A\) be an algebra over \(\Bbbk\).
1. The _Makar-Limanov\({}^{*}\) invariant_ of \(A\) is defined to be \[\operatorname{ML}^{*}(A)=\bigcap_{\delta\in\operatorname{LND}^{*}(A)}\ker(\delta).\] This means that we have the original \(\operatorname{ML}(A)\), as well as \(\operatorname{ML}^{H}(A)\) and \(\operatorname{ML}^{I}(A)\); a short worked example is given after this definition.
2. We say that \(A\) is \(\operatorname{LND}^{*}\)-rigid if \(\operatorname{ML}^{*}(A)=A\) or, equivalently, \(\operatorname{LND}^{*}(A)=\{0\}\).
3. \(A\) is called _strongly_\(\operatorname{LND}^{*}\)-rigid if \(\operatorname{ML}^{*}(A[t_{1},\cdots,t_{n}])=A\), for all \(n\geq 1\).
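As a quick illustration of Definition 2.1 (a standard fact, not quoted from the references), take \(A=\Bbbk[x]\) over a field \(\Bbbk\) of characteristic zero. The derivation \(\partial/\partial x\) is locally nilpotent with kernel \(\Bbbk\), and every \(\Bbbk\)-linear derivation vanishes on \(\Bbbk\), so
\[\Bbbk\subseteq\operatorname{ML}(\Bbbk[x])\subseteq\ker(\partial/\partial x)=\Bbbk,\]
whence \(\operatorname{ML}(\Bbbk[x])=\Bbbk\); in particular, \(\Bbbk[x]\) is not \(\operatorname{LND}\)-rigid.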
The Makar-Limanov invariant of an algebra \(A\) plays a critical role in controlling the cancellation property for \(A\). Indeed, we have the following result from [8].
**Theorem 2.2**.: _[_8_, Theorem 3.3]_ _Let \(A\) be an affine \(\Bbbk\)-domain of finite GK-dimension. When \(*\) is blank we further assume that \(A\) contains \(\mathbb{Z}\)._
1. _If_ \(A\) _is strongly_ \(\operatorname{LND}^{*}\)_-rigid, then_ \(A\) _is strongly cancellative._
2. _If_ \(\operatorname{ML}^{*}(A[t])=A\)_, then_ \(A\) _is cancellative._
Therefore, to establish the cancellation property for an algebra \(A\), it suffices to prove that the algebra \(A\) is \(\operatorname{LND}^{*}\)-rigid. In order to achieve that goal, another invariant called _discriminant_ is introduced for any noncommutative algebra that is module-finite over its center.
**Definition 2.3**.: [12, Definition 1.3] Let \(A\) be a \(\Bbbk\)-algebra that is a finitely generated free module over its center \(Z\) of rank \(w\). Let \(\{z_{1},\ldots,z_{w}\}\) be a \(Z\)-basis of \(A\) and let \(\operatorname{tr}:A\to Z\) be a trace map. Then the _\(w\)-discriminant_ of \(A\) over \(Z\) is defined to be
\[d_{w}(A/Z):=_{Z^{\times}}\det(\operatorname{tr}(z_{i}z_{j}))_{w\times w}.\]
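To see the definition at work, consider the quantum plane at \(q=-1\) (our worked computation with the regular trace, not quoted from [12]): let \(A=\Bbbk_{-1}[x,y]\) with \(\operatorname{char}(\Bbbk)\neq 2\), so that \(Z=\Bbbk[x^{2},y^{2}]\) and \(A\) is free over \(Z\) with basis \(\{1,x,y,xy\}\). The trace pairing matrix is diagonal,
\[\big(\operatorname{tr}(z_{i}z_{j})\big)=\operatorname{diag}\big(4,\;4x^{2},\;4y^{2},\;-4x^{2}y^{2}\big),\]
so \(d_{4}(A/Z)=_{Z^{\times}}x^{4}y^{4}=_{Z^{\times}}(xy-yx)^{4}\), in the same spirit as Example 2.4 below.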
It is shown in [8] that certain properties such as being dominating and effective of the discriminant control the \(\operatorname{LND}^{*}\)-rigidity. We refer the reader to [12, 13, 8] for the definition of the notions of effectiveness and dominance of an element in an algebra and their relations to automorphism groups.
**Example 2.4**.: [12, Example 5.6] Suppose \(\operatorname{char}(\Bbbk)\neq 2\). Consider the following algebra
\[A=\Bbbk\langle x,y\rangle/(x^{2}y-yx^{2},xy^{2}-y^{2}x,x^{6}-y^{2}).\]
A computation shows that the center \(Z\) of \(A\) is the polynomial ring generated by \(x^{2}\) and \(xy+yx\) and its \(4\)-discriminant is given by \(d_{4}(A/Z)=(xy-yx)^{4}\) which is both dominating and effective.
### Algebras of GK-dimension one or two
It is well known that the coordinate ring of an affine curve is cancellative [1]. In light of this result, one is led to consider the cancellativity for every affine prime \(\Bbbk\)-algebra of GK-dimension one. Many nice properties of algebras of GK-dimension one are investigated in [55], which are useful to characterize the cancellativity. We remark that the cancellativity is relative to the base field \(\Bbbk\) and its GK-dimension.
For an algebra \(A\) of GK-dimension one, some affirmative answers to the Zariski cancellation problem from [37] and [5] are listed in the following theorem.
**Theorem 2.5**.: _Let \(A\) be an affine \(\Bbbk\)-algebra of GK-dimension one._
1. [37, Theorem 0.6] _If_ \(\Bbbk\) _is algebraically closed and_ \(A\) _is prime, then_ \(A\) _is cancellative._
2. [5, Theorem 1.1] _If_ \(\Bbbk\) _is of characteristic zero and_ \(A\) _is a domain, then_ \(A\) _is cancellative._
Notice that in the above theorem, for part (a) the authors invoke Tsen's theorem, which requires the base field \(\Bbbk\) to be algebraically closed. Part (b), on the other hand, is somewhat orthogonal to part (a), since domains of GK-dimension one over algebraically closed fields are commutative by an application of Tsen's theorem to a result of Small and Warfield [55]. Hence the only case of part (b) covered by (a) is when the algebra is commutative, which was previously known from the result of Abhyankar-Eakin-Heinzer [1].
We remark that the base field \(\Bbbk\) indeed plays an important role here. As mentioned before, \(\Bbbk[x_{1},\cdots,x_{n}]\) with \(n\geq 3\) is not cancellative when \(\operatorname{char}(\Bbbk)>0\) by [27, 28]. For noncommutative \(\Bbbk\)-algebras, where \(\Bbbk\) has positive characteristic and is not algebraically closed, a family of non-cancellative algebras is constructed in [5] as below.
**Example 2.6**.: Let \(p\) be a prime, and let \(K=\mathbb{F}_{p}(x_{1},\ldots,x_{p^{2}-1})\). Let \(k=\mathbb{F}_{p}(x_{1}^{p},\ldots,x_{p^{2}-1}^{p})\) and we let \(\delta\) be the \(k\)-linear derivation of \(K\) given by \(\delta(x_{i})=x_{i+1}\) for \(i=1,\ldots,p^{2}-1\), where we take \(x_{p^{2}}=x_{1}\). Since \(k\) has characteristic \(p>0\), \(\delta^{p^{i}}\) is a \(k\)-linear derivation for every \(i\geq 0\), and since \(\delta^{p^{2}}(x_{i})=\delta(x_{i})=x_{i+1}\) for \(i=1,\ldots,p^{2}-1\), \(\delta^{p^{j+2}}=\delta^{p^{j}}\) for every \(j\geq 0\). We let \(\delta^{\prime}:=\delta^{p}\), which as we have just remarked is a \(k\)-linear derivation of \(K\). We let \(A=K[x;\delta]\) and \(B=K[x^{\prime};\delta^{\prime}]\). Since \(\operatorname{ad}_{u}^{p}=\operatorname{ad}_{u^{p}}\) for any element \(u\) in a ring of characteristic \(p\), we have \(z:=x^{p^{2}}-x\) and \(z^{\prime}:=(x^{\prime})^{p^{2}}-x^{\prime}\) are central by the above remarks. It is easy to check that \(A\) and \(B\) have Gelfand-Kirillov dimension one, but \(A\not\cong B\), and \(A[t]\cong B[t^{\prime}]\).
It is worth pointing out that the above examples indicate the necessity of the condition that \(\Bbbk\) be algebraically closed in the positive characteristic case. Next, we summarize the results on cancellativity for algebras of GK-dimension two.
**Theorem 2.7**.: _[_8_, Theorem 0.5]__. Let \(\Bbbk\) be an algebraically closed field of characteristic zero. Let \(A\) be an affine \(\Bbbk\)-domain of GK-dimension two. If \(A\) is not commutative, then \(A\) is cancellative._
Note that, indeed, there are examples of commutative affine domains of GK-dimension two which are not cancellative [18, 20, 21, 31]. We quote some counter-examples from [18] below.
**Example 2.8**.:
1. Let \(n\geq 1\) and let \(B_{n}\) be the coordinate ring of the surface \(x^{n}y=z^{2}-1\) over \(\mathbb{C}.\) Then \(B_{i}\not\cong B_{j}\) if \(i\neq j\), but \(B_{i}[t]\cong B_{j}[t]\) for all \(i,j\geq 1\). Therefore, all the \(B_{n}\)'s are not cancellative.
2. Let \(A\) and \(B\) be the affine coordinate rings of the surfaces \(xy-z^{2}+1=0\) and \(x^{2}y-z^{2}+1=0\). We have \(A\not\cong B\) while \(A[t]\cong B[t]\).
### Algebras with 'small' center or higher GK-dimension
As shown in Theorem 1.3, the center \(Z(A)\) of an algebra \(A\) provides some control on the cancellation property of \(A\). Intuitively, if \(A\) has a smaller center, then \(A\) is closer to being cancellative. So it is natural to ask whether cancellativity of the center implies that of the algebra itself.
**Question 2.9**.: _[_37_, Question 0.1]__[_5_, Conjecture 2.1]_ _Let \(A\) be an affine noetherian domain. Suppose that \(Z(A)\) is affine and cancellative. Is \(A\) cancellative? In particular, if the center of \(A\) is finite dimensional over \(\Bbbk\), is \(A\) cancellative?_
In [37], the cancellation property was proved for non-domain noncommutative algebras using a generalization of ideas from [8], relating the Makar-Limanov invariant and LND-rigidity to their 'small' centers, namely via the \(\operatorname{ML}_{Z}\)-invariant and the notion of \(\operatorname{LND}_{Z}\)-rigidity.
**Theorem 2.10**.: _The following algebras \(A\) are cancellative:_
1. _(__[_37_, Theorem 0.2]__)_ \(A\) _is strongly Hopfian with artinian center._
2. (__[_37_, Corollary 0.3]__)_ \(A\) _is left (or right) artinian; in particular,_ \(A\) _is finite-dimensional over a base field_ \(\Bbbk\)_._
3. (__[_37_, Corollary 0.3]__)_ \(A\) _is the path algebra of a finite quiver._
Recall that an algebra \(A\) is called _Hopfian_ if any \(\Bbbk\)-algebra epimorphism from \(A\) to itself is an automorphism. Moreover, we say \(A\) is _strongly Hopfian_ if \(A[t_{1},\cdots,t_{n}]\) is Hopfian for any \(n\geq 0\). (see [37, Definition 3.4]). As a consequence, the following
three large classes of strongly Hopfian algebras are Zariski cancellative once their centers are artinian.
**Example 2.11**.: _[_37_, Lemma 3.5]_ _The following algebras are strongly Hopfian._
1. _Left or right noetherian algebras._
2. _Locally finite_ \(\mathbb{N}\)_-graded affine_ \(\Bbbk\)_-algebras._
3. _Prime affine_ \(\Bbbk\)_-algebras satisfying a polynomial identity._
One may also consider the cancellation problem in the category of graded \(\Bbbk\)-algebras. An isomorphism lemma is established for graded algebras in [7] stating that for any two connected graded \(\Bbbk\)-algebras that are finitely generated in degree one, if they are isomorphic as ungraded algebras, then they must be isomorphic as graded algebras. As an application, we have the following result.
**Theorem 2.12** ([7]).: _Let \(\Omega\) be the class of connected graded \(\Bbbk\)-algebras finitely generated in degree one. If \(A\in\Omega\) satisfies \(Z(A)\cap A_{1}=\{0\},\) then \(A\) is cancellative within \(\Omega\)._
The following results are proved in [10]; they indicate how the center of an algebra affects its cancellativity. The properties of being _(strongly) retractable_ and the weaker _(strongly) detectable_ are introduced in [37] for non-domain noncommutative algebras, parallel to the discriminant computation method. Recall that _the nilpotent radical_ of a ring \(R\) is defined to be the intersection of all prime ideals in \(R\).
**Theorem 2.13**.: _Let \(R\) be a ring with center \(Z\) and \(N\) be the nilpotent radical of \(R\). In each of the following cases, \(R\) is strongly cancellative:_
1. \(Z/N\) _is strongly retractable._
2. \(Z\) _has Krull dimension zero._
3. \(Z\) _is a finite direct sum of local rings._
For arbitrary affine \(\Bbbk\)-domain of finite GK-dimension, Theorem 2.2 provides a general approach to the Zariski cancellation problem using LND-rigidity. And discriminant computation so far is the most practical way to achieve that goal in the PI case.
**Theorem 2.14**.: _[_8_, Theorem 5.2]_ _Let \(A\) be a PI \(\Bbbk\)-domain. Suppose the \(w\)-discriminant of \(A\) over its center is effective for some \(w\). Then \(A\) is strongly \(\operatorname{LND}^{H}\)-rigid. As a consequence, \(A\) is strongly cancellative._
The above theorem can be applied to several important families of noncommutative PI algebras.
**Example 2.15**.: _[_8_, Theorem 0.8]_ _Let \(A\) be a finite tensor product of the skew polynomial rings \(\Bbbk_{q}[x_{1},\cdots,x_{n}]\) where \(n\) is even and \(q\in\Bbbk\setminus\{0,1\}\) is a root of unity. Then \(A\) is LND-rigid. As a consequence, \(A\) is cancellative._
In the example of the skew polynomial rings \(\Bbbk_{q}[x_{1},\cdots,x_{n}]\), the case of even \(n\) is covered above; we ask the following question when \(n\) is odd.
**Question 2.16**.: _[_56_, Question 0.8]__. Let \(q\in\Bbbk\setminus\{0,1\}\) be a root of unity. Is the skew polynomial ring \(\Bbbk_{q}[x_{1},\cdots,x_{n}]\) cancellative for \(n\) odd and \(n\geq 3\)?_
However, it is very challenging to calculate the discriminants. Even if the discriminant exists and can be computed, it is usually difficult to verify the dominating property or effectiveness. Therefore, it is important to search for other means to prove the LND-rigidity for a given algebra. Via the approach of _Nakayama automorphisms_, the cancellation problem is considered for certain special classes of non-PI AS-regular algebras of GK-dimension \(3\) in [39]. In particular, in [39, Corollary 0.9] many examples of cancellative algebras \(A\) of GK-dimension no greater than \(3\) are given.
The cancellation problem is further considered for noetherian connected graded AS-regular algebras of GK-dimension \(3\) or higher in [56]. The following more general results are proved in [56].
**Theorem 2.17**.: _[_56_, Theorem 0.2 and 0.3]_ _Suppose \(\mathrm{char}(\Bbbk)=0\). Let \(A\) be a noetherian AS-regular algebra generated in degree \(1\). If either of the following sets of hypotheses hold, then \(A\) is cancellative:_
1. \(\mathrm{gl.dim}(A)=3\) _and_ \(A\) _is not PI._
2. \(\mathrm{GKdim}(Z(A))\leq 1\)_, and_ \(\mathrm{gl.dim}(A/(t))=\infty\) _for every homogeneous central element_ \(t\in Z(A)\) _of positive degree._
The following conjecture is presented in [56].
**Conjecture 2.18**.: _[_56_, Conjecture 0.4]_ _Suppose \(\mathrm{char}(\Bbbk)=0\). Let \(A\) be a noetherian finitely generated prime algebra._
1. _If_ \(\mathrm{GKdim}(Z(A))\leq 1\)_, then_ \(A\) _is cancellative._
2. _If_ \(\mathrm{GKdim}(A)=3\) _and_ \(A\) _is not PI, then_ \(A\) _is cancellative._
Regarding Theorem 2.17, Conjecture 2.18 is just saying that the global dimension hypotheses can be dropped when \(A\) is finitely generated prime (or replaced by \(\mathrm{GKdim}(A)=3\) in part (2)). This further leads to the following natural question from the perspective of global or Krull dimension.
**Question 2.19**.: _[_37_, Question 0.8]_ _Let \(A\) be a \(\Bbbk\)-algebra of global dimension one (respectively, Krull dimension one). Is then \(A\) cancellative?_
## 3. Morita and Skew Cancellations
It is natural to consider other generalizations of the Zariski cancellation problem. Indeed, one may consider the categorical versions of the ZCP. The notion of cancellation for abelian categories and derived categories are initiated in [38]. For any algebra \(A\), let \(M(A)\) denote the category of the right \(A\)-modules and \(D(A)\) denote the derived category of right \(A\)-modules. The following definitions are proposed in [38].
**Definition 3.1**.: Let \(A\) be any algebra.
1. [38, Definitions 0.1] We say \(A\) is _Morita cancellative_ if the statement that \(M(A[t])\) is equivalent to \(M(B[t])\) for another algebra \(B\) implies that \(M(A)\) is equivalent to \(M(B)\).
2. [38, Definitions 0.2] We say \(A\) is _derived cancellative_ if the statement that \(D(A[t])\) is triangulated equivalent to \(D(B[t])\) for an algebra \(B\) implies that \(D(A)\) is triangulated equivalent to \(D(B)\).
From Morita theory, any two Morita equivalent algebras must have isomorphic centers. This suggests that many techniques used in the Zariski cancellation problem for non-domain noncommutative algebras, based on their centers, can be applied to the Morita cancellation problem. For example, according to [38, Theorem 0.7], a commutative domain is cancellative if and only if it is Morita cancellative, if and only if it is derived cancellative.
Moreover, a representation-theoretical version of the discriminant was introduced in [38] in terms of the module category, which is well-behaved under Morita equivalence. As a consequence, the relative \(\mathrm{ML}_{Z}\)-invariant and the notion of \(\mathrm{LND}_{Z}\)-rigidity with respect to the center \(Z\) were extended to element-wise versions with respect to a representation-theoretical discriminant. Using these new ideas, we have the Morita cancellation property for the following noncommutative algebras.
**Theorem 3.2**.: _The following algebras are Morita cancellative._
1. _(__[_38_, Theorem 0.3]__) Every strongly Hopfian algebra with artinian center._
2. _(__[_38_, Theorem 0.5]__) The path algebra_ \(\Bbbk\,Q\) _for every finite quiver_ \(Q\)_._
3. _(__[_38_, Theorem 0.6]__) Every affine prime_ \(\Bbbk\)_-algebra of GK-dimension one over an algebraically closed field_ \(\Bbbk\)_._
4. _(__[_38_, Corollary 0.8]__) Every non-PI Sklyanin algebra of global dimension three._
The Morita cancellation is further studied in [57], where Morita version of the universally cancellative property is proposed in [57, Definition 0.3] as a natural combination of being Morita cancellative and universally cancellative.
As a Morita version of [8, Proposition 1.3], one has the following result, which once again demonstrates the control of the center of \(A\) over its cancellation property.
**Theorem 3.3**.: _[_57_, Theorem 0.4]_ _Let \(A\) be an algebra with center \(Z(A)\) being the base field \(\Bbbk\). Then \(A\) is universally Morita cancellative._
As both a Morita version and a strengthened version of a partial combination of [37, Theorem 4.1] with [37, Theorem 4.2], the following result is proved in [57]. Note that _strong Morita cancellation_ can be defined analogously to strong cancellation. We denote the nilradical of an algebra \(A\) by \(N(A)\).
**Theorem 3.4**.: _[_57_, Theorem 0.5]_ _Let \(A\) be an algebra with center \(Z\) such that either \(Z\) or \(Z/N(Z)\) is strongly retractable_ (_respectively, strongly detectable_)_, then \(A\) is strongly Morita cancellative._
We remark that the hypotheses of being "strongly Hopfian" in the previous results such as [37, Theorem 0.2, Theorem 4.2] and [38, Theorem 0.3, Lemma 3.6, Theorem 4.2(2), Corollary 4.3, Corollary 7.3] are not needed. So the above theorem provides a method to study the Morita Zariski cancellation problem for noncommutative algebras via their centers (e.g., [37, Question 0.1]). Recall that a commutative algebra is called _von Neumann regular_ if it is reduced and has Krull dimension zero. One has the following result.
**Corollary 3.5**.: _[_57_, Corollary 0.6]_ _Let \(A\) be an algebra with center \(Z\)._
1. _If_ \(Z/N(Z)\) _is generated by a set of units of_ \(Z/N(Z)\)_, then_ \(Z\) _and_ \(A\) _are strongly cancellative and strongly Morita cancellative._
2. _If_ \(Z/N(Z)\) _is a von Neumann regular algebra, then_ \(Z\) _and_ \(A\) _are strongly cancellative and strongly Morita cancellative._
3. _If_ \(Z\) _is a finite direct sum of local algebras, then_ \(Z\) _and_ \(A\) _are strongly cancellative and strongly Morita cancellative._
The problem of skew cancellations was considered in [2] and has been recently revisited in [5, 9]. Let \(A\) be a \(\Bbbk\)-algebra. Let \(\sigma\) be a \(\Bbbk\)-algebra automorphism of \(A\) and \(\delta\) be a \(\sigma\)-derivation of \(A\). Then one can form the Ore extension, denoted by \(A[t;\sigma,\delta]\), which shares many nice properties with the polynomial extension \(A[t]\). An iterated Ore extension of an algebra \(A\) is of the form
\[A[t_{1};\sigma_{1},\delta_{1}][t_{2};\sigma_{2},\delta_{2}]\cdots[t_{n};\sigma_{n},\delta_{n}],\]
where \(\sigma_{i}\) is an algebra automorphism of \(A_{i-1}:=A[t_{1};\sigma_{1},\delta_{1}]\cdots[t_{i-1};\sigma_{i-1},\delta_{i- 1}]\) and \(\delta_{i}\) is a \(\sigma_{i}\)-derivation of \(A_{i-1}\). The reader is referred to [47, Chapter 1] for more details, see also [26].
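For instance (a standard example, included here only as an illustration): in a single Ore extension \(A[t;\sigma,\delta]\) the defining relation is \(ta=\sigma(a)t+\delta(a)\) for all \(a\in A\). Taking \(A=\Bbbk[x]\), \(\sigma=\operatorname{Id}\) and \(\delta=d/dx\) yields the first Weyl algebra \(A_{1}=\Bbbk[x][t;d/dx]\), where the relation reads \(tx-xt=\delta(x)=1\).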
**Definition 3.6**.: _[_57_, Definition 0.7 and 0.8]_ Let \(A\) be an algebra.
1. We say \(A\) is _skew cancellative_ if any isomorphism of algebras (3.6.1) \[A[t;\sigma,\delta]\cong A^{\prime}[t^{\prime};\sigma^{\prime},\delta^{\prime}]\] for another algebra \(A^{\prime}\), implies an isomorphism of algebras \[A\cong A^{\prime}.\]
2. We say \(A\) is _\(\sigma\)-cancellative_ if it is skew cancellative for Ore extensions (3.6.1) with \(\delta=0\) and \(\delta^{\prime}=0\).
3. We say \(A\) is _\(\delta\)-cancellative_ if it is skew cancellative for Ore extensions (3.6.1) with \(\sigma=\operatorname{Id}_{A}\) and \(\sigma^{\prime}=\operatorname{Id}_{A^{\prime}}\).
4. We say \(A\) is _\(\sigma\)-algebraically cancellative_ if it is skew cancellative for Ore extensions (3.6.1) with locally algebraic \(\sigma\) and \(\sigma^{\prime}\).
Moreover, we can define \(A\) to be _strongly skew cancellative_ if
\[A[t_{1};\sigma_{1},\delta_{1}][t_{2};\sigma_{2},\delta_{2}]\cdots[t_{n};\sigma _{n},\delta_{n}]\cong A^{\prime}[t^{\prime}_{1};\sigma^{\prime}_{1},\delta^{ \prime}_{1}][t^{\prime}_{2};\sigma^{\prime}_{2},\delta^{\prime}_{2}]\cdots[t^ {\prime}_{n};\sigma^{\prime}_{n},\delta^{\prime}_{n}]\]
implies that \(A\cong A^{\prime}\) for any \(n\geq 1\).
Skew cancellation has appeared earlier in different notions. Given two Ore extensions \(A[t;\delta_{1}]\) and \(B[t;\delta_{2}]\), \(A\) is said to be _Ore invariant_ if the isomorphism \(A[t;\delta_{1}]\cong B[t;\delta_{2}]\) implies \(A\cong B\). When, moreover, the isomorphism between the Ore extensions carries \(A\) onto \(B\), then \(A\) is said to be _strongly Ore invariant_[2]. In [2], it is proved that certain abelian regular rings are strongly Ore invariant, and regular self-injective PI rings with no \(\mathbb{Z}\)-torsion are Ore invariant.
In [9], it is proved that if the skew polynomial ring \(R[t;\delta_{1}]\) and \(\Bbbk[x][t;\delta_{2}]\) are isomorphic and \(\delta_{2}(x)\in\Bbbk[x]\) has degree at least one, then \(R\cong\Bbbk[x]\). This result is generalized in [5] as follows.
**Theorem 3.7**.: _[_5_, Theorem 1.2]_ _Let \(A\) be an affine commutative \(\Bbbk\)-domain of Krull dimension one. Then we have the following._
1. \(A\) _is_ \(\sigma\)_-cancellative._
2. _If_ \(\Bbbk\) _has characteristic zero, then_ \(A\) _is_ \(\delta\)_-cancellative._
However, the question whether the cancellation property holds for any skew polynomial extensions of mixed type with coefficient rings being domains of Krull dimension one remains as an unresolved one in [5].
**Question 3.8**.: _[_5_, Question 5.7]_ _Let \(R\) be an affine commutative \(\Bbbk\)-domain of Krull dimension one. Is any Ore extension \(R[x;\sigma,\delta]\) skew cancellative?_
Skew cancellation is further studied in [57] in terms of iterated Ore extensions and Ore extensions of special types. Moreover, an automorphism \(\sigma\) of \(A\) is called _locally algebraic_ if every finite dimensional subspace of \(A\) is contained in a \(\sigma\)-stable finite dimensional subspace of \(A\)[60]. It is obvious that the identity map is locally algebraic.
There are two new tools used in the skew cancellation problem. The first tool is the notion of divisor subalgebra, which was introduced in [14] to solve Zariski cancellation and automorphism problems for certain noncommutative algebras. Let \(A\) be a \(\Bbbk\)-domain, and \(F\) be any subset of \(A\). We denote by \(Sw(F)\) the set of \(g\in A\) such that \(f=agb\) for some \(a,b\in A\) and \(0\neq f\in F\). That is, \(Sw(F)\) is the set consisting of all the subwords of the elements in \(F\). Let us set \(D_{0}(F)=F\) and inductively define \(D_{n}(F)\) for \(n\geq 1\) as the \(\Bbbk\)-subalgebra of \(A\) generated by \(Sw(D_{n-1}(F))\). The subalgebra \(\mathbb{D}(F)=\bigcup_{n\geq 0}D_{n}(F)\) is called the _divisor subalgebra_ of \(A\) generated by \(F\). Any nonzero element \(f\in A\) is called _controlling_ if \(\mathbb{D}(\{f\})=A\); see [57, Examples] for controlling elements for various examples of noncommutative affine domains.
**Theorem 3.9**.: _[_57_, Theorem 0.9]_ _Let \(A\) be an affine \(\Bbbk\)-domain of finite GK-dimension. Suppose the unit element \(1\) is controlling. Then \(A\) is strongly \(\sigma\)-algebraically cancellative. As a consequence, \(A\) is strongly \(\delta\)-cancellative._
The second tool relies on a structural result of division algebras. Recall that a simple artinian ring \(S\) is called _stratiform_ over \(\Bbbk\)[53] if there is a chain of simple artinian rings
\[S=S_{n}\supseteq S_{n-1}\supseteq\cdots\supseteq S_{1}\supseteq S_{0}=\Bbbk\]
where, for every \(i\), either
1. \(S_{i+1}\) is finite over \(S_{i}\) on both sides; or
2. \(S_{i+1}\) is equal to the quotient ring of the Ore extension \(S_{i}[t_{i};\sigma_{i},\delta_{i}]\) for an automorphism \(\sigma_{i}\) of \(S_{i}\) and \(\sigma_{i}\)-derivation \(\delta_{i}\) of \(S_{i}\).
Such a chain of simple artinian rings is called a stratification of \(S\). The _stratiform length_ of \(S\) is the number of steps in the chain that are of type (ii). A crucial result proved in [53] is that the stratiform length is an invariant of \(S\). Moreover, a Goldie prime ring \(A\) is called _stratiform_ if its quotient division ring, denoted by \(Q(A)\), is stratiform. Many stratiform noetherian domains are proved to be skew cancellative.
**Theorem 3.10**.: _[_57_, Theorem 0.10]_ _Let \(A\) be a noetherian domain that is stratiform. Suppose the unit element \(1\) is controlling. Then \(A\) is strongly skew cancellative in the category of noetherian stratiform domains._
At the end of [57], motivated by the results [5, Lemma 4.3, Proposition 5.6], it is established that every LND-rigid algebra is \(\delta\)-cancellative. The following is the detailed statement.
**Theorem 3.11**.: _[_57_, Theorem 5.4]_ _Suppose that \(\Bbbk\) is a field of characteristic zero. Let \(A\) be an affine \(\Bbbk\)-domain of finite GK-dimension. Suppose that \(\operatorname{ML}(A)=A\). Then \(A\) is \(\delta\)-cancellative._
We end this section by stating the following question from [57].
**Question 3.12**.: _[_57_, Question 5.6]__. Suppose that \(A\) is an affine domain of finite GK-dimension over a base field \(\Bbbk\) of characteristic zero. Suppose either \(\operatorname{ML}(A)=A\) or \(A\) is strongly LND-rigid. Is \(A\) strongly \(\delta\)-cancellative?_
## 4. Zariski cancellation for Poisson algebras
Zariski cancellation problems can be asked for different types of associative and non-associative algebras. In this section, we focus on the Zariski cancellation problem for Poisson algebras.
The notion of the Poisson bracket, first introduced by Simeon Denis Poisson, arises naturally in Hamiltonian mechanics and differential geometry.
**Definition 4.1**.: A _Poisson algebra_ is a commutative \(\Bbbk\)-algebra \(A\) together with a bilinear form \(\{-,-\}:A\times A\to A\) that is both a Lie bracket and a biderivation.
Poisson algebras have recently been studied intensively by many researchers with topics related to noncommutative discriminant [11, 49], representation theory of PI Sklyanin algebras [58, 59], Poisson Dixmier-Moeglin equivalences [6, 35, 44], Poisson enveloping algebras [40, 42, 41, 43], invariant theory [22] and so on. Moreover, the following question was asked in [23] regarding the Zariski cancellation property.
**Question 4.2** (Zariski Cancellation Problem for Poisson Algebras).: When is a Poisson algebra \(A\) cancellative? That is, when does an isomorphism of Poisson algebras \(A[t]\cong B[t]\) for another Poisson algebra \(B\) imply an isomorphism \(A\cong B\) as Poisson algebras?
We first restrict our attention to graded Poisson algebras. The main result below can be viewed as a Poisson version of [7, Theorem 3.1]. Recall that for any Poisson algebra \(A\), the _Poisson center_ of \(A\) is denoted by \(\mathcal{Z}_{P}(A):=\{a\in A\,|\,\{a,-\}=0\}\).
**Theorem 4.3**.: _[_23_, Theorem 4.5]_ _Let \(A\) and \(B\) be two connected graded Poisson algebras finitely generated in degree one. Suppose either \(\mathcal{Z}_{P}(A)\) or \(\mathcal{Z}_{P}(B)\) is generated in degree at least \(2\). If \(A[t_{1},\ldots,t_{n}]\cong B[t_{1},\ldots,t_{n}]\) as ungraded Poisson algebras for some \(n\geq 1\), then \(A\cong B\) as connected graded Poisson algebras._
The following result shows a deep connection between the Poisson center and Poisson cancellation.
**Theorem 4.4**.: _Let \(A\) be a Poisson algebra._
1. _(__[_23_, Corollary 5.4]__) If_ \(A\) _is noetherian with artinian Poisson center, then_ \(A\) _is Poisson cancellative._
2. _(__[_23_, Theorem 5.5]__) If_ \(A\) _has trivial Poisson center, then_ \(A\) _is Poisson cancellative._
Theorem 4.4 (b) can be applied to show that Poisson integral domains of Krull dimension two which have nontrivial Poisson brackets are Poisson cancellative ([23, Corollary 5.6]). Here the non-triviality of the Poisson bracket plays an essential role since by [18, 20] there are commutative domains of Krull dimension two that are not cancellative.
It is important to point out that many theories developed in the (associative) noncommutative setting [8, 7] can be adapted to the Poisson setting. Hence the main work of [23] is to modify the ideas of the noncommutative discriminant [12, 13] and the Makar-Limanov invariant [46] into the setting of Poisson algebras.
**Definition 4.5**.: Let A be a Poisson algebra. A derivation \(\delta\) of \(A\) is called a _Poisson derivation_ if
\[\delta(\{a,b\})\ =\ \{\delta(a),b\}+\{a,\delta(b)\}\ \text{for all}\ a,b\in A.\]
The _Poisson Makar-Limanov invariant_ of \(A\) is defined to be
\[\mathrm{PML}(A)\ =\ \bigcap_{\delta\in\mathrm{PLND}(A)}\ker(\delta),\]
where \(\mathrm{PLND}(A)\) denotes the space of all locally nilpotent Poisson derivations of \(\mathrm{A}\).
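As an illustration of these notions (a standard computation, not taken from [23]), let \(A=\Bbbk[x,y]\) over a field of characteristic zero with the symplectic bracket \(\{f,g\}=\frac{\partial f}{\partial x}\frac{\partial g}{\partial y}-\frac{\partial f}{\partial y}\frac{\partial g}{\partial x}\), so \(\{x,y\}=1\). The derivation \(\partial/\partial x\) is locally nilpotent and Poisson, since
\[\frac{\partial}{\partial x}\{f,g\}=\Big\{\frac{\partial f}{\partial x},g\Big\}+\Big\{f,\frac{\partial g}{\partial x}\Big\},\]
and the same holds for \(\partial/\partial y\). Hence \(\mathrm{PML}(A)\subseteq\ker(\partial/\partial x)\cap\ker(\partial/\partial y)=\Bbbk\), so \(\mathrm{PML}(A)=\Bbbk\); in particular, the hypothesis of Theorem 4.6 below fails for this Poisson algebra.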
**Theorem 4.6**.: _[_23_, Theorem 6.12]_ _Assume \(\Bbbk\) is a field of characteristic zero. Let \(A\) be an affine Poisson domain over \(\Bbbk\) with finite Krull dimension. If \(A\) has no nontrivial locally nilpotent Poisson derivations, then \(A\) is Poisson cancellative._
In positive characteristic, we can prove a similar result by replacing Poisson derivations by higher Poisson derivations introduced in [34].
It is important to mention that for Poisson algebras in characteristic zero, their Poisson centers are usually not large enough for us to emulate the definition of discriminant for noncommutative algebras by simply replacing the algebraic center by the Poisson center. The following definition uses the idea of [38, §2] to introduce the notion of the Poisson discriminant from a representation-theoretic point of view.
**Definition 4.7**.: _[_23_, Definition 7.1]_ Let \(A\) be a Poisson algebra and let \(\mathcal{Z}_{P}=\mathcal{Z}_{P}(A)\) be its Poisson center. Let \(\mathcal{P}\) be a property defined for Poisson algebras that is invariant under isomorphism classes of Poisson algebras. We define the following terms for sets/ideals in \(A\).
1. \((\mathcal{P}\)-locus) \(L_{\mathcal{P}}(A):=\{\mathfrak{m}\in\mathrm{Maxspec}(\mathcal{Z}_{P}):A/ \mathfrak{m}A\text{ has property }\mathcal{P}\}\).
2. \((\mathcal{P}\)-discriminant set) \(D_{\mathcal{P}}(A):=\mathrm{Maxspec}(\mathcal{Z}_{P})\backslash L_{\mathcal{P}}(A)\).
3. \((\mathcal{P}\)-discriminant ideal) \(I_{\mathcal{P}}(A):=\bigcap_{\mathfrak{m}\in D_{\mathcal{P}}(A)}\mathfrak{m} \subset\mathcal{Z}_{P}\).
In the case that \(I_{\mathcal{P}}(A)\) is a principal ideal, generated by \(d\in\mathcal{Z}_{P}\), then \(d\) is called the \(\mathcal{P}\)_-discriminant_ of \(A\), denoted by \(d_{\mathcal{P}}(A)\). Observe that, if \(\mathcal{Z}_{P}\) is a domain, \(d_{\mathcal{P}}(A)\) is unique up to an element of \(\mathcal{Z}_{P}^{\times}\).
There is also a notion of effectiveness for Poisson discriminant. As in the non-commutative setting, effectiveness controls the locally nilpotent Poisson derivations and hence yields Poisson cancellation.
**Theorem 4.8**.: _[_23_, Theorem 7.16]_ _Let \(A\) be an affine Poisson domain with affine Poisson center. If the Poisson discriminant exists and is effective either in \(A\) or its Poisson center, then \(A\) is Poisson cancellative._
Next, we turn to the cancellation of quadratic Poisson polynomial algebras in three variables in comparison with their quantizations as AS-regular algebras of dimension three. By analyzing the Poisson center of an arbitrary connected graded Poisson domain, it was shown that Poisson polynomial algebras in three variables with Poisson bracket either being quadratic or derived from a Lie algebra are cancellative.
**Theorem 4.9**.: _Let \(\Bbbk\) be a base field that is algebraically closed of characteristic zero._
1. _(_[_24_, Theorem 3.11]_ _and_ _[_24_, Corollary 3.14]_) Let_ \(A=\Bbbk[x,y,z]\) _be a quadratic Poisson algebra with nontrivial Poisson bracket. Then the_ \(d\)_th Veronese Poisson subalgebra_ \(A^{(d)}\) _is Poisson cancellative for any_ \(d\geq 1\)_. In particular,_ \(A\) _is Poisson cancellative._
2. _(_[_24_, Theorem 3.15]_) Let_ \(\mathfrak{g}\) _be a non-abelian Lie algebra of dimension_ \(\leq 3\)_. Then the symmetric algebra_ \(S(\mathfrak{g})\) _on_ \(\mathfrak{g}\) _together with the Kostant-Kirillov bracket is Poisson cancellative._
Analogous to Ore extensions of rings, there is a Poisson version of an extension of an arbitrary Poisson algebra, called a Poisson-Ore extension, which uses a Poisson derivation \(\alpha\) and a Poisson \(\alpha\)-derivation \(\delta\) of the base Poisson algebra. In [24], the _skew Zariski cancellation problem_ was introduced for Poisson algebras in terms of any Poisson-Ore extension. That is, it asks whether or not an isomorphism between the Poisson-Ore extensions of two base Poisson algebras implies an isomorphism between the two base Poisson algebras. As in the noncommutative algebraic setting [5, 9, 57], similar invariants including the Poisson Makar-Limanov invariant, the divisor Poisson subalgebra, and the Poisson stratiform length can be defined for Poisson algebras, which play important roles in their skew cancellation problem.
**Theorem 4.10**.: _Let \(\Bbbk\) be a base field of characteristic zero, and \(A\) be a noetherian Poisson domain._
1. _(__[_24_, Theorem 4.4]__) If_ \(A\) _is either Poisson simple of finite Krull dimension with units in_ \(\Bbbk\) _or affine of Krull dimension one, then_ \(A\) _is Poisson_ \(\alpha\)_-cancellative._
2. _(__[_24_, Theorem 4.5]__) If the Poisson Makar-Limanov invariant of_ \(A\) _equals_ \(A\)_, then_ \(A\) _is Poisson_ \(\delta\)_-cancellative._
3. _(__[_24_, Theorem 4.8]__) If_ \(A\) _has finite Krull dimension and the_ \(1\)_-divisor Poisson subalgebra of_ \(A\) _equals_ \(A\)_, then_ \(A\) _is Poisson skew cancellative._
4. _(__[_24_, Theorem 4.15]__) If_ \(A\) _is Poisson stratiform and the_ \(1\)_-divisor Poisson subalgebra of_ \(A\) _equals_ \(A\)_, then_ \(A\) _is Poisson skew cancellative in the category of noetherian Poisson stratiform domains._
In the last part of this section, we will discuss some open questions regarding the Poisson cancellation problem (see [23, §8] and [24, §5] for more questions).
For any Poisson algebra \(A\), there is a notion of Poisson universal enveloping algebra \(U(A)\) (see [50]), whose representation category is Morita equivalent to the category of Poisson modules over \(A\).
**Question 4.11**.: _[_23_, Question 8.7]_ _Let \(A\) be any affine Poisson algebra, and \(U(A)\) be its Poisson universal enveloping algebra. What is the relationship between \(U(A)\) being cancellative and A being Poisson cancellative?_
In practice, many Poisson structures can be derived from the process of semiclassical limits of quantized coordinate rings, for instance see [25].
**Question 4.12**.: _[_23_, Question 8.8]_ _What is the relation between the cancellation property of a quantized coordinate ring and the Poisson cancellation property of its semiclassical limit?_
Recall that, in [38, Definition 0.2], a notion of _derived cancellation_ is further introduced for any associative algebra by replacing the respective module category by its derived category. We are interested in its Poisson version.
**Question 4.13**.: _[_24_, Question 5.4]_ _Can one define a suitable notion of_ _derived Poisson cancellation_? Under what conditions does a Poisson algebra satisfy this condition?_
|
2301.09728 | Injecting the BM25 Score as Text Improves BERT-Based Re-rankers | In this paper we propose a novel approach for combining first-stage lexical
retrieval models and Transformer-based re-rankers: we inject the relevance
score of the lexical model as a token in the middle of the input of the
cross-encoder re-ranker. It was shown in prior work that interpolation between
the relevance score of lexical and BERT-based re-rankers may not consistently
result in higher effectiveness. Our idea is motivated by the finding that BERT
models can capture numeric information. We compare several representations of
the BM25 score and inject them as text in the input of four different
cross-encoders. We additionally analyze the effect for different query types,
and investigate the effectiveness of our method for capturing exact matching
relevance. Evaluation on the MSMARCO Passage collection and the TREC DL
collections shows that the proposed method significantly improves over all
cross-encoder re-rankers as well as the common interpolation methods. We show
that the improvement is consistent for all query types. We also find an
improvement in exact matching capabilities over both BM25 and the
cross-encoders. Our findings indicate that cross-encoder re-rankers can
efficiently be improved without additional computational burden and extra steps
in the pipeline by explicitly adding the output of the first-stage ranker to
the model input, and this effect is robust for different models and query
types. | Arian Askari, Amin Abolghasemi, Gabriella Pasi, Wessel Kraaij, Suzan Verberne | 2023-01-23T21:41:25Z | http://arxiv.org/abs/2301.09728v1 | # Injecting the BM25 Score as Text Improves BERT-Based Re-rankers
###### Abstract
In this paper we propose a novel approach for combining first-stage lexical retrieval models and Transformer-based re-rankers: we inject the relevance score of the lexical model as a token in the middle of the input of the cross-encoder re-ranker. It was shown in prior work that interpolation between the relevance score of lexical and BERT-based re-rankers may not consistently result in higher effectiveness. Our idea is motivated by the finding that BERT models can capture numeric information. We compare several representations of the BM25 score and inject them as text in the input of four different cross-encoders. We additionally analyze the effect for different query types, and investigate the effectiveness of our method for capturing exact matching relevance. Evaluation on the MSMARCO Passage collection and the TREC DL collections shows that the proposed method significantly improves over all cross-encoder re-rankers as well as the common interpolation methods. We show that the improvement is consistent for all query types. We also find an improvement in exact matching capabilities over both BM25 and the cross-encoders. Our findings indicate that cross-encoder re-rankers can efficiently be improved without additional computational burden and extra steps in the pipeline by explicitly adding the output of the first-stage ranker to the model input, and this effect is robust for different models and query types.
Keywords: Injecting BM25 · Two-stage retrieval · Transformer-based rankers · BM25 · Combining lexical and neural rankers
## 1 Introduction
The commonly used ranking pipeline consists of a first-stage retriever, e.g. BM25 [47], that efficiently retrieves a set of documents from the full document collection, followed by one or more re-rankers [59, 40] that improve the initial ranking. Currently, the most effective re-rankers are BERT-based rankers with a cross-encoder architecture, concatenating the query and the candidate document in the input [40, 2, 25, 44]. In this paper, we refer to these re-rankers as \(\text{Cross-Encoder}_{\text{CAT}}\) (\(\text{CE}_{\text{CAT}}\)). In the common re-ranking set-up, BM25 [47] is
widely leveraged [7; 27; 20] for finding the top-\(k\) documents to be re-ranked; however, the relevance score produced by BM25 based on exact lexical matching is not explicitly taken into account in the second stage. Besides, although cross-encoder re-rankers substantially improve the retrieval effectiveness compared to BM25 alone [34], Rau et al. [43] show that BM25 is a more effective _exact lexical matcher_ than \(\mathrm{CE_{CAT}}\) rankers; in their exact-matching experiment they only use the words from the passage that also appear in the query as the input of the \(\mathrm{CE_{CAT}}\). This suggests that \(\mathrm{CE_{CAT}}\) re-rankers can be further improved by better exact word matching, as the presence of query words in the document is one of the strongest signals for relevance in ranking [48; 50]. Moreover, obtaining improvement in effectiveness by interpolating the scores (score fusion [58]) of BM25 and \(\mathrm{CE_{CAT}}\) is challenging: a linear combination of the two scores has been shown to decrease effectiveness on the MSMARCO Passage collection compared to only using the \(\mathrm{CE_{CAT}}\) re-ranker in the second stage retrieval [34].
To tackle this problem, in this work, we propose a method to enhance \(\mathrm{CE_{CAT}}\) re-rankers by directly injecting the BM25 score as a string into the input of the Transformer. Figure 2 shows our method for the injection of BM25 into the input of the CE re-ranker. We refer to our method as \(\mathrm{CE_{BM25CAT}}\). Our idea is inspired by the finding by Wallace et al. [54] that BERT models can capture numeracy. In this regard, we address the following research questions:
**RQ1:** What is the effectiveness of BM25 score injection in addition to the query and document text in the input of CE re-rankers?
To answer this question we set up two experiments on three datasets: MSMARCO, TREC DL'19 and '20. First, since the BM25 score has no defined range, we investigate the effect of different representations of the BM25 score by applying various normalization methods. We also analyze the effect of converting the normalized scores of BM25 to integers. Second, we evaluate the best representation of BM25 - based on our empirical study - on four cross-encoders: BERT-base, BERT-large [53], DistilBERT [49], and MiniLM [56], comparing \(\mathrm{CE_{BM25CAT}}\) to \(\mathrm{CE_{CAT}}\) across different Transformer models with a smaller and larger number of parameters. Next, we compare our proposed approach to common interpolation approaches:
**RQ2:** What is the effectiveness of \(\mathrm{CE_{BM25CAT}}\) compared to common approaches for combining the final relevance scores of \(\mathrm{CE_{CAT}}\) and BM25?
To analyze \(\mathrm{CE}_{\mathrm{BM25CAT}}\) and \(\mathrm{CE}_{\mathrm{CAT}}\) in terms of exact matching compared to BM25 we address the following question:
**RQ3:** How effective can \(\mathrm{CE}_{\mathrm{BM25CAT}}\) capture exact matching relevance compared to BM25 and \(\mathrm{CE}_{\mathrm{CAT}}\)?
Furthermore, to provide an explanation on the improvement of \(\mathrm{CE}_{\mathrm{BM25CAT}}\), we perform a qualitative analysis of a case where \(\mathrm{CE}_{\mathrm{CAT}}\) fails to identify the relevant document that is found using \(\mathrm{CE}_{\mathrm{BM25CAT}}\) with the help of the BM25 score.3
Footnote 3: In this work, we interchangeably use the words document and passage to refer to the unit that should be retrieved.
To the best of our knowledge, there is no prior work on the effectiveness of cross-encoder re-rankers by injecting a retrieval model's score into their input. Our main contributions in this work are four-fold:
1. We provide a strategy for efficiently utilizing BM25 in cross-encoder re-rankers, which yields statistically significant improvements on all official metrics and is verified by thorough experiments and analysis.
2. We find that our method is more effective than the approaches which linearly interpolate the scores of BM25 and \(\mathrm{CE}_{\mathrm{CAT}}\).
3. We analyze the exact matching effectiveness of \(\mathrm{CE}_{\mathrm{CAT}}\) and \(\mathrm{CE}_{\mathrm{BM25CAT}}\) in comparison to BM25. We show that \(\mathrm{CE}_{\mathrm{BM25CAT}}\) is a more powerful exact matcher than BM25 while \(\mathrm{CE}_{\mathrm{CAT}}\) is less effective than BM25.
4. We analyze the effectiveness of \(\mathrm{CE}_{\mathrm{CAT}}\) and \(\mathrm{CE}_{\mathrm{BM25CAT}}\) on different query types. We show that \(\mathrm{CE}_{\mathrm{BM25CAT}}\) consistently outperforms \(\mathrm{CE}_{\mathrm{CAT}}\) over all type of queries.
After a discussion of related work in Section 2, we describe the retrieval models employed in Section 3 and the specifics of our experiments and methods in Section 4. The results are examined and the research questions are addressed in Section 5. Finally, the conclusion is presented in Section 6.
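To make the proposed input format concrete, the following is a minimal sketch (our illustration, not the authors' released code) of how a first-stage BM25 score could be normalized, rendered as text, and injected between the query and the passage of a cross-encoder input. The model name, the `inject_bm25` helper, the rounding to an integer in \([0,100]\), and the example strings and score values are all our assumptions; the exact template and normalization used in the paper may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "distilbert-base-uncased"  # any cross-encoder backbone would do here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)

def inject_bm25(query: str, passage: str, bm25_score: float,
                bm25_min: float, bm25_max: float) -> dict:
    """Min-max normalize the BM25 score, cast it to an integer, and place it as
    text between the query and the passage (one plausible realisation of the
    injected input; not the definitive template)."""
    norm = (bm25_score - bm25_min) / max(bm25_max - bm25_min, 1e-9)
    score_token = str(int(round(norm * 100)))
    first_segment = f"{query} {tokenizer.sep_token} {score_token}"
    return tokenizer(first_segment, passage, truncation=True,
                     max_length=512, return_tensors="pt")

# Illustrative values only.
enc = inject_bm25("what is foveal vision", "Foveal vision is the sharp central vision ...",
                  bm25_score=21.7, bm25_min=4.2, bm25_max=38.5)
with torch.no_grad():
    relevance = model(**enc).logits.squeeze(-1)  # scalar re-ranking score
```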
## 2 Related work
Modifying the input of re-rankers.Boualili et al. [12, 13] propose a method for highlighting exact matching signals by marking the start and the end of each occurrence of the query terms by adding markers to the input. In addition, they modify original passages and expand each passage with a set of generated queries using Doc2query [41] to overcome the vocabulary mismatch problem. This strategy is different from ours in two aspects: (1) the type of information added to the input: they add four tokens as markers for each occurrence of query terms, adding a burden to the limited input length of 512 tokens for query and document together, while we only add the BM25 score. (2) The need for data augmentation: they need to train a Doc2query model to provide the exact matching signal for improving the BERT re-ranker while our strategy does not need any extra overhead in terms of data augmentation. A few recent, but less related examples are Al-Hajj et al. [4], who experiment with the use of different
supervised signals into the input of the cross-encoder to emphasize target words in context and Li et al. [30], who insert boundary markers into the input between contiguous words for Chinese named entity recognition.
**Numerical information in Transformer models.** Thawani et al. [52] provide an extensive overview of numeracy in NLP models up to 2021. Wallace et al. [54] analyze the ability of BERT models to work with numbers and come to the conclusion that the models capture numeracy and are able to do numerical reasoning; however, the models appear to struggle with interpreting floats. Moreover, Zhang et al. [63] show that BERT models capture a significant amount of information about numerical scale, except for general common-sense reasoning. There are various studies that are inspired by the fact that Transformer models can correctly process numbers [11; 38; 26; 15; 22; 21]. Gu et al. [23] incorporate text, categorical and numerical data as different modalities with Transformers using a combining module across different classification tasks. They discover that adding tabular features increases effectiveness, while using only text is insufficient and results in the worst performance.
**Methods for combining rankers.** Linearly interpolating different rankers' scores has been studied extensively in the literature [34; 58; 10; 9; 8]. In this paper, we investigate multiple linear and non-linear interpolation ensemble methods and analyze their performance for combining BM25 and \(\text{CE}_{\text{CAT}}\) scores in comparison to \(\text{CE}_{\text{BM25CAT}}\). For the sake of a fair analysis, we do not compare \(\text{CE}_{\text{BM25CAT}}\) with a Learning-to-rank approach that is trained on 87 features by [65]. The use of ensemble methods brings additional overhead in terms of efficiency because it adds one extra step to the re-ranking pipeline. It is worth noting that in this paper we concentrate on analyzing the improvement obtained by combining the first-stage retriever and a BERT-based re-ranker: BM25 and \(\text{CE}_{\text{CAT}}\), respectively. We are aware that combining the scores of BM25 and Dense Retrievers, which are both first-stage retrievers, has also shown improvements [55; 1; 6], but this is outside the scope of our study. In particular, CLEAR [20] proposes an approach to train dense retrievers to encode semantics that BM25 fails to capture for first-stage retrieval. In this study, however, our aim is to improve re-ranking in the second stage of a two-stage retrieval setting.
## 3 Methods
### First stage ranker: BM25
Lexical retrievers estimate the relevance of a document to a query based on word overlap [46]. Many lexical methods, including vector space models, Okapi BM25, and query likelihood, have been developed in previous decades. We use BM25 because of its popularity as first-stage ranker in current systems. Based on the statistics of the words that overlap between the query and the document, BM25 calculates a score for the pair:
\[s_{lex}(q,d)=BM25(q,d)=\sum_{t\in q\cap d}rsj_{t}\cdot\frac{tf_{t,d}}{tf_{t,d}+k_{1}\left((1-b)+b\frac{|d|}{l}\right)} \tag{1}\]
where \(t\) is a term, \(tf_{t,d}\) is the frequency of \(t\) in document \(d\), \(rsj_{t}\) is the Robertson-Spärck Jones weight [47] of \(t\), and \(l\) is the average document length. \(k_{1}\) and \(b\) are parameters [33, 32].
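As a minimal illustration of Eq. (1), the sketch below scores a query-document pair from precomputed term statistics; the \(rsj\) weights, the average length and the \(k_{1},b\) values are placeholders to be replaced by collection statistics and the tuned Anserini parameters mentioned in Section 4.

```python
from collections import Counter

def bm25_score(query_terms, doc_terms, rsj, avg_len, k1=0.9, b=0.4):
    """Sketch of Eq. (1): sum over terms occurring in both query and document.

    `rsj` maps a term to its Robertson-Sparck Jones weight, `avg_len` is the
    average document length; k1 and b are illustrative defaults only.
    """
    tf = Counter(doc_terms)
    score = 0.0
    for t in set(query_terms) & set(doc_terms):
        score += rsj[t] * tf[t] / (tf[t] + k1 * ((1 - b) + b * len(doc_terms) / avg_len))
    return score
```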
### \(\text{CE}_{\text{CAT}}\): cross-encoder re-rankers without BM25 injection
Concatenating query and passage input sequences is the typical method for using cross-encoder (e.g., BERT) architectures with pre-trained Transformer models in a re-ranking setup [40, 36, 60, 25]. This basic design is referred to as \(\text{CE}_{\text{CAT}}\) and shown in Figure 1. The query \(q_{1:m}\) and passage \(p_{1:n}\) sequences are concatenated with the \([SEP]\) token, and the \([CLS]\) token representation computed by CE is scored with a single linear layer \(W_{s}\) in the \(\text{CE}_{\text{CAT}}\) ranking model:
\[CE_{CAT}(q_{1:m},p_{1:n})=CE([CLS]\,q\,[SEP]\,p\,[SEP])*W_{s} \tag{2}\]
We use \(\text{CE}_{\text{CAT}}\) as our baseline re-ranker architecture. We evaluate different cross-encoder models in our experiments and all of them follow the above design.
### \(\text{CE}_{\text{BM25CAT}}\): cross-encoder re-rankers with BM25 injection
To study the effectiveness of injecting the BM25 score into the input, we modify the input of the basic input format as follows and call it \(\text{CE}_{\text{BM25CAT}}\):
\[CE_{BM25CAT}(q_{1:m},p_{1:n})=CE([CLS]\,q\,[SEP]\,BM25\,[SEP]\,p\,[SEP])*W_{s} \tag{3}\]
where BM25 represents the relevance score produced by BM25 between query and passage.
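The following sketch shows one way to realize the inputs of Eq. (2) and Eq. (3) with a BERT-style tokenizer and a sequence-classification head; the checkpoint name, the toy texts and the single-logit head are illustrative assumptions rather than the exact setup of Section 4.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")        # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

query = "what is the capital of france"                         # toy example, not from MSMARCO
passage = "Paris is the capital and most populous city of France."
bm25_token = "22"        # normalized BM25 score rendered as a string (see below)

# CE_CAT input:      [CLS] query [SEP] passage [SEP]
enc_cat = tok(query, passage, truncation=True, return_tensors="pt")
# CE_BM25CAT input:  [CLS] query [SEP] bm25 [SEP] passage [SEP]
# The literal "[SEP]" inside the second segment is mapped to the special token by the tokenizer.
enc_bm25cat = tok(query, f"{bm25_token} [SEP] {passage}", truncation=True, return_tensors="pt")

with torch.no_grad():
    score = model(**enc_bm25cat).logits.squeeze(-1)  # linear layer W_s applied to the [CLS] representation
```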
We study different representations of BM25 to find the optimal approach for injecting BM25 into the cross-encoders. The reasons are: (1) BM25 scores do not have an upper bound and should be normalized to obtain an interpretable score for a given query and passage; (2) BERT-based models can process integers better than floating point numbers [54], so we analyze whether converting the normalized score to an integer is more effective than injecting the floating point score. For normalizing BM25 scores, we compare three different normalization methods: Min-Max, Standardization (Z-score), and Sum:
\[Min\text{-}Max(s_{BM25})=\frac{s_{BM25}-s_{min}}{s_{max}-s_{min}} \tag{4}\]
\[Standard(s_{BM25})=\frac{s_{BM25}-\mu(S)}{\sigma(S)} \tag{5}\]
\[Sum(s_{BM25})=\frac{s_{BM25}}{sum(S)} \tag{6}\]
where \(s_{BM25}\) is the original score, and \(s_{max}\) and \(s_{min}\) are the maximum and minimum scores, respectively, in the ranked list. \(Sum(S)\), \(\mu(S)\), and \(\sigma(S)\) refer to the sum, average and standard deviation over the scores of all passages retrieved
for a query. The anticipated effect of the Sum normalizer is that the sum of the scores of all passages in the ranked list will be 1; thus, if the top-\(n\) passages receive much higher scores than the rest, their normalized scores will differ more strongly from the scores of the remaining passages in the ranked list; this distance could give a good signal to \(\mathrm{CE}_{\mathrm{BM25CAT}}\). We experiment with Min-Max and Standardization in a local and a global setting. In the local setting, we take the minimum and maximum (for Min-Max) and the mean and standard deviation (for Standard) from the ranked list of scores per query. In the global setting, we use \(\{0,50,42,6\}\) as {minimum, maximum, mean, standard deviation}, as these have been empirically suggested in prior work as default values to be used across different queries to globally normalize BM25 scores [37]. In our data, the {minimum, maximum, mean, standard deviation} values are \(\{0,98,7,5\}\) across all queries. Because of the differences between the recommended defaults and the statistics of our collections, we explore other global values for Min-Max, using \(25,50,75,100\) as maximum and 0 as minimum. However, we obtained the best results using the default values of [37]. To convert the float numbers to integers we multiply the normalized score by 100 and discard the decimals. Finally, we store the number as a string.
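A small sketch of the three normalizers and the integer conversion described above; the global constants are the defaults of [37] quoted in the text.

```python
def minmax(s, s_min=0.0, s_max=50.0):
    # Global setting uses fixed constants; the local setting instead takes
    # the min/max over the ranked list of the current query.
    return (s - s_min) / (s_max - s_min)

def standard(s, mu=42.0, sigma=6.0):
    return (s - mu) / sigma

def sum_norm(s, scores_of_query):
    return s / sum(scores_of_query)

def to_integer_string(normalized):
    # Multiply by 100, discard decimals, store as a string for the model input.
    return str(int(normalized * 100))

# Example: to_integer_string(minmax(11.3)) == '22'
```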
### Linear interpolation ensembles of BM25 and \(\mathrm{CE}_{\mathrm{CAT}}\)
We compare our approach to common ensemble methods [34, 64] for interpolating BM25 and BERT re-rankers. We combine the scores using the following methods: (1) Sum: compute the sum of the BM25 and \(\mathrm{CE}_{\mathrm{CAT}}\) scores, (2) Max: select the maximum of the BM25 and \(\mathrm{CE}_{\mathrm{CAT}}\) scores, and (3) Weighted-Sum:
\[s_{i}=\alpha\,.\,s_{BM25}+(1-\alpha)\,.\,s_{CE_{CAT}} \tag{7}\]
where \(s_{i}\) is the weighted sum produced by the interpolation, \(s_{BM25}\) is the normalized BM25 score, \(s_{CE_{CAT}}\) is the \(\mathrm{CE}_{\mathrm{CAT}}\) score, and \(\alpha\in[0,1]\) is a weight that indicates the relative importance. Since the \(\mathrm{CE}_{\mathrm{CAT}}\) score lies in \([0,1]\), we also normalize the BM25 score using Min-Max normalization. Furthermore, we train ensemble models that take \(s_{BM25}\) and \(s_{CE_{CAT}}\) as features. We experiment with four different estimators for this purpose: SVM with a linear kernel, SVM with an RBF kernel, Naive Bayes, and a Multi Layer Perceptron (MLP) as a non-linear method, and report the best classifier performance in Section 5.1.
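For completeness, a sketch of the score-level baselines of this subsection; \(\alpha=0.1\) is the value found on the validation set in Section 5.1, and the BM25 scores are assumed to be Min-Max normalized so that both inputs lie in \([0,1]\).

```python
import numpy as np

def interpolate(bm25_scores, ce_scores, method="weighted", alpha=0.1):
    bm25, ce = np.asarray(bm25_scores), np.asarray(ce_scores)
    if method == "sum":            # (1) Sum
        return bm25 + ce
    if method == "max":            # (2) Max
        return np.maximum(bm25, ce)
    if method == "weighted":       # (3) Weighted-Sum, Eq. (7)
        return alpha * bm25 + (1 - alpha) * ce
    raise ValueError(f"unknown method: {method}")
```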
## 4 Experimental design
**Dataset and metrics.** We conduct our experiments on the MSMARCO-passage collection [39] and the two TREC Deep Learning tracks (TREC-DL'19 and TREC-DL'20) [19, 17]. The MSMARCO-passage dataset contains about 8.8 million passages (average length: 73.1 words) and about 1 million natural language queries (average length: 7.5 words) and has been extensively used to train deep language models for ranking because of the large number of queries. Following prior work on MSMARCO [28, 34, 35, 68, 67], we use the dev set (\(\sim 7k\) queries) for
our empirical evaluation. \(MAP@1000\) and \(nDCG@10\) are calculated in addition to the official evaluation metric \(MRR@10\). The passage corpus of MSMARCO is shared with TREC DL'19 and DL'20 collections with 43 and 54 queries respectively. We evaluate our experiments on these collections using \(nDCG@10\) and \(MAP@1000\), as is standard practice in TREC DL [17, 19] to make our results comparable to previously published and upcoming research. We cap the query length at 30 tokens and the passage length at 200 tokens following prior work [25].
Footnote 3: [https://github.com/arian-askari/injecting_bm25_score_bert](https://github.com/arian-askari/injecting_bm25_score_bert)
**Training configuration and model parameters.** We use the Hugging Face library [57], the Cross-encoder package of the Sentence-transformers library [45], and PyTorch [42] for cross-encoder re-ranking training and inference. For injecting the BM25 score as text, we pass the BM25 score in string format into the BERT tokenizer in a similar way to passing query and document. Please note that the integer numbers are already included in the BERT tokenizer's vocabulary, allowing for appropriate tokenization. Following prior work [25] we use the Adam [29] optimizer with a learning rate of \(7*10^{-6}\) for all cross-encoder layers, regardless of the number of layers trained. To train cross-encoder re-rankers for each TREC DL collection, we use the other TREC DL query set as the validation set, and we select both TREC DL ('19 and '20) query sets as the validation set to train CEs for the MSMARCO Passage collection. We employ early stopping, based on the nDCG@10 value of the validation set. We use a training batch size of 32. For all cross-encoder re-rankers, we use Cross-Entropy loss [66]. For the lexical retrieval with BM25 we employ the tuned parameters from the Anserini documentation [33, 32]. 4
Footnote 4: [https://huggingface.co/microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased)
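A condensed sketch of the optimization setup described above (Adam with learning rate \(7\times 10^{-6}\), batches of 32, cross-entropy loss); data loading, validation-based early stopping and the exact head configuration are omitted, and the two-label head is an assumption made so that cross-entropy over (non-relevant, relevant) applies.

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.Adam(model.parameters(), lr=7e-6)
loss_fn = torch.nn.CrossEntropyLoss()

def train_step(batch):
    # `batch` is assumed to come from a DataLoader of size 32 over pairs already
    # tokenized in the CE_BM25CAT format of Eq. (3), with binary relevance labels.
    model.train()
    optimizer.zero_grad()
    out = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
    loss = loss_fn(out.logits, batch["labels"])
    loss.backward()
    optimizer.step()
    return loss.item()
```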
## 5 Results
### Main results: addressing our research questions
**Choice of BM25 score representation.** As introduced in Section 3.3, we compare different representations of the BM25 score in Table 1 for injection into \(\text{CE}_{\text{BM25CAT}}\). We chose MiniLM [56] for this study as it has shown competitive results in comparison to BERT-based models while being 3 times smaller and 6 times faster. 5 Our first interesting observation is that injecting the original BM25 float score, rounded to 2 decimal places (row \(b\)), into the input seems to slightly improve the effectiveness of the re-ranker. We assume this is because the average query and passage length is relatively small in the MSMARCO Passage collection, which prevents very high BM25 scores, which would be poorly interpretable for BERT. Second, we find that the normalized BM25 score with Min-Max in the global normalization setting, converted to integer (row \(f\)), is the most effective representation for injecting BM25, with a statistically significant improvement.6
Footnote 6: Although the evaluation metrics are not in an interval scale, Craswell et al. [18] show that they are mostly reliable in practice on MSMARCO for statistical testing.
The global normalization setting gives better results for both Min-Max (rows \(e,f\)) and Standardization (rows \(i,j\)) than local normalization (rows \(c,d\) and \(g,h\)).7 The reason is probably that in the global setting a candidate document obtains a high normalized score (close to 1 in the floating point representation) only if its original score is close to the default maximum (for Min-Max normalization), so the normalized score is more interpretable across different queries. In the local setting, on the other hand, the passage ranked at position 1 always receives a normalized score of 1 with Min-Max, even if its original score is not high and does not differ much from that of the last passage in the ranked list.
Footnote 7: The range of normalized integer scores using the best normalizer (row \(f\)) is from 0 to 196, as the maximum BM25 score in the collection is 98.
Moreover, converting the normalized float score to integers gives better results for both Min-Max (rows \(d,f\)) and Standardization (rows \(h,j\)) than the float representation (rows \(c,e\) and \(g,i\)). We find that Min-Max normalization is a better representation for injecting BM25 than Standardization, which could be due to the fact that with Min-Max the normalized score cannot be negative and, as a result, the injected score is easier for CE\({}_{\text{BM25CAT}}\) to interpret. We find that the Sum normalizer (rows \(k\) and \(l\)) decreases effectiveness. Apparently, our expectation that Sum would help distinguish between the top-\(n\) passages and the remaining passages in the ranked list (see Section 3.3) is not borne out.
\begin{table}
\begin{tabular}{l|c|c|c c c} \hline \hline \multicolumn{1}{c|}{**Normalization**} & \multicolumn{1}{c|}{**Local/Global**} & \multicolumn{1}{c|}{**Float/Integer**} & \multicolumn{3}{c}{**MSMARCO DEV**} \\ & & & nDCG@10 & MAP & MRR@10 \\ \hline \multicolumn{3}{l|}{\((a)\) MiniLM\({}_{\text{CAT}}\) (without injecting BM25 score)} &.419 &.363 &.360 \\ \hline \((b)\) Original Score & — & — &.420 &.364 &.362 \\ \hline \((c)\) Min-Max & Local & Float &.411 &.359 &.354 \\ \((d)\) Min-Max & Local & Integer &.414 &.361 &.355 \\ \((e)\) Min-Max & Global & Float &.422 &.365 &.363 \\ \((f)\) Min-Max & Global & Integer & **.424\(\dagger\)** & **.368\(\dagger\)** & **.367\(\dagger\)** \\ \hline \((g)\) Standard & Local & Float &.407 &.355 &.352 \\ \((h)\) Standard & Local & Integer &.410 &.358 &.354 \\ \((i)\) Standard & Global & Float &.420 &.363 &.361 \\ \((j)\) Standard & Global & Integer &.421 &.365 &.363 \\ \hline \((k)\) Sum & — & Float &.402 &.349 &.338 \\ \((l)\) Sum & — & Integer &.405 &.350 &.342 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Effectiveness results on MSMARCO DEV. Rows \(b\)-\(l\) refer to the MiniLM\({}_{\text{BM25CAT}}\) re-ranker using different representations of the BM25 score as text. Significance is shown with \(\dagger\) for the best result (row \(f\)) compared to MiniLM\({}_{\text{CAT}}\) (row \(a\)). Statistical significance was measured with a paired t-test (\(p<0.05\)) with Bonferroni correction for multiple testing.
**Impact of BM25 injection for various cross-encoders (RQ1).**
Table 2 shows that injecting the BM25 score - using the best representation, i.e., Min-Max in the global normalization setting converted to integer - into all four cross-encoders improves their effectiveness on all metrics compared to using them without injecting BM25. This shows that injecting the BM25 score into the input, a small modification to the current re-ranking pipeline, improves re-ranking effectiveness. This comes without any additional computational burden, as we train \(\text{CE}_{\text{CAT}}\) and \(\text{CE}_{\text{BM25CAT}}\) in a completely equal setting in terms of number of epochs, batch size, etc. We obtain the highest result with BERT-Large\({}_{\text{BM25CAT}}\) among cross-encoders with BM25 injection, which could be due to the higher number of parameters of the model. We find that the results of MiniLM are similar to those of BERT-Base on MSMARCO-DEV, while the former is more efficient.
**Comparing BM25 Injection with Ensemble Methods (RQ2).**
Table 3 shows that while injecting BM25 leads to improvement, regular ensemble methods and the Naive Bayes classifier fail to do so; combining the scores of BM25 and BERT\({}_{\text{CAT}}\) in a linear or non-linear (MLP) interpolation ensemble setting even leads to lower effectiveness than using the cross-encoder as the sole re-ranker. Therefore, our strategy is a better solution than linear interpolation. We only report results for Naive Bayes - having the BM25 and BERT\({}_{\text{CAT}}\) scores as features - as
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**TREC DL 20**} & \multicolumn{2}{c|}{**TREC DL 19**} & \multicolumn{2}{c}{**MSMARCO DEV**} \\ & nDCG@10 & MAP & nDCG@10 & MAP & nDCG@10 & MAP & MRR@10 \\ \hline BM25 &.480 &.286 &.506 &.377 &.234 &.195 &.187 \\ \hline
**Re-rankers** & & & & & & \\ BERT-Base\({}_{\text{CAT}}\) &.689 &.447 &.713 &.441 &.399 &.346 &.342 \\ BERT-Base\({}_{\text{BM25CAT}}\) &.705\(\dagger\) &.475\(\dagger\) &.723\(\dagger\) &.453\(\dagger\) &.422\(\dagger\) &.367\(\dagger\) &.364\(\dagger\) \\ \hline BERT-Large\({}_{\text{CAT}}\) &.695 &.464 &.714 &.467 &.401 &.344 &.360 \\ BERT-Large\({}_{\text{BM25CAT}}\) & **.728\(\dagger\)** & **.482\(\dagger\)** & **.731\(\dagger\)** & **.477\(\dagger\)** & **.424\(\dagger\)** &.367\(\dagger\) & **.369\(\dagger\)** \\ \hline DistilBERT\({}_{\text{CAT}}\) &.670 &.442 &.679 &.440 &.383 &.310 &.325 \\ DistilBERT\({}_{\text{BM25CAT}}\) &.682\(\dagger\) &.456\(\dagger\) &.699\(\dagger\) &.451\(\dagger\) &.390\(\dagger\) &.323\(\dagger\) &.339\(\dagger\) \\ \hline MiniLM\({}_{\text{CAT}}\) &.681 &.448 &.704 &.452 &.419 &.363 &.360 \\ MiniLM\({}_{\text{BM25CAT}}\) &.710\(\dagger\) &.473\(\dagger\) &.711\(\dagger\) &.463\(\dagger\) & **.424\(\dagger\)** & **.368\(\dagger\)** &.367\(\dagger\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Effectiveness results. Fine-tuned cross-encoders are used for re-ranking over BM25 first stage retrieval with a re-ranking depth of 1000. \(\dagger\) indicates a statistically significant improvement of a cross-encoder with BM25 score injection as text into the input (Cross-encoder\({}_{\text{BM25CAT}}\)) over the same cross-encoder without BM25 score injection (Cross-encoder\({}_{\text{CAT}}\)). Statistical significance was measured with a paired t-test (\(p<0.05\)) with Bonferroni correction for multiple testing.
it had the highest effectiveness of the four estimators. Still, the effectiveness is much lower than that of \(\mathrm{BERT}_{\mathrm{BM25CAT}}\) and also lower than a simple Weighted-Sum. Weighted-Sum (tuned) in Table 3 is tuned on the validation set, for which \(\alpha=0.1\) was found to be optimal. We analyze the effect of different \(\alpha\) values in a weighted linear interpolation (Weighted-Sum) to draw a more complete picture of the impact of combining scores on the DEV set. Figure 3 shows that increasing the weight of BM25 decreases effectiveness. The figure also shows that the \(\alpha\) tuned on the validation set in Table 3 is not the optimal value for the DEV set. The highest effectiveness at \(\alpha=0.0\) in Figure 3 confirms that we should not combine the scores with current interpolation methods; using only the scores of BERT-Base\({}_{\mathrm{CAT}}\) is better, at least for the MSMARCO passage collection.
**Exact matching effectiveness (RQ3).** To address RQ3, we evaluate the models in an exact-matching setting: every word in the passage that does not occur in the query is replaced with the \([MASK]\) token, leaving the model only with a skeleton of the original passage and forcing it to rely on the exact word matches between query and passage [43]. We do not train models on this input but use our models that were fine-tuned on the original data. Table 4 shows that BERT-Base\({}_{\text{BM25CAT}}\) performs better than both BM25 and BERT-Base\({}_{\text{CAT}}\) in the exact matching setting on all metrics. Moreover, we found that the percentage of relevant passages ranked in the top-10 that are common between BM25 and BERT\({}_{\text{BM25CAT}}\) is 40%, which is higher than the percentage shared between BM25 and BERT\({}_{\text{CAT}}\) (37%). Therefore, the higher effectiveness of BERT\({}_{\text{BM25CAT}}\) in the exact matching setting could be at least partly because it mimics BM25 more than BERT\({}_{\text{CAT}}\) does. In comparison, this percentage is 57% between BERT\({}_{\text{BM25CAT}}\) and BERT\({}_{\text{CAT}}\).
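A minimal sketch of how the exact-matching input can be constructed; whitespace tokenization and lowercase matching are simplifying assumptions (punctuation handling and subword effects are ignored).

```python
def keep_only_query_words(query, passage, mask_token="[MASK]"):
    """Replace every passage word that does not occur in the query with the mask token."""
    query_words = set(query.lower().split())
    return " ".join(w if w.lower() in query_words else mask_token for w in passage.split())

# keep_only_query_words("capital of france", "Paris is the capital of France")
# -> '[MASK] [MASK] [MASK] capital of France'
```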
### Analysis of the results
#### 5.2.1 Query types.
In order to analyze the effectiveness of BERT-base\({}_{\text{CAT}}\) and BERT-base\({}_{\text{BM25CAT}}\) across different types of questions, we classify questions based on the lexical answer type. We use the rule-based answer type classifier8 inspired by [31] to extract answer types. We classify MSMARCO queries into 6 answer types: abbreviation, location, description, human, numerical and entity. 4105 queries have a valid answer type and at least one relevant passage in the top-1000. We perform our analysis in two different settings: normal (full-text) and exact-matching (keeping only query words and replacing non-query words with \([MASK]\)). The average \(MRR@10\) per query type is shown in Table 5. The table shows that BERT\({}_{\text{BM25CAT}}\) is more effective than BERT\({}_{\text{CAT}}\) consistently on all types of queries.
Footnote 8: [https://github.com/superscriptjs/qtypes](https://github.com/superscriptjs/qtypes)
**Qualitative analysis.** We show a qualitative analysis of one particular case in Figure 4 to analyze more in-depth what the effect of BM25 injection is and why it works. In the top row, while BERT\({}_{\text{CAT}}\) mistakenly ranked the relevant passage at position 104, BM25 ranked that passage at position 3 and BERT\({}_{\text{BM25CAT}}\) - apparently helped by BM25 - ranked that relevant passage at position 1. In the
\begin{table}
\begin{tabular}{l|l|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Input} & \multicolumn{4}{c}{**MSMARCO DEV**} \\ & & nDCG@10 & MAP & MRR@10 \\ \hline BM25 & Full text &.234 &.195 &.187 \\ BERT-Base\({}_{\text{CAT}}\) & Only query words &.218 (\(\downarrow\)1.6) &.186 (\(\downarrow\)0.9) &.180 (\(\downarrow\)0.7) \\ BERT-Base\({}_{\text{BM25CAT}}\) & Only query words & **.243** (\(\uparrow\).9) & **.209** (\(\uparrow\)1.4) & **.202** (\(\uparrow\)1.5) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparing exact matching effectiveness of BERT-Base\({}_{\text{BM25CAT}}\) and BERT-Base\({}_{\text{CAT}}\) by keeping only the query words in each passage for re-ranking. The increase and decrease of effectiveness compared to BM25 is indicated with \(\uparrow\) and \(\downarrow\).
bottom row, BERT\({}_{\rm CAT}\) mistakenly ranked the irrelevant passage at position 1, while, informed by the low BM25 score, BERT\({}_{\rm BM25CAT}\) ranked it much lower, at position 69. In order to interpret the importance of the injected BM25 score in the input of CE\({}_{\rm BM25CAT}\) and show its contribution to the matching score in comparison to other words in the query and passage, we use Integrated Gradients (IG) [51], which has been proven to be a stable and reliable interpretation method in many different applications including Information Retrieval [62; 16; 61]. 9 On both rows of Figure 4, we see that the BM25 score ('22' in the top row and '11' in the bottom row) is a highly attributed term in comparison to other terms. This shows that injecting the BM25 score helps BERT\({}_{\rm BM25CAT}\) identify relevant or non-relevant passages better than BERT\({}_{\rm CAT}\).
Footnote 9: We refer readers to [51] for a detailed explanation.
As a more general analysis, we randomly sampled 100 queries from MSMARCO-DEV. For each query, we took the top-1000 passages retrieved by BM25, fed all pairs of the query and its corresponding retrieved passages (\(100k\) pairs in total) into BERT\({}_{\rm BM25CAT}\), and computed the attribution scores over the input at the word
Figure 4: Example query and two passages in the input of BERT\({}_{\rm BM25CAT}\). The color of each word indicates the word-level attribution value according to Integrated Gradient (IG) [51], where red is positive, blue is negative, and white is neutral. We use the brightness of different colors to indicate the values of these gradients.
\begin{table}
\begin{tabular}{l|l|c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Input} & \multirow{2}{*}{ABBR} & \multirow{2}{*}{LOC} & \multirow{2}{*}{DESC} & \multirow{2}{*}{HUM} & \multirow{2}{*}{NUM} & \multirow{2}{*}{ENTY} \\ \hline \# queries & & 9 & 493 & 1887 & 455 & 933 & 328 \\ BERT-BaseCAT & Full text &.574 &.477 &.397 &.435 &.361 &.399 \\ BERT-BaseBM25CAT & Full text & **.592** & **.503** & **.428** & **.457** & **.405** & **.411** \\ \hline BM25 & Only query words &.184 &.256 &.215 &.238 &.200 & **.221** \\ BERT-BaseCAT & Only query words &.404 &.204 &.224 &.240 &.177 &.200 \\ BERT-BaseBM25CAT & Only query words & **.438** & **.278** & **.245** & **.258** & **.215** &.216 \\ \hline \hline \end{tabular}
\end{table}
Table 5: MRR@10 on MSMARCO-DEV per query type for comparing BERT-Base\({}_{\rm BM25CAT}\) and BERT-Base\({}_{\rm CAT}\) on different query types in full-text and exact-matching (only keeping query words) settings.
level. We ranked the tokens by importance using the absolute value of their attribution scores and found that the mode of the rank of the BM25 token over all samples is 3. This shows that \(\text{BERT}_{\text{BM25CAT}}\) assigns high attribution to the BM25 token when ranking.
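A sketch of how such word-level attributions can be computed; the paper only specifies IG [51], so the choice of the Captum library, the checkpoint and the toy input below are our illustrative assumptions rather than the authors' exact setup.

```python
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # stands in for the fine-tuned re-ranker
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
model.eval()

enc = tok("example query", "22 [SEP] example passage text", return_tensors="pt")

def forward_score(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).logits[:, 0]

lig = LayerIntegratedGradients(forward_score, model.bert.embeddings)
attr = lig.attribute(inputs=enc["input_ids"],
                     baselines=torch.full_like(enc["input_ids"], tok.pad_token_id),
                     additional_forward_args=(enc["attention_mask"],))
token_scores = attr.sum(dim=-1).squeeze(0)             # one attribution value per token
order = token_scores.abs().argsort(descending=True)    # rank tokens by |attribution|
```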
## 6 Conclusion and future work
In this paper we have proposed an efficient and effective way of combining BM25 and cross-encoder re-rankers: injecting the BM25 score as text in the input of the cross-encoder. We find that the resulting model, \(\text{CE}_{\text{BM25CAT}}\), achieves a statistically significant improvement for all evaluated cross-encoders. Additionally, we find that our injection approach is much more effective than linearly interpolating the initial ranker and re-ranker scores. In addition, we show that \(\text{CE}_{\text{BM25CAT}}\) performs significantly better in an exact matching setting than both BM25 and \(\text{CE}_{\text{CAT}}\) individually. This suggests that injecting the BM25 score into the input could modify the current paradigm for training cross-encoder re-rankers.
While our focus is not on chasing the state of the art, we believe that, as future work, our method could be applied to any cross-encoder in the current multi-stage ranking pipelines which are state-of-the-art for the MSMARCO Passage benchmark [24]. Moreover, previous studies show that combining BM25 and BERT re-rankers on _Robust04_ [5] leads to improvement [3]. It is interesting to study the effect of injecting BM25 for this task because documents often have to be truncated to fit the maximum model input length [14]; injecting the BM25 score might give the cross-encoder re-ranker information about the lexical relevance of the whole text of the document. Another interesting direction is to study how Dense Retrievers can benefit from injecting lexical ranker scores. Moreover, injecting the scores of several lexical rankers and adding more traditional Learning-to-Rank features could also be interesting.
## Acknowledgments
This work was supported by the EU Horizon 2020 ITN/ETN on Domain Specific Systems for Information Extraction and Retrieval (H2020-EU.1.3.1., ID: 860721).
|
2301.01374 | On Hypergeometric Duality Conjecture | We give an explicit formula for the duality, previously conjectured by Horja
and Borisov, of two systems of GKZ hypergeometric PDEs. We prove that in the
appropriate limit this duality can be identified with the inverse of the Euler
characteristics pairing on cohomology of certain toric Deligne-Mumford stacks,
by way of $\Gamma$-series cohomology valued solutions to the equations. | Lev Borisov, Zengrui Han | 2023-01-03T22:06:30Z | http://arxiv.org/abs/2301.01374v1 | # On hypergeometric duality conjecture
###### Abstract.
We give an explicit formula for the duality, previously conjectured by Horja and Borisov, of two systems of GKZ hypergeometric PDEs. We prove that in the appropriate limit this duality can be identified with the inverse of the Euler characteristics pairing on cohomology of certain toric Deligne-Mumford stacks, by way of \(\Gamma\)-series cohomology valued solutions to the equations.
###### Contents
* 1 Introduction
* 2 Pairing of solutions
* 3 Pairing of the Gamma series
* 4 Euler characteristic pairing
* 5 Extensions and open questions
## 1. Introduction
Let \(C\) be a finite rational polyhedral cone in a lattice \(N=\mathbb{Z}^{\mathrm{rk}N}\). We assume that all ray generators of \(C\) lie on a primitive hyperplane \(\deg(\cdot)=1\) where \(\deg:N\to\mathbb{Z}\) is a linear function. This data encodes an affine toric variety \(X=\operatorname{Spec}\mathbb{C}[N^{\vee}\cap C^{\vee}]\), with the hyperplane condition equivalent to \(X\) being Gorenstein, i.e. having trivial dualizing sheaf.
Let \(\{v_{i}\}_{i=1}^{n}\) be a set of \(n\) lattice points in \(C\) which includes all of its ray generators, with \(\deg(v_{i})=1\) for all \(i\). One can construct crepant resolutions \(\mathbb{P}_{\Sigma}\to X\) by looking at subdivisions \(\Sigma\) of \(C\) based on triangulations that involve some of the points \(v_{i}\). Typically, \(\mathbb{P}_{\Sigma}\) is a smooth Deligne-Mumford stack rather than a smooth variety, with the rare exception of when all cones in \(\Sigma\) are unimodular.
A particular case of the Kawamata-Orlov \(K\to D\) conjecture asserts that the derived categories of coherent sheaves on \(\mathbb{P}_{\Sigma}\) are independent of the choice of \(\Sigma\). In fact, it is expected that there is an isotrivial family of triangulated categories which interpolates between the categories in question. This rather mysterious family is well understood at the level of complexified Grothendieck \(K\)-groups. Namely, these should correspond to solutions of the better-behaved GKZ hypergeometric systems of [1] associated to \(C\) and its interior \(C^{\circ}\).
Conjecture 7.3 of [1] predicts the existence of a natural pairing between the solution spaces of these two systems, as well as its identification with (the inverse of) the Euler characteristics pairing on the corresponding cohomology spaces. In this paper we are able to verify both statements and thus prove Conjecture 7.3 of [1] in full generality.
Specifically, the following formula provides the pairing in question. Let \(v\in C^{\circ}\) be an element in general position. For a subset \(I\subseteq\{1,\dots,n\}\) of size \(\operatorname{rk}N\) we consider the cone \(\sigma_{I}=\sum_{i\in I}\mathbb{R}_{\geq 0}v_{i}\). We define the coefficients \(\xi_{c,d,I}\) for \(c+d=v_{I}\) as
\[\xi_{c,d,I}=\begin{cases}&(-1)^{\deg(c)},\text{ if }\dim\sigma_{I}=\operatorname{ rk}N\text{ and both }c+\varepsilon v\text{ and }d-\varepsilon v\in\sigma_{I}^{\circ}\\ &0,\text{ otherwise.}\end{cases}\]
Here the condition has to hold for all sufficiently small \(\varepsilon>0\). As usual, we denote by \(\operatorname{Vol}_{I}\) the absolute value of the determinant of the matrix of coefficients of \(v_{i},\ i\in I\) in a basis of \(N\) (i.e., the normalized volume of \(I\)). We can now formulate the first result of this paper.
**Theorem 2.4**.: For any pair of solutions \((\Phi_{c})\) and \((\Psi_{d})\) of bbGKZ\((C,0)\) and bbGKZ\((C^{\circ},0)\) respectively, the pairing
\[\langle\Phi,\Psi\rangle=\sum_{c,d,I}\xi_{c,d,I}\operatorname{Vol}_{I}\left( \prod_{i\in I}x_{i}\right)\Phi_{c}\Psi_{d}\]
is a constant.
As was mentioned before, for a regular triangulation \(\Sigma\) there is a description of solutions to bbGKZ\((C,0)\) and bbGKZ\((C^{\circ},0)\) in terms of the Gamma series \(\Gamma=(\Gamma_{c})\) and \(\Gamma^{\circ}=(\Gamma_{d}^{\circ})\) with values in certain orbifold cohomology spaces \(H\) and \(H^{c}\) associated to \(\mathbb{P}_{\Sigma}\), considered in [1]. Then the second main result of the paper is the following.
**Theorem 4.2**.: The constant pairing \(\langle\Gamma,\Gamma^{\circ}\rangle\) is equal up to a constant factor to the inverse of the Euler characteristic pairing \(\chi(-,-):H\otimes H^{c}\to\mathbb{C}\).
The paper is organized as follows. In Section 2 we prove the above Theorem 2.4. In Section 3 we introduce the spaces \(H\) and \(H^{c}\), the solutions \(\Gamma\) and \(\Gamma^{c}\) with values in them and compute the pairing of Theorem 2.4 on them. We also calculate the asymptotic behavior of the series and their pairing in the large Kahler limit, which is used in the next section. In Section 4 we prove that this pairing is the inverse of the Euler characteristic pairing between \(H\) and \(H^{c}\). This, in particular, implies that the pairing of Theorem 2.4 is nondegenerate. Finally, in Section 5 we explain some easy extensions of our results and state some open questions.
## 2. Pairing of solutions
The goal of this section is to define a pairing between the solution spaces of the better-behaved GKZ systems associated to \(C\) and \(C^{\circ}\). We first study a particular class of pairings and find a sufficient condition to make it give a constant for any pair of solutions of better-behaved GKZ systems. Then we provide a special example of this pairing, inspired by the fan displacement
formula for the resolution of the diagonal in toric varieties, due to Fulton and Sturmfels [6].
To state the first main result of this section, we first introduce some notation. Suppose \(J\) is a subset of \(\{1,2,\ldots,n\}\) with \(|J|=\operatorname{rk}N+1\). We will call such a subset _spanning_ if \(\{v_{i},i\in J\}\) spans \(N_{\mathbb{R}}\) over \(\mathbb{R}\). For a spanning set \(J\) there is a unique (up to multiplication by a constant factor) linear relation among the vectors \(\{v_{i}\}_{i\in J}\)
\[\sum_{i\in J}a_{i}v_{i}=0.\]
We introduce \(\operatorname{sgn}:J\to\{0,\pm 1\}\) by \(\operatorname{sgn}(j)\) being \(-1\), \(0\) or \(1\) if \(a_{j}\) is negative, zero or positive, respectively. This gives a decomposition \(J=J_{+}\sqcup J_{-}\sqcup J_{0}\) of the spanning set \(J\). Note that while \(\operatorname{sgn}\) depends on the choice of scaling of the above linear relation, the expressions \(\operatorname{sgn}(j_{1})\operatorname{sgn}(j_{2})\) are well-defined.
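As a simple illustration, let \(N=\mathbb{Z}^{2}\) with \(\deg(a,b)=b\), and take \(v_{1}=(0,1)\), \(v_{2}=(2,1)\), \(v_{3}=(1,1)\) and \(J=\{1,2,3\}\). The relation, unique up to scaling, is \(v_{1}+v_{2}-2v_{3}=0\), so one may take \(\operatorname{sgn}(1)=\operatorname{sgn}(2)=1\) and \(\operatorname{sgn}(3)=-1\), i.e. \(J_{+}=\{1,2\}\), \(J_{-}=\{3\}\) and \(J_{0}=\emptyset\).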
The following lemma will be used later in this section. For a subset \(I\subseteq\{1,\ldots,n\}\) of size \(\operatorname{rk}N\) we denote by \(\operatorname{Vol}_{I}\) the normalized volume of the convex hull of the origin and \(v_{i},i\in I\).
**Lemma 2.1**.: Let \(I\subset\{1,\ldots,n\}\) be such that \(\{v_{i},i\in I\}\) form a basis of \(N_{\mathbb{R}}\). Suppose that \(I\) contains \(1\) and consider \(j\not\in I\). Consider the spanning set \(J=I\cup\{j\}\). Let \(\mu\) denote the unique linear function that takes value \(\operatorname{Vol}_{I}\) on \(v_{1}\) and \(0\) on \(v_{i},i\in I\setminus 1\). Then \(\mu(v_{j})=-\operatorname{sgn}(1)\operatorname{sgn}(j)\operatorname{Vol}_{J \setminus 1}\) for the \(\operatorname{sgn}\) defined for \(J\).
Proof.: Up to sign, we can think of the linear function \(\mu\) as taking a wedge product with \(\Lambda_{i\in I\setminus 1}v_{i}\). Thus, \(\mu(v_{j})=\pm\operatorname{Vol}_{J\setminus 1}\) and we just need to determine the sign. Since \(\{v_{i},i\in I\}\) form a basis, the coefficient \(a_{j}\) in the relation \(\sum_{i\in J}a_{i}v_{i}=0\) is nonzero and we may consider it to be \(1\), which ensures \(\operatorname{sgn}(j)=1\). We apply \(\mu\) to \(\sum_{i\in J}a_{i}v_{i}=0\) to get \(a_{1}\operatorname{Vol}_{I}+\mu(v_{j})=0\). This implies that \(a_{1}\) and \(\mu(v_{j})\) have opposite signs, and the definition of \(\operatorname{sgn}(1)\) finishes the argument.
Motivated by our previous work [2], we will look at pairings \(\langle\cdot,\cdot\rangle\) that only have monomial terms \(x_{I}=\prod_{i\in I}x_{i}\) for subsets \(I\) of \(\{1,\cdots,n\}\) of size \(\operatorname{rk}N\). The following proposition provides a sufficient condition on the pairing being a constant.
**Proposition 2.2**.: Let \(\{\xi_{c,d,I}\}\) be a collection of complex numbers for all \(c\in C,\,d\in C^{\circ},\,I\subseteq\{1,\ldots,n\}\) such that \(c+d=\sum_{i\in I}v_{i}\) and \(|I|=\operatorname{rk}N\). Suppose that
\[0=\sum_{j\in J}\operatorname{sgn}(j)\Big{(}\xi_{c-v_{j},d,J\setminus j}\chi(c -v_{j}\in C)+\xi_{c,d-v_{j},J\setminus j}\chi(d-v_{j}\in C^{\circ})\Big{)}\]
holds for all \(c\in C,\,d\in C^{\circ}\) and all spanning subsets \(J\subseteq\{1,2,\ldots,n\}\) with \(|J|=\operatorname{rk}N+1\) and \(\sum_{i\in J}v_{i}=c+d\). Here \(\chi\) denotes the characteristic
function (1 if the statement is true and 0 if it is false). Then
\[\langle\Phi,\Psi\rangle=\sum_{|I|=\operatorname{rk}N}\sum_{c+d=v_{I}}\xi_{c,d,I} \text{Vol}_{I}x_{I}\Phi_{c}\Psi_{d}\]
is a constant for any pair of solutions \((\Phi,\Psi)\).
Proof.: Without loss of generality, it suffices to show that \(\partial_{1}\langle\Phi,\Psi\rangle=0\). We compute it as follows
\[\begin{split}\partial_{1}\langle\Phi,\Psi\rangle=\sum_{ \begin{subarray}{c}|I|=\operatorname{rk}N,\\ c+d=v_{I}\end{subarray}}&\xi_{c,d,I}\text{Vol}_{I}x_{I}(\Phi_{c+ v_{1}}\Psi_{d}+\Phi_{c}\Psi_{d+v_{1}})\\ &+\sum_{\begin{subarray}{c}1\in I,|I|=\operatorname{rk}N,\\ c+d=v_{I}\end{subarray}}\xi_{c,d,I}\text{Vol}_{I}x_{I\setminus 1}\Phi_{c}\Psi_{d} \end{split} \tag{2.1}\]
and now use relations on \(\Phi\) and \(\Psi\) to manipulate the second sum. For each term, let \(\mu\) be the linear function given by
\[\mu(v)=v\wedge(\Lambda_{j\in I\setminus 1}v_{j})\]
under the standard identification of \(\Lambda^{\operatorname{rk}N}N_{\mathbb{R}}\cong\mathbb{R}\) where we choose the order of \(\{v_{j}:j\in I\backslash 1\}\) in the wedge product such that \(\mu(v_{1})=\text{Vol}_{I}\) is positive. Note that \(\mu(v_{j})=0\) for all \(j\in I\setminus 1\).
We use \(\mu(c)+\mu(d)=\mu(\sum_{i\in I}v_{i})=\text{Vol}_{I}\) and add appropriate multiples of equations for \(\Phi_{c}\) and \(\Psi_{d}\) with this \(\mu\) to get
\[\Phi_{c}\Psi_{d}\text{Vol}_{I}=-\sum_{j\notin I\setminus 1}x_{j}\mu(v_{j})( \Phi_{c+v_{j}}\Psi_{d}+\Phi_{c}\Psi_{d+v_{j}}).\]
Thus, (2.1) can be rewritten as
\[\begin{split}\partial_{1}\langle\Phi,\Psi\rangle=& \sum_{\begin{subarray}{c}|I|=\operatorname{rk}N,\\ c+d=v_{I}\end{subarray}}&\xi_{c,d,I}\text{Vol}_{I}x_{I}(\Phi_{c+ v_{1}}\Psi_{d}+\Phi_{c}\Psi_{d+v_{1}})\\ &-\sum_{\begin{subarray}{c}1\in I,|I|=\operatorname{rk}N,\\ c+d=v_{I}\end{subarray}}&\sum_{j\notin I\setminus 1}\xi_{c,d,I}\mu(v_{j})x_{I_{1 \to j}}(\Phi_{c+v_{j}}\Psi_{d}+\Phi_{c}\Psi_{d+v_{j}})\\ &=\sum_{\begin{subarray}{c}1\notin I,|I|=\operatorname{rk}N,\\ c+d=v_{I}\end{subarray}}&\xi_{c,d,I}\text{Vol}_{I}x_{I}(\Phi_{c+v_{1} }\Psi_{d}+\Phi_{c}\Psi_{d+v_{1}})\\ &-\sum_{\begin{subarray}{c}1\in I,|I|=\operatorname{rk}N,\\ c+d=v_{I}\end{subarray}}&\sum_{j\notin I}\xi_{c,d,I}\mu(v_{j})x_{I_{1 \to j}}(\Phi_{c+v_{j}}\Psi_{d}+\Phi_{c}\Psi_{d+v_{j}})\\ \end{split}\]
where we canceled the terms with \(1\in I\) in the first sum with \(j=1\) in the second sum. Here \(I_{1\to j}=I\backslash\{1\}\cup\{j\}\). Note that \(\mu\) depends on the set \(I\).
Let us now compute the coefficient at \(x_{\hat{I}}\Phi_{\hat{c}}\Psi_{\hat{d}}\) in the above expression. This coefficient gets contributions from the first sum with \(I=\hat{I}\) and from the second sum with \(I=\hat{I}\cup 1\setminus j\). We observe that \(\hat{I}\) has size \(\operatorname{rk}N\) and
does not contain \(1\). Also note that if \(\operatorname{Vol}_{\hat{I}}=0\), then the coefficient is zero. Indeed, in the second sum, \(\mu(v_{j})=\pm\operatorname{Vol}_{I_{1\to j}}\). Finally, we must have
\[\hat{c}+\hat{d}=v_{J},\ J=\{1\}\sqcup\hat{I}.\]
We look at the set \(J\) which we know to be spanning, since it contains \(\hat{I}\). By Lemma 2.1, we see that \(\mu(v_{j})=\operatorname{sgn}(1)\operatorname{sgn}(j)\operatorname{Vol}_{ \hat{I}}\). Therefore, the first line contributes
\[\xi_{\hat{c}-v_{1},\hat{d},\hat{I}}\operatorname{Vol}_{\hat{I}}\chi(\hat{c}-v_ {1}\in C)+\xi_{\hat{c},\hat{d}-v_{1},\hat{I}}\operatorname{Vol}_{\hat{I}}\chi( \hat{d}-v_{1}\in C^{\circ})\]
and the second line contributes
\[\sum_{j\in J\setminus 1}\operatorname{sgn}(1)\operatorname{sgn}(j) \xi_{\hat{c}-v_{j},\hat{d},J\setminus j}\operatorname{Vol}_{\hat{I}}\chi(\hat{ c}-v_{j}\in C)\] \[+\sum_{j\in J\setminus 1}\operatorname{sgn}(1)\operatorname{sgn}(j )\xi_{\hat{c},\hat{d}-v_{j},J\setminus j}\operatorname{Vol}_{\hat{I}}\chi( \hat{d}-v_{j}\in C^{\circ}).\]
We observe that if \(\operatorname{sgn}(1)=0\), then \(\{v_{i},i\in J\setminus 1\}\) do not span \(N_{\mathbb{R}}\), so \(\operatorname{Vol}_{\hat{I}}=0\) and the statement trivially holds. Thus we can introduce \(\operatorname{sgn}(1)^{2}\) into the first term to have the coefficient at \(x_{\hat{I}}\Phi_{\hat{c}}\Psi_{\hat{d}}\) equal
\[\operatorname{sgn}(1)\operatorname{Vol}_{\hat{I}}\sum_{j\in J}\operatorname{ sgn}(j)\Big{(}\xi_{\hat{c}-v_{j},\hat{d},J\setminus j}\chi(\hat{c}-v_{j}\in C)+ \xi_{\hat{c},\hat{d}-v_{j},J\setminus j}\chi(\hat{d}-v_{j}\in C^{\circ}) \Big{)},\]
and the claim follows.
**Remark 2.3**.: After some sign changes, one can rephrase the condition of Proposition 2.2 as \(d\xi=0\) for an appropriate element \(\xi\in\mathbb{C}[C]\otimes\mathbb{C}[C^{\circ}]\otimes\Lambda^{\operatorname{ rk}N}(\oplus_{i=1}^{n}\mathbb{C}e_{i})\) with the differential
\[d=\sum_{i=1}^{n}[v_{i}]\otimes 1\otimes(e_{i}\wedge)+\sum_{j=1}^{n}1\otimes[v_ {j}]\otimes(e_{j}\wedge)\]
on \(\xi\in\mathbb{C}[C]\otimes\mathbb{C}[C^{\circ}]\otimes\Lambda^{\bullet}(\oplus _{i=1}^{n}\mathbb{C}e_{i})\). We do not pursue this direction further in the paper.
Now we give an explicit formula of the pairing \(\langle-,-\rangle\) between solutions of the better-behaved GKZ systems \(\operatorname{bbGKZ}(C,0)\) and \(\operatorname{bbGKZ}(C^{\circ},0)\). We prove that \(\langle\Phi,\Psi\rangle\) is a constant for any pair of solutions \(\Phi\) and \(\Psi\) by using Proposition 2.2.
Fix a choice of a generic vector \(v\in C^{\circ}\). For a set \(I\) of size \(\operatorname{rk}N\) we consider the cone \(\sigma_{I}=\sum_{i\in I}\mathbb{R}_{\geq 0}v_{i}\). We define the coefficients \(\xi_{c,d,I}\) for \(c+d=v_{I}\) as
\[\xi_{c,d,I}=\begin{cases}&(-1)^{\deg(c)},\ \text{if}\ \dim\sigma_{I}=\operatorname{ rk}N\ \text{and both}\ c+\varepsilon v\ \text{and}\ d-\varepsilon v\in\sigma_{I}^{\circ}\\ &0,\ \text{otherwise}.\end{cases} \tag{2.2}\]
Here the condition has to hold for all sufficiently small \(\varepsilon>0\). It is clear that \(\xi\) is well-defined as long as the vector \(v\) is chosen sufficiently generic. Note
that \(\xi_{c,d,I}\neq 0\) implies that both \(c\) and \(d\) lie in the maximum-dimensional cone \(\sigma_{I}\) (but not necessarily in its interior).
We are now ready to tackle the main result of this section.
**Theorem 2.4**.: For any pair of solutions \((\Phi_{c})\) and \((\Psi_{d})\) of bbGKZ\((C,0)\) and bbGKZ\((C^{\circ},0)\) respectively, the pairing
\[\langle\Phi,\Psi\rangle=\sum_{c,d,I}\xi_{c,d,I}\operatorname{Vol}_{I}\left( \prod_{i\in I}x_{i}\right)\Phi_{c}\Psi_{d}\]
is a constant.
Proof.: We prove this theorem by showing that these coefficients \(\xi_{c,d,I}\) satisfy the conditions in Proposition 2.2, namely
\[0=\sum_{j\in J}\xi_{c-v_{j},d,J\setminus j}\operatorname{sgn}(j)\chi\left(c-v _{j}\in C\right)+\sum_{j\in J}\xi_{c,d-v_{j},J\setminus j}\operatorname{sgn}( j)\chi\left(d-v_{j}\in C^{\circ}\right)\]
for all spanning subsets \(J\subseteq\{1,2,\cdots,n\}\) with \(|J|=\operatorname{rk}N+1\), and all \(c\in C,\ d\in C^{\circ}\) with \(c+d=\sum_{i\in J}v_{i}\).
We first observe that the conditions \(c-v_{j}\in C\) and \(d-v_{j}\in C^{\circ}\) in the equations above are redundant. Indeed, to ensure that \(\xi_{c-v_{j},d,J\setminus j}\neq 0\) we must have \(c-v_{j}+\varepsilon v\in\sigma_{J\setminus j}^{\circ}\), which implies that \(c-v_{j}\in\sigma_{J\setminus j}\subseteq C\). For the second term, to ensure that \(\xi_{c,d-v_{j},J\setminus j}\neq 0\) we must have \(d-v_{j}-\varepsilon v\in\sigma_{J\setminus j}^{\circ}\subseteq C^{\circ}\) (since \(J\setminus j\) is a maximal cone), which implies \(d-v_{j}\in C^{\circ}\). Thus, it suffices to consider the equations
\[0=\sum_{j\in J_{+}\sqcup J_{-}}\xi_{c-v_{j},d,J\setminus j}\operatorname{sgn }(j)+\sum_{j\in J_{+}\sqcup J_{-}}\xi_{c,d-v_{j},J\setminus j}\operatorname{ sgn}(j) \tag{2.3}\]
for \(\xi\) defined in (2.2). The nonzero terms occur for the indices \(j\) such that both \(c-v_{j}+\varepsilon v\) and \(d-\varepsilon v\) lie in \(\sigma_{J\setminus j}^{\circ}\), or both \(c+\varepsilon v\) and \(d-v_{j}-\varepsilon v\) lie in \(\sigma_{J\setminus j}^{\circ}\).
We consider the equation in the variables \(a_{i}\)
\[\sum_{i\in J}a_{i}v_{i}=c+\varepsilon v.\]
The solution set to this equation is an affine line \(l_{c+\varepsilon v}\) in the space \(\mathbb{R}^{\operatorname{rk}N+1}\). A contribution to the first term of (2.3) happens when there is a point on \(l_{c+\varepsilon v}\) with \(a_{j}=1\) and all other \(a_{i}\) lie in \((0,1)\) due to the definition of the coefficient \(\xi\). Similarly, a contribution to the second term happens for \(a_{j}=0\) and all other \(a_{i}\) lie in \((0,1)\).
Recall from Lemma 2.1 that we have a decomposition \(J=J_{+}\sqcup J_{-}\sqcup J_{0}\). For \(i\in J_{0}\), the value of \(a_{i}\) on the line \(l_{c+\varepsilon v}\) is constant. Since \(v\) is generic, we may assume it to be non-integer. Thus, it either prohibits any contributions to (2.3) (if \(a_{i}\not\in(0,1)\)) or provides no restrictions. Therefore, we may now assume that the latter happens for all \(i\in J_{0}\).
The key idea of the proof is to consider the line segments
\[S_{i}=l_{c+\varepsilon v}\cap\{0\leq a_{i}\leq 1\}\]
on \(l_{c+\varepsilon v}\) for all \(i\in J_{+}\sqcup J_{-}\). The nonzero contributions to (2.3) happen exactly for the endpoints of a line segment \(S_{j}\) that lie strictly inside all other segments. The assumption that \(\varepsilon v\) is generic implies that the endpoints of different \(S_{i}\) do not coincide. Indeed, if that were the case, then \(c+\varepsilon v\) would lie in a shift of the span of \(\operatorname{rk}N-1\) of the \(v\)-s by a lattice element, and we may ensure that this does not happen. Consider now \(S=\bigcap_{i\in J_{+}\sqcup J_{-}}S_{i}\). If \(S\) is empty then there are no contributions, since no endpoint of a segment \(S_{j}\) can then lie in the interior of all the other segments. So it suffices to consider the case when \(S\) is a segment \([p,q]\). It is clear that the only points that could contribute to (2.3) are \(p\) and \(q\). In particular, there are at most two nonzero terms in (2.3). We will show that they always cancel each other.
We also note that the orientation of the segment \(S_{i}\) (i.e., the direction in which the parameter \(a_{i}\) increases) on the line \(l_{c+\varepsilon v}\) is determined by \(\operatorname{sgn}(i)\), since the vector along the line is given by the nontrivial linear relation on \(v_{k,k\in J}\). If both \(p\) and \(q\) are the \(a_{i}=1\) and \(a_{j}=1\) ends of the segments \(S_{i}\) and \(S_{j}\), then the segments must have opposite orientations on \(l_{c+\varepsilon v}\) (since they both should point towards the other point). This means that \(\operatorname{sgn}(i)=-\operatorname{sgn}(j)\) and the two terms of (2.3) cancel. Similarly, they cancel if \(p\) and \(q\) are the \(a_{i}=0\) and \(a_{j}=0\) ends of \(S_{i}\) and \(S_{j}\).
Now suppose that \(p\) and \(q\) correspond to \(a_{i}=0\) and \(a_{j}=1\) ends of \(S_{i}\) and \(S_{j}\) (in this case it is possible to have \(i=j\)). In this case the two segments must have the same orientation, and then the factor \((-1)^{\deg c}\) in the definition of \(\xi\) ensures that the two terms cancel each other.
**Remark 2.5**.: As \(v\) varies, we get a finite number of different formulas for the pairing. It is also possible to take a more uniform choice of the pairing by integrating over \(v\) of degree \(1\) (ignoring the contributions of measure zero set of nongeneric \(v\)). However, there does not appear to be any advantage in doing so. We will later see that the pairing is in fact independent of the choice of \(v\).
## 3. Pairing of the Gamma series
In this section we compute the pairing from the previous one on the cohomology-valued solutions to the better-behaved GKZ systems provided by the \(\Gamma\) series. We will show in the next section that the result is the dual of the intersection pairing which provides the proof of Conjecture 7.3 from [1].
We consider a regular triangulation \(\Sigma\) of the cone \(C\) whose vertices are among these vectors \(\{v_{i}\}_{i=1}^{n}\) and its corresponding toric Deligne-Mumford stack \(\mathbb{P}_{\Sigma}\).
**Remark 3.1**.: It will be convenient for us to abuse notation and denote by \(I\) both a subset of \(\{1,\dots,n\}\) and the corresponding cone \(\sum_{i\in I}\mathbb{R}_{\geq 0}v_{i}\).
Similarly, \(\Sigma\) denotes both a simplicial complex on \(\{1,\ldots,n\}\) and the corresponding simplicial fan in \(N_{\mathbb{R}}\) which refines \(C\) and its faces.
**Definition 3.2**.: For each cone \(\sigma\in\Sigma\) we define \(\operatorname{Box}(\sigma)\) to be the set of lattice points \(\gamma\) which can be written as \(\gamma=\sum_{i\in\sigma}\gamma_{i}v_{i}\) with \(0\leq\gamma_{i}<1\). We denote the union of all \(\operatorname{Box}(\sigma)\) by \(\operatorname{Box}(\Sigma)\). To each element \(\gamma\in\operatorname{Box}(\Sigma)\) we associate a _twisted sector_ of \(\mathbb{P}_{\Sigma}\) corresponding to the minimal cone \(\sigma(\gamma)\) in \(\Sigma\) containing \(\gamma\). We define the dual of a twisted sector \(\gamma=\sum\gamma_{i}v_{i}\) by
\[\gamma^{\vee}=\sum_{\gamma_{i}\neq 0}(1-\gamma_{i})v_{i}.\]
or equivalently, the unique element in \(\operatorname{Box}(\sigma(\gamma))\) that satisfies
\[\gamma^{\vee}=-\gamma\ \ \text{mod}\ \sum_{i\in\sigma}\mathbb{Z}v_{i}\]
**Remark 3.3**.: The dual of \(\gamma=0\) is itself. Clearly, we have \(\sigma(\gamma)=\sigma(\gamma^{\vee})\) and \((\gamma^{\vee})^{\vee}=\gamma\).
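As a purely illustrative example (not taken from the references): let \(N=\mathbb{Z}^{2}\) with the degree given by the first coordinate, and let \(\sigma\) be the cone generated by \(v_{1}=(1,0)\) and \(v_{2}=(1,3)\). Then \(\operatorname{Box}(\sigma)=\{(0,0),(1,1),(1,2)\}\), and the twisted sector \(\gamma=(1,1)=\tfrac{2}{3}v_{1}+\tfrac{1}{3}v_{2}\) has dual \(\gamma^{\vee}=\tfrac{1}{3}v_{1}+\tfrac{2}{3}v_{2}=(1,2)\); indeed \(\gamma^{\vee}\equiv-\gamma\) modulo \(\mathbb{Z}v_{1}+\mathbb{Z}v_{2}\), and \(\sigma(\gamma)=\sigma(\gamma^{\vee})=\sigma\).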
Twisted sectors are themselves smooth toric DM stacks and the following propositions describe a Stanley-Reisner type presentation of the spaces of cohomology and cohomology with compact support of their coarse moduli spaces, see [1].
**Proposition 3.4**.: As usual, \(\operatorname{Star}(\sigma(\gamma))\) denotes the set of cones in \(\Sigma\) that contain \(\sigma(\gamma)\). The cohomology space \(H_{\gamma}\) of the twisted sector \(\gamma\) is naturally isomorphic to the quotient of the polynomial ring \(\mathbb{C}[D_{i}:i\in\operatorname{Star}(\sigma(\gamma))\backslash\sigma( \gamma)]\) by the ideal generated by the relations
\[\prod_{j\in J}D_{j},\ J\not\in\operatorname{Star}(\sigma(\gamma)),\quad\text{ and }\sum_{i\in\operatorname{Star}(\sigma(\gamma))\backslash\sigma(\gamma)}\mu(v_{i})D_ {i},\ \mu\in\operatorname{Ann}(v_{i},i\in\sigma(\gamma)).\]
We can also view \(H_{\gamma}\) as a module over the polynomial ring \(\mathbb{C}[D_{1},\ldots,D_{n}]\) by declaring \(D_{i}=0\) for \(i\not\in\operatorname{Star}(\sigma(\gamma))\) and solving (uniquely) for \(D_{i},i\in\sigma(\gamma)\) to satisfy the linear relations \(\sum_{i=1}^{n}\mu(v_{i})D_{i}=0\) for all \(\mu\in N^{\vee}\).
**Proposition 3.5**.: The cohomology space with compact support \(H_{\gamma}^{c}\) (viewed as a module over \(H_{\gamma}\)) is generated by \(F_{I}\) for \(I\in\operatorname{Star}(\sigma(\gamma))\) such that \(\sigma_{I}^{\circ}\subseteq C^{\circ}\), with relations
\[D_{i}F_{I}-F_{I\cup\{i\}}\text{ for }i\not\in I,I\cup\{i\}\in \operatorname{Star}(\sigma(\gamma))\] \[\text{ and }D_{i}F_{I}\text{ for }i\not\in I,I\cup\{i\}\not\in \operatorname{Star}(\sigma(\gamma))\]
Similarly, it is given a structure of a module over \(\mathbb{C}[D_{1},\ldots,D_{n}]\).
**Definition 3.6**.: The orbifold cohomology \(H\) of the smooth toric DM stack \(\mathbb{P}_{\Sigma}\) is defined as the direct sum \(\bigoplus_{\gamma}H_{\gamma}\) over all twisted sectors. Similarly, the orbifold cohomology with compact support \(H^{c}\) is defined as \(\bigoplus_{\gamma}H_{\gamma}^{c}\). We denote by \(1_{\gamma}\) the generator of \(H_{\gamma}\).
There is a natural perfect pairing between \(H\) and \(H^{c}\) called the _Euler characteristic pairing_. Its origin is the eponymous pairing on certain Grothendieck \(K\)-groups, which is then translated to the cohomology via the Chern character, see [1]. We will not be using the original definition, but rather the following formula for the Euler characteristic pairing, which is proved in [2].
**Proposition 3.7**.: The Euler characteristic pairing \(\chi:H\otimes H^{c}\to\mathbb{C}\) on the toric DM stack \(\mathbb{P}_{\Sigma}\) is given by
\[\chi(a,b)=\chi(\oplus_{\gamma}a_{\gamma},\oplus_{\gamma}b_{\gamma})=\sum_{ \gamma}\frac{1}{|\operatorname{Box}(\sigma(\gamma))|}\int_{\gamma^{\vee}} \operatorname{Td}(\gamma^{\vee})a_{\gamma}^{*}b_{\gamma^{\vee}}\]
Here \(*:H\to H\) is the duality map given by \((1_{\gamma})^{*}=1_{\gamma^{\vee}}\) and \((D_{i})^{*}=-D_{i}\), and \(\operatorname{Td}(\gamma)\) is the Todd class of the twisted sector \(\gamma\) which is defined as
\[\operatorname{Td}(\gamma)=\frac{\prod_{i\in\operatorname{Star}\sigma(\gamma) \setminus\sigma(\gamma)}D_{i}}{\prod_{i\in\operatorname{Star}\sigma(\gamma)}( 1-e^{-D_{i}})}.\]
The linear function \(\int\colon H^{c}_{\gamma}\to\mathbb{C}\) takes values \(\frac{1}{\operatorname{Vol}_{\overline{I}}}\) on each generator \(F_{I}\), where \(\operatorname{Vol}_{\overline{I}}\) denotes the volume of the cone \(\overline{\sigma_{I}}\) in the quotient fan \(\Sigma/\sigma(\gamma)\). It takes value zero on all elements of \(H^{c}_{\gamma}\) of lower degree.
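As a small sanity check on the shape of the Todd factor defined above, the following sketch (an illustrative aside assuming only the sympy library, not part of the paper's argument) expands a single factor \(D/(1-\mathrm{e}^{-D})\) as a truncated power series; since the classes \(D_{i}\) are nilpotent in \(H_{\gamma}\), such a finite truncation already captures every surviving term.

```python
# Illustrative sanity check (not part of the argument in the text):
# expand one Todd factor D / (1 - e^{-D}) as a truncated power series.
import sympy as sp

D = sp.symbols('D')
todd_factor = D / (1 - sp.exp(-D))

# Truncate at order 6; in H_gamma the divisor classes D_i are nilpotent,
# so a finite truncation captures all surviving terms.
print(sp.series(todd_factor, D, 0, 6))
# Leading terms: 1 + D/2 + D**2/12 - D**4/720 + O(D**6)
```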
Let \(\Sigma\) be a regular (=projective) subdivision of \(C\) based on some of the \(v_{i}\). Let \(\psi_{i}\) be real numbers such that \(\Sigma\) is read off from the lower boundary of the convex hull of the origin and \(\{(v_{i},\psi_{i}),1\leq i\leq n\}\) in \(N_{\mathbb{R}}\oplus\mathbb{R}\). We assume that the \(\psi_{i}\) are generic, so that this lower boundary is simplicial. We denote by \(\psi\) the strictly convex piecewise linear function on \(C\) whose graph is the aforementioned lower boundary. It takes values \(\psi_{i}\) on all \(v_{i}\) which generate rays in \(\Sigma\) and has lower values than \(\psi_{i}\) on the other \(v_{i}\). Its key property is that for any finite collection \(w_{i}\in C\) and \(\alpha_{i}\in\mathbb{R}_{>0}\) there holds
\[\psi(\sum_{i}\alpha_{i}w_{i})\leq\sum_{i}\alpha_{i}\psi(w_{i})\]
with equality if and only if there exists a cone in \(\Sigma\) which contains all of the \(w_{i}\).
Recall from [1] the following solution to the equations bbGKZ\((C,0)\) with values in \(H=\bigoplus_{\gamma}H_{\gamma}\). We define
\[\Gamma_{c}(x_{1},\dots,x_{n})=\bigoplus_{\gamma}\sum_{l\in L_{c,\gamma}}\prod _{i=1}^{n}\frac{x_{i}^{l_{i}+\frac{D_{i}}{2\pi i}}}{\Gamma(1+l_{i}+\frac{D_{i }}{2\pi i})} \tag{3.1}\]
where the direct sum is taken over twisted sectors \(\gamma=\sum_{j\in\sigma(\gamma)}\gamma_{j}v_{j}\) and the set \(L_{c,\gamma}\) is the set of solutions to \(\sum_{i=1}^{n}l_{i}v_{i}=-c\) with \(l_{i}-\gamma_{i}\in\mathbb{Z}\) for all \(i\). The numerator is defined by picking a branch of \(\log(x_{i})\).
We will first prove that for each \(c\in C\cap N\) the series for \(\Gamma\) converges absolutely and uniformly on compacts for \(\mathbf{x}\) such that the \((-\log|x_{i}|)\) are in an appropriate shift of the cone of values on \(v_{i}\) of convex \(\Sigma\)-piecewise
linear functions. The proof was skipped in [1] because it is essentially the same as that in [4], but we will present it here, both for completeness and to facilitate arguments about the asymptotic behavior of \(\Gamma_{c}\).
**Proposition 3.8**.: We denote by \(C_{\Sigma}\) the cone of the secondary fan that corresponds to \(\Sigma\), i.e. the cone of \((\psi_{i})\in\mathbb{R}^{n}\) that give rise to \(\Sigma\). For each \(c\in C\cap N\) there exists \(\hat{\psi}\in\mathbb{R}^{n}\) such that the series (3.1) converges absolutely and uniformly on compacts in the region of \(\mathbb{C}^{n}\)
\[\{(-\log|x_{1}|,\ldots,-\log|x_{n}|)\in\hat{\psi}+C_{\Sigma},\ \arg(\mathbf{x}) \in(-\pi,\pi)^{n}\}. \tag{3.2}\]
Proof.: An immediate observation is that we can ignore the factor
\[\prod_{i=1}^{n}x_{i}^{\frac{D_{i}}{2\pi i}}=\prod_{i=1}^{n}\mathrm{e}^{\frac{D _{i}\log x_{i}}{2\pi i}}\]
because it does not depend on \(l\) and is bounded on compacts in the region (3.2).
It suffices to understand what happens for a fixed \(\gamma\). Note that while the summation takes place over an affine lattice \(L_{c,\gamma}\), the nonzero contributions only occur for \((l_{1},\ldots,l_{n})\) such that the set
\[I(l)=\{i,l_{i}\in\mathbb{Z}_{<0}\}\sqcup\sigma(\gamma)\]
is a cone \(\sigma\) in \(\Sigma\), because each \(l_{i}\in\mathbb{Z}_{<0}\) contributes a factor \(D_{i}\) due to a pole of \(\Gamma\) at a nonpositive integer. Consequently, it suffices to bound the summation over the subset \(L_{c,\gamma,\sigma}\) of \(L_{c,\gamma}\) with the additional property that the above defined \(I(l)\) is a subset of some fixed maximum-dimensional cone \(\sigma\) of \(\Sigma\) that contains \(\sigma(\gamma)\). For any such \(l\in L_{c,\gamma,\sigma}\) we have
\[\sum_{i,l_{i}<0}(-l_{i})v_{i}=\sum_{i,l_{i}\geq 0}l_{i}v_{i}+c.\]
Let us denote by \(\psi\) the \(\Sigma\)-piecewise linear convex function that corresponds to \((-\hat{\psi}_{i}-\log|x_{i}|)\) by the assumption on \(\mathbf{x}\). Since the \(v_{i}\) on the left hand side of the above equation lie in \(\sigma\in\Sigma\), we have
\[\sum_{i,l_{i}<0}(-l_{i})(-\hat{\psi}_{i}-\log|x_{i}|) =\psi(\sum_{i,l_{i}<0}(-l_{i})v_{i})\leq\sum_{i,l_{i}\geq 0}l_{i} \psi(v_{i})+\psi(c)\] \[=\sum_{i,l_{i}\geq 0}l_{i}(-\hat{\psi}_{i}-\log|x_{i}|)+\psi(c)\]
and therefore
\[\sum_{i=1}^{n}l_{i}\log|x_{i}|\leq-\sum_{i=1}^{n}l_{i}\hat{\psi}_{i}+\psi(c). \tag{3.3}\]
This leads to an upper bound
\[\Big{|}\prod_{i=1}^{n}x_{i}^{l_{i}}\Big{|}\leq\mathrm{e}^{\psi(c)}\mathrm{e}^{-\sum_{i=1}^{n}l_{i}\hat{\psi}_{i}}. \tag{3.4}\]
Crucially, since all \(v_{i}\) have degree \(1\), we see that \(\sum_{i}l_{i}=-\deg c\). Thus, we can apply the key estimate of [4, Lemma A.4] which states that for any \(\delta>0\) and any collection of real numbers \(a_{i},b_{i}\) for \(i=1,\ldots,n\) with
\[|\sum_{i}a_{i}|\leq\delta,\ \sum_{i}|b_{i}|\leq\delta\]
there exists a constant \(A\) such that
\[\Big{|}\prod_{i=1}^{n}\frac{1}{\Gamma(a_{i}+\mathrm{i}b_{i})}\Big{|}\leq A(4n )^{\sum_{i=1}^{n}|a_{i}|}.\]
By Cauchy's formula for partial derivatives, this implies an upper bound of the form \(A_{1}(A_{2})^{\sum_{i=1}^{n}|l_{i}|}\) on the coefficients of all monomials in \(D_{i}\) of bounded degree of the function
\[\prod_{i=1}^{n}\frac{1}{\Gamma(1+l_{i}+\frac{D_{i}}{2\pi\mathrm{i}})}.\]
Together with (3.4), we conclude that in any Euclidean norm on \(H_{\gamma}\) the absolute value of each term of the series is bounded by
\[\Big{|}\prod_{i=1}^{n}\frac{x_{i}^{l_{i}}}{\Gamma(1+l_{i}+\frac{D_{i}}{2\pi\mathrm{i}})}\Big{|}\leq A_{1}(A_{2})^{\sum_{i=1}^{n}|l_{i}|}\Big{|}\prod_{i=1}^{n}x_{i}^{l_{i}}\Big{|}\leq A_{3}(A_{2})^{\sum_{i=1}^{n}|l_{i}|}\mathrm{e}^{-\sum_{i=1}^{n}l_{i}\hat{\psi}_{i}}. \tag{3.5}\]
We observe that the set \(L_{c,\gamma,\sigma}\) is the set of lattice points in a shift of a (lower-dimensional) polyhedral cone \(C_{\sigma}\) in \(\mathbb{R}^{n}\) given by the equality \(\sum_{i=1}^{n}l_{i}v_{i}=0\) and inequalities \(l_{i}\geq 0\) for all \(i\not\in\sigma\). We may assume \(\hat{\psi}\) to give a strictly \(\Sigma\)-convex function. It then follows that for any ray generator \(l\) of \(C_{\sigma}\) there holds
\[\sum_{i}l_{i}\hat{\psi}_{i}>0.\]
Indeed, by convexity for \(\hat{\psi}\) for any \(l\in C_{\sigma}\) we have the inequality \(\sum_{i}l_{i}\hat{\psi}_{i}\geq 0\) (the proof is the same as that of (3.3)) which holds even if \(\hat{\psi}\) is deformed slightly, so it can only be equality for \(l=0\). As a consequence, there is a constant \(r\) such that
\[\sum_{i=1}^{n}|l_{i}|\leq r(\sum_{i}l_{i}\hat{\psi}_{i})\]
on \(C_{\sigma}\).
Therefore, we can replace \(\hat{\psi}\) by a large enough multiple of itself and use (3.5) to get on any compact subset of the region (3.2)
\[\Big{|}\prod_{i=1}^{n}\frac{x_{i}^{l_{i}}}{\Gamma(1+l_{i}+\frac{D_{i}}{2\pi \mathrm{i}})}\Big{|}\leq A_{4}\mathrm{e}^{-A_{5}\sum_{i=1}^{n}l_{i}\hat{\psi}_ {i}}\]
for some \(A_{5}>0\). Since the number of terms in \(L_{c,\gamma,\sigma}\) with \(\sum_{i=1}^{n}l_{i}\hat{\psi}_{i}\in[m,m+1)\) is bounded by a polynomial in \(m\), we get the desired convergence.
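The Gamma-reciprocal estimate quoted above from [4, Lemma A.4] can also be probed numerically. The following sketch (an illustrative aside assuming the mpmath library; it is not part of the proof) samples tuples satisfying the hypotheses and records the ratio \(\big|\prod_{i}1/\Gamma(a_{i}+\mathrm{i}b_{i})\big|/(4n)^{\sum_{i}|a_{i}|}\), which the lemma asserts stays bounded.

```python
# Numerical probe of the estimate |prod_i 1/Gamma(a_i + i*b_i)| <= A * (4n)^{sum_i |a_i|},
# under the hypotheses |sum_i a_i| <= delta and sum_i |b_i| <= delta.  Illustrative only.
import random
from mpmath import gamma, mpc, fabs

n, delta, trials = 4, 0.5, 2000
worst_ratio = 0.0
for _ in range(trials):
    a = [random.uniform(-20, 20) for _ in range(n)]
    a[-1] = -sum(a[:-1]) + random.uniform(-delta, delta)           # enforce |sum a_i| <= delta
    b = [random.uniform(-delta / n, delta / n) for _ in range(n)]  # enforce sum |b_i| <= delta
    prod = 1
    for ai, bi in zip(a, b):
        prod *= 1 / gamma(mpc(ai, bi))
    ratio = fabs(prod) / (4 * n) ** sum(abs(ai) for ai in a)
    worst_ratio = max(worst_ratio, float(ratio))
print("largest observed ratio:", worst_ratio)  # remains bounded, consistent with the estimate
```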
There is a similarly defined \(\Gamma\)-series solution \(\Gamma^{\circ}\) of bbGKZ\((C^{\circ},0)\), with values in \(H^{c}=\bigoplus_{\gamma}H^{c}_{\gamma}\). We define
\[\Gamma^{\circ}_{c}(x_{1},\ldots,x_{n})=\bigoplus_{\gamma}\sum_{l\in L_{c, \gamma}}\prod_{i=1}^{n}\frac{x_{i}^{l_{i}+\frac{D_{i}}{2\pi\mathrm{i}}}}{\Gamma (1+l_{i}+\frac{D_{i}}{2\pi\mathrm{i}})}\left(\prod_{i\in\sigma}D_{i}^{-1} \right)F_{\sigma}\]
where \(\sigma\) is the set of \(i\) with \(l_{i}\in\mathbb{Z}_{<0}\).
**Proposition 3.9**.: The series \(\Gamma^{\circ}\) converges uniformly on compacts in the region (3.2) for an appropriate choice of \(\hat{\psi}\).
Proof.: The idea of the proof is the same as that of Proposition 3.8, and we leave the details to the reader.
Our next goal is to understand the asymptotic behavior of
\[\Gamma_{c}(t^{-\psi(v_{1})}x_{1},\ldots,t^{-\psi(v_{n})}x_{n})\]
for real \(t\to+\infty\). We can assume \(x_{i}\) to be generic nonzero complex numbers, so that for large enough \(t\) we fall within the range of convergence of \(\Gamma\).
For each \(c\) we consider the minimal cone \(\sigma(c)\) of \(\Sigma\) that contains \(c\). We have \(c=\sum_{j\in\sigma(c)}c_{j}v_{j}\). It defines a twisted sector \(\gamma(c)=\sum_{j\in\sigma(c)}\{c_{j}\}v_{j}\). We also consider the dual twisted sector \(\gamma^{\vee}(c)=\sum_{j\in\sigma(c),c_{j}\not\in\mathbb{Z}}(1-\{c_{j}\})v_{j}\). There is a special element
\[-c=\sum_{i\in\sigma(c)}(-c_{i})v_{i} \tag{3.6}\]
in \(L_{c,\gamma^{\vee}(c)}\).
**Lemma 3.10**.: As \(t\to+\infty\), for \(c\in C\cap N\) and \(\gamma\neq\gamma^{\vee}(c)\), the \(\gamma\) summand of \(\Gamma_{c}(t^{-\psi(v_{1})}x_{1},\ldots,t^{-\psi(v_{n})}x_{n})\) is \(o(t^{\psi(c)})\). For \(\gamma=\gamma^{\vee}(c)\) we have
\[\Gamma_{c}(t^{-\psi(v_{1})}x_{1},\ldots)=t^{\psi(c)}\prod_{i=1}^{n}\mathrm{e} ^{\frac{D_{i}}{2\pi\mathrm{i}}(\log x_{i}-\psi(v_{i})\log t)}\prod_{i=1}^{n} \frac{x_{i}^{-c_{i}}}{\Gamma(1-c_{i}+\frac{D_{i}}{2\pi\mathrm{i}})}(1+o(1)).\]
Proof.: Let \(\gamma=\sum_{j\in\sigma(\gamma)}\gamma_{j}v_{j}\). Let \((l_{i})\) be an element of \(L_{c,\gamma}\). The contribution to \(\Gamma_{c}(t^{-\psi(v_{1})}x_{1},\ldots)\) is only nonzero if the set of \(i\) for which \(l_{i}\in\mathbb{Z}_{<0}\) together with \(\sigma(\gamma)\) is a cone in \(\Sigma\). Consequently, \(i\) for which \(l_{i}\) are negative lie in a cone of \(\Sigma\). Therefore,
\[\sum_{l_{i}<0}(-l_{i})\psi(v_{i})=\psi(\sum_{l_{i}<0}(-l_{i})v_{i})=\psi(c+ \sum_{l_{i}>0}l_{i}v_{i})\leq\sum_{l_{i}>0}l_{i}\psi(v_{i})+\psi(c), \tag{3.7}\]
which implies
\[-\sum_{i=1}^{n}l_{i}\psi(v_{i})\leq\psi(c). \tag{3.8}\]
Now notice that the equality in (3.8) holds if and only if the minimal cone of \(\sum_{l_{i}<0}(-l_{i})v_{i}\) is a cone in \(\Sigma\) which contains \(c\) and all \(v_{i}\) with \(l_{i}>0\). This cone would then contain \(c\) and all \(v_{i}\) for which \(l_{i}\neq 0\). This means that \(l_{i}=-c_{i}\), which implies that \(\gamma=\gamma^{\vee}(c)\). This gives the claimed asymptotic contribution.
It is not enough to bound the asymptotic behavior of each individual term as \(t\to\infty\); one also needs to ensure that the rest of the terms _together_ do not contribute anything larger than \(o(t^{\psi(c)})\). This follows either from the estimates of Proposition 3.8 or simply from the fact that we have absolute convergence at \(\mathbf{x}\), so that all other terms decay faster. Indeed, if we have an absolutely convergent series \(\sum_{i\geq 0}a_{i}\) and then consider \(\sum_{i\geq 0}a_{i}t^{\alpha_{i}}\) with \(\alpha_{0}-\alpha_{i}\) larger than some positive \(\varepsilon\) for all \(i>0\), then as \(t\to\infty\) we have
\[\sum_{i\geq 0}a_{i}t^{\alpha_{i}}=a_{0}t^{\alpha_{0}}(1+o(1))\]
because
\[\Big{|}\sum_{i>0}a_{i}t^{\alpha_{i}-\alpha_{0}}\Big{|}\leq t^{-\varepsilon} \sum_{i>0}|a_{i}|.\]
We can apply this to our situation since the \((l_{i})\) range over a countable set and there exists \(\varepsilon>0\) so that for all other terms the inequality (3.8) is strict by at least \(\varepsilon\). The logarithmic terms \(\prod_{i}(t^{-\psi(v_{i})}x_{i})^{\frac{D_{i}}{2\pi\mathrm{i}}}\) can be absorbed by a slight change of \(\varepsilon\).
We can state a similar result for \(\Gamma^{\circ}\). For \(d\in C^{\circ}\) we consider the element of \(L_{d,\gamma^{\vee}(d)}\)
\[-d=\sum_{i\in\sigma(d)}(-d_{i})v_{i}.\]
**Lemma 3.11**.: As \(t\to+\infty\), for \(d\in C^{\circ}\cap N\) and \(\gamma\neq\gamma^{\vee}(d)\), the \(\gamma\) summand of \(\Gamma^{\circ}_{d}(t^{-\psi(v_{1})}x_{1},\ldots,t^{-\psi(v_{n})}x_{n})\) is \(o(t^{\psi(d)})\). For \(\gamma=\gamma^{\vee}(d)\) we have
\[\Gamma^{\circ}_{d}(t^{-\psi(v_{1})}x_{1},\ldots)=t^{\psi(d)}\prod_{i=1}^ {n}\mathrm{e}^{\frac{D_{i}}{2\pi\mathrm{i}}(\log x_{i}-\psi(v_{i})\log t)}\prod_{i=1} ^{n}\frac{x_{i}^{-d_{i}}}{\Gamma(1-d_{i}+\frac{D_{i}}{2\pi\mathrm{i}})}\] \[\left(\prod_{i\in\sigma(d)}D_{i}^{-1}\right)F_{\sigma(d)}(1+o(1)).\]
Proof.: The proof is analogous to that of Lemma 3.10 and is left to the reader.
Now we use this information about the asymptotic behavior of \(\Gamma\) and \(\Gamma^{\circ}\) to compute the constant \(\langle\Gamma,\Gamma^{\circ}\rangle=\sum_{c,d,I}\xi_{c,d,I}\operatorname{Vol}_{I} \left(\prod_{i\in I}x_{i}\right)\Gamma_{c}\otimes\Gamma_{d}^{\circ}\) where \(\xi\) are defined in Theorem 2.4.
As in Section 2, let \(I\) be a subset of \(\{1,\ldots,n\}\) of size \(\operatorname{rk}N\), which may or may not be a cone in \(\Sigma\). Let \(c\) and \(d\) be such that \(c+d=\sum_{i\in I}v_{i}\) and \(c+\varepsilon v,d-\varepsilon v\in\sum_{i\in I}\mathbb{R}_{\geq 0}v_{i}\) for small \(\varepsilon>0\). The following observation is key.
**Proposition 3.12**.: Under the above assumptions on \(c,d,I\) we have
\[\lim_{t\to+\infty}\prod_{i=1}^{n}(t^{-\psi(v_{i})}x_{i})\Gamma_{c}(t^{-\psi(v_ {1})}x_{1},\ldots)\Gamma_{d}^{\circ}(t^{-\psi(v_{1})}x_{1},\ldots)=0\]
unless \(\gamma(d)=\gamma^{\vee}(c)\) and \(I\) contains \(\sigma(\gamma(c))\).
Proof.: Since \(c\) and \(d\) are contained in \(\sum_{i\in I}\mathbb{R}_{\geq 0}v_{i}\) and \(c+d=v_{I}\), we have
\[c=\sum_{i\in I}\alpha_{i}v_{i},\ d=\sum_{i\in I}(1-\alpha_{i})v_{i}\]
with \(\alpha_{i}\in[0,1]\). Convexity of \(\psi\) implies that
\[\psi(c)\leq\sum_{i}\alpha_{i}\psi(v_{i}),\ \psi(d)\leq\sum_{i}(1-\alpha_{i})\psi(v _{i}) \tag{3.9}\]
which leads to \(\psi(c)+\psi(d)-\sum_{i}\psi(v_{i})\leq 0\), so we can use Lemmas 3.10 and 3.11 to see that the leading power of \(t\) is nonpositive. In fact, it is negative, unless the inequalities in (3.9) are equalities, which means that the subset of \(I\) for which \(\alpha_{i}>0\) is a cone in \(\Sigma\), and similarly for the subset of \(\alpha_{i}<1\). This implies the claim.
**Proposition 3.13**.: If \(\gamma(c)=\gamma^{\vee},\ \gamma(d)=\gamma=\sum_{i\in I}\gamma_{i}v_{i}\), then we define \(I_{c}\) to be the subset of \(I\) such that the coefficients \(c_{i}\) of \(c\) are equal to \(1\) and similarly for \(I_{d}\). The asymptotic behavior as \(t\to\infty\) is
\[\prod_{i=1}^{n}(t^{-\psi(v_{i})}x_{i})\Gamma_{c}(t^{-\psi(v_{1})} x_{1},\ldots)\Gamma_{d}^{\circ}(t^{-\psi(v_{1})}x_{1},\ldots)=o(1)+\frac{1}{(2 \pi\mathrm{i})^{\operatorname{rk}N-|\sigma(\gamma)|}}\] \[\cdot\frac{D_{I_{c}}}{\prod_{i\in\sigma(\gamma)}\Gamma(\gamma_{i }+\frac{D_{i}}{2\pi\mathrm{i}})\prod_{i\in\operatorname{Star}(\sigma(\gamma)) \setminus\sigma(\gamma)}\Gamma(1+\frac{D_{i}}{2\pi\mathrm{i}})}\prod_{i=1}^{ n}\mathrm{e}^{\frac{D_{i}}{2\pi\mathrm{i}}(\log x_{i}-\psi(v_{i})\log t)}\] \[\bigotimes\frac{F_{I_{d}}}{\prod_{i\in\sigma(\gamma)}\Gamma(1- \gamma_{i}+\frac{D_{i}}{2\pi\mathrm{i}})\prod_{i\in\operatorname{Star}(\sigma( \gamma))\setminus\sigma(\gamma)}\Gamma(1+\frac{D_{i}}{2\pi\mathrm{i}})}\prod_ {i=1}^{n}\mathrm{e}^{\frac{D_{i}}{2\pi\mathrm{i}}(\log x_{i}-\psi(v_{i})\log t)}\]
in \(H_{\gamma}\otimes H_{\gamma^{\vee}}^{c}\).
Proof.: The proof of Proposition 3.12 shows that the only contribution other than \(o(1)\) can come from the terms that give better than \(o(t^{\psi(c)})\) and \(o(t^{\psi(d)})\) contributions to the asymptotic behavior of \(\Gamma_{c}\) and \(\Gamma^{\circ}_{d}\). So by Lemmas 3.10 and 3.11 the only contributions come from elements of \(L_{c,\gamma^{\vee}}\) and \(L_{d,\gamma}\) given by
\[-c=\sum_{i\in I}(-c_{i})v_{i},\ -d=\sum_{i\in I}(c_{i}-1)v_{i}.\]
For \(i\in\sigma(\gamma)\) we note that \(\gamma_{i}=1-c_{i}\) if \(c_{i}\in(0,1)\). For \(i\in I_{c}\) we use
\[\frac{1}{\Gamma(1-c_{i}+\frac{D_{i}}{2\pi\mathrm{i}})}=\frac{1}{\Gamma(\frac{D _{i}}{2\pi\mathrm{i}})}=\frac{\frac{D_{i}}{2\pi\mathrm{i}}}{\Gamma(1+\frac{D_{ i}}{2\pi\mathrm{i}})}\]
and similarly for \(i\in I_{d}\), and the result follows.
Now we recall that \(\langle\Gamma,\Gamma^{\circ}\rangle\) is constant.
**Corollary 3.14**.: The constant pairing \(\langle\Gamma,\Gamma^{\circ}\rangle\) lies in \(\bigoplus_{\gamma}H_{\gamma}\otimes H_{\gamma^{\vee}}^{c}\) and is given by
\[\frac{1}{(2\pi\mathrm{i})^{\mathrm{rk}\,N}}\bigoplus_{\gamma}\sum_{\begin{subarray} {c}c\in C,d\in C^{\circ}\\ |I|=\mathrm{rk}\,N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}(2\pi\mathrm{ i})^{|\sigma(\gamma)|}\frac{D_{I_{c}}}{\hat{\Gamma}_{\gamma}}\otimes\frac{F_{I_{ d}}}{\hat{\Gamma}_{\gamma^{\vee}}}\]
where \(\hat{\Gamma}_{\gamma}=\prod_{i\in\sigma(\gamma)}\Gamma(\gamma_{i}+\frac{D_{i }}{2\pi\mathrm{i}})\prod_{i\in\operatorname{Star}(\sigma(\gamma))\setminus \sigma(\gamma)}\Gamma(1+\frac{D_{i}}{2\pi\mathrm{i}})\) and similarly for \(\hat{\Gamma}_{\gamma^{\vee}}\). There also holds for each \(k\)
\[0 =\bigoplus_{\gamma}\sum_{\begin{subarray}{c}c\in C,d\in C^{ \circ}\\ |I|=\mathrm{rk}\,N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}(2\pi \mathrm{i})^{|\sigma(\gamma)|}\Big{(}D_{k}\frac{D_{I_{c}}}{\hat{\Gamma}_{ \gamma}}\Big{)}\otimes\frac{F_{I_{d}}}{\hat{\Gamma}_{\gamma^{\vee}}}\] \[+\bigoplus_{\gamma}\sum_{\begin{subarray}{c}c\in C,d\in C^{ \circ}\\ |I|=\mathrm{rk}\,N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}(2\pi \mathrm{i})^{|\sigma(\gamma)|}\frac{D_{I_{c}}}{\hat{\Gamma}_{\gamma}}\otimes \Big{(}D_{k}\frac{F_{I_{d}}}{\hat{\Gamma}_{\gamma^{\vee}}}\Big{)}.\]
Proof.: Proposition 3.13 gives the asymptotic behavior of \(\langle\Gamma,\Gamma^{\circ}\rangle\) as a polynomial in \(\log x_{i}\). However, we also know it is a constant by Theorem 2.4. The first statement of the corollary is obtained by reading off the constant term of the polynomial, and the second by reading off the coefficient of \(\log x_{k}\).
## 4. Euler characteristic pairing
Now we are ready to prove that the pairing of Gamma series \(\langle\Gamma,\Gamma^{\circ}\rangle\) is inverse to the Euler characteristic pairing on \(\mathbb{P}_{\Sigma}\). Before we state the main theorem of this section, we have the following useful observation, which is an orbifold analog of the relationship between the \(\Gamma\)-class and the Todd class of a smooth manifold. Recall that \(*\) is the duality map on \(H\) defined in Proposition 3.7.
**Lemma 4.1**.: \((\hat{\Gamma}_{\gamma})^{*}\hat{\Gamma}_{\gamma^{\vee}}=(2\pi\mathrm{i})^{| \sigma(\gamma)|}(-1)^{\deg\gamma^{\vee}}\operatorname{Td}(\gamma^{\vee})\)_._
Proof.: We can expand \((\widehat{\Gamma}_{\gamma})^{*}\widehat{\Gamma}_{\gamma^{\vee}}\) as
\[\prod_{i\in\sigma(\gamma)}\Gamma(\gamma_{i}+\frac{D_{i}}{2\pi \mathrm{i}})^{*}\Gamma(1-\gamma_{i}+\frac{D_{i}}{2\pi\mathrm{i}}) \cdot\prod_{i\in\operatorname{Star}(\sigma(\gamma))\setminus\sigma(\gamma)} \Gamma(1+\frac{D_{i}}{2\pi\mathrm{i}})^{*}\Gamma(1+\frac{D_{i}}{2\pi\mathrm{i}})\] \[=\prod_{i\in\sigma(\gamma)}\Gamma(\gamma_{i}-\frac{D_{i}}{2\pi \mathrm{i}})\Gamma(1-\gamma_{i}+\frac{D_{i}}{2\pi\mathrm{i}}) \cdot\prod_{i\in\operatorname{Star}(\sigma(\gamma))\setminus\sigma(\gamma)} \Gamma(1-\frac{D_{i}}{2\pi\mathrm{i}})\Gamma(1+\frac{D_{i}}{2\pi\mathrm{i}}).\]
We use the identity \(\Gamma(z)\Gamma(1-z)=-\frac{2\pi\mathrm{i}\mathrm{e}^{\pi\mathrm{i}z}}{1- \mathrm{e}^{2\pi\mathrm{i}z}}\) to rewrite the first product as
\[(-2\pi\mathrm{i})^{|\sigma(\gamma)|}\mathrm{e}^{\sum_{i\in\sigma(\gamma)} \pi\mathrm{i}\gamma_{i}}\mathrm{e}^{-\frac{1}{2}\sum_{i\in\sigma(\gamma)}D_{i }}\prod_{i\in\sigma(\gamma)}\frac{1}{1-\mathrm{e}^{2\pi\mathrm{i}\gamma_{i}- D_{i}}}.\]
For the second product, we use \(\Gamma(1-\frac{z}{2\pi\mathrm{i}})\Gamma(1+\frac{z}{2\pi\mathrm{i}})=\frac{ ze^{-\frac{z}{2}}}{1-\mathrm{e}^{-z}}\) to rewrite it as
\[\mathrm{e}^{-\frac{1}{2}\sum_{i\in\operatorname{Star}(\sigma(\gamma))\setminus \sigma(\gamma)}D_{i}}\prod_{i\in\operatorname{Star}(\sigma(\gamma))\setminus \sigma(\gamma)}\frac{D_{i}}{1-\mathrm{e}^{-D_{i}}}.\]
Putting the two formulas together, we get
\[(\widehat{\Gamma}_{\gamma})^{*}\widehat{\Gamma}_{\gamma^{\vee}} =(2\pi\mathrm{i})^{|\sigma(\gamma)|}(-1)^{\deg\gamma^{\vee}}\mathrm{e}^{-\frac {1}{2}\sum_{i\in\operatorname{Star}(\sigma(\gamma))}D_{i}}\frac{\prod_{i\in \operatorname{Star}(\sigma(\gamma))\setminus\sigma(\gamma)}D_{i}}{\prod_{i\in \operatorname{Star}(\sigma(\gamma))}1-\mathrm{e}^{-D_{i}}}\] \[=(2\pi\mathrm{i})^{|\sigma(\gamma)|}(-1)^{\deg\gamma^{\vee}} \operatorname{Td}(\gamma^{\vee})\]
where we used \(\sum_{i\in\operatorname{Star}(\sigma(\gamma))}D_{i}=\sum_{i=1}^{n}D_{i}=0\).
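The two classical Gamma-function identities invoked in this proof are easy to check numerically; the following sketch (an illustrative aside assuming the mpmath library) verifies both at an arbitrary complex test point.

```python
# Numerical check of the two Gamma-function identities used in the proof of Lemma 4.1.
from mpmath import gamma, exp, pi, mpc, mp

mp.dps = 30
z = mpc(0.3, 0.7)   # arbitrary complex test point
I = mpc(0, 1)

# Gamma(z) * Gamma(1-z) = -2*pi*i * e^{pi*i*z} / (1 - e^{2*pi*i*z})
lhs1 = gamma(z) * gamma(1 - z)
rhs1 = -2 * pi * I * exp(pi * I * z) / (1 - exp(2 * pi * I * z))
print(abs(lhs1 - rhs1))   # ~ 0 up to the working precision

# Gamma(1 - z/(2*pi*i)) * Gamma(1 + z/(2*pi*i)) = z * e^{-z/2} / (1 - e^{-z})
w = z / (2 * pi * I)
lhs2 = gamma(1 - w) * gamma(1 + w)
rhs2 = z * exp(-z / 2) / (1 - exp(-z))
print(abs(lhs2 - rhs2))   # ~ 0 up to the working precision
```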
Now we can state and prove the main theorem of this section. Recall that we defined the pairing \(\langle\cdot,\cdot\rangle\) on solutions of the better-behaved GKZ systems. When we apply it to \(\Gamma\) and \(\Gamma^{\circ}\), we get a constant element of \(H\otimes H^{c}\).
**Theorem 4.2**.: The constant pairing \(\langle\Gamma,\Gamma^{\circ}\rangle\) is equal up to a constant factor to the inverse of the Euler characteristic pairing \(\chi(-,-):H\otimes H^{c}\to\mathbb{C}\).
Proof.: It's clear that we can consider each twisted sector individually. For a fixed \(\gamma\), the statement is equivalent to the assertion that
\[\sum_{\begin{subarray}{c}c\in C,d\in C^{\circ}\\ |I|=\operatorname{rk}N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}(2\pi \mathrm{i})^{|\sigma(\gamma)|}\chi\left(P,\frac{F_{I_{d}}}{\widehat{\Gamma}_{ \gamma^{\vee}}}\right)\frac{D_{I_{c}}}{\widehat{\Gamma}_{\gamma}}=P\]
holds for all classes \(P\in H_{\gamma}\). Since the class \(\widehat{\Gamma}_{\gamma}\) is invertible in \(H_{\gamma}\), dividing by it induces an automorphism on the cohomology, hence it suffices to prove
\[\sum_{\begin{subarray}{c}c\in C,d\in C^{\circ}\\ |I|=\operatorname{rk}N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}(2\pi \mathrm{i})^{|\sigma(\gamma)|}\chi\left(\frac{P}{\widehat{\Gamma}_{\gamma}}, \frac{F_{I_{d}}}{\widehat{\Gamma}_{\gamma^{\vee}}}\right)\frac{D_{I_{c}}}{ \widehat{\Gamma}_{\gamma}}=\frac{P}{\widehat{\Gamma}_{\gamma}} \tag{4.1}\]
for all \(P\). We prove this by induction on the degree of \(P\).
The base case \(\deg P=0\) corresponds to \(P=1_{\gamma}\). Since
\[\chi\left(\frac{1_{\gamma}}{\widehat{\Gamma}_{\gamma}},\frac{F_{I_{d}}}{\widehat {\Gamma}_{\gamma^{\vee}}}\right)=0\]
unless \(|I_{d}|=\operatorname{rk}N-|\sigma(\gamma)|\), the equation becomes
\[\sum_{|I_{d}|=\operatorname{rk}N-|\sigma(\gamma)|}\xi_{\gamma^{ \vee},\gamma+v_{I_{d}},I_{d}\sqcup\sigma(\gamma)}\operatorname{Vol}_{I_{d} \sqcup\sigma(\gamma)}(2\pi\mathrm{i})^{|\sigma(\gamma)|}\chi\left(\frac{1_{ \gamma}}{\widehat{\Gamma}_{\gamma}},\frac{F_{I_{d}}}{\widehat{\Gamma}_{\gamma^ {\vee}}}\right)\frac{1_{\gamma}}{\widehat{\Gamma}_{\gamma}}=\frac{1_{\gamma}}{ \widehat{\Gamma}_{\gamma}}. \tag{4.2}\]
Then by definition of \(\chi\) and Lemma 4.1, we have
\[\chi\left(\frac{1_{\gamma}}{\widehat{\Gamma}_{\gamma}},\frac{F_{ I_{d}}}{\widehat{\Gamma}_{\gamma^{\vee}}}\right) =\frac{1}{|\operatorname{Box}(\sigma(\gamma))|}\int_{\gamma^{ \vee}}\operatorname{Td}(\gamma^{\vee})\left(\frac{1}{\widehat{\Gamma}_{ \gamma}}\right)^{*}\frac{F_{I_{d}}}{\widehat{\Gamma}_{\gamma^{\vee}}}\] \[=\frac{1}{|\operatorname{Box}(\sigma(\gamma))|}\int_{\gamma^{ \vee}}\frac{F_{I_{d}}}{(\widehat{\Gamma}_{\gamma})^{*}\widehat{\Gamma}_{\gamma ^{\vee}}}\operatorname{Td}(\gamma^{\vee})\] \[=\frac{1}{|\operatorname{Box}(\sigma(\gamma))|}\int_{\gamma^{ \vee}}\frac{F_{I_{d}}}{(2\pi\mathrm{i})^{|\sigma(\gamma)|}(-1)^{\deg\gamma^{ \vee}}}\] \[=\frac{(-1)^{\deg\gamma^{\vee}}}{(2\pi\mathrm{i})^{|\sigma(\gamma )|}\operatorname{Vol}_{\overline{I_{d}}}|\operatorname{Box}(\sigma(\gamma))|}\]
here \(\operatorname{Vol}_{\overline{I_{d}}}\) denotes the volume of the cone \(\overline{\sigma_{I_{d}}}\) in the quotient fan \(\Sigma/\sigma(\gamma)\). Note that we have
\[\operatorname{Vol}_{I_{d}\sqcup\sigma(\gamma)}=\operatorname{Vol}_{\overline{ I_{d}}}|\operatorname{Box}(\sigma(\gamma))|\]
hence (4.2) becomes
\[\sum_{|I_{d}|=\operatorname{rk}N-|\sigma(\gamma)|}(-1)^{\deg\gamma^{\vee}}\xi _{\gamma^{\vee},\gamma+v_{I_{d}},I_{d}\sqcup\sigma(\gamma)}=1.\]
If we perturb \(\gamma^{\vee}\) by \(\varepsilon v\), then it will fall in the interior of exactly one maximal cone in \(\Sigma\), and the corresponding coefficient \(\xi\) is the only nonzero term in the sum above (recall the definition of \(\xi_{c,d,I}\) in Theorem 2.4), which is equal to
\[(-1)^{\deg\gamma^{\vee}}(-1)^{\deg\gamma^{\vee}}=1\]
So the base case is proved.
Now we assume the equality (4.1) holds for all classes of degree less than \(m\). Since the cohomology \(H_{\gamma}\) is generated as an algebra by classes \(D_{k}\), it suffices to prove the identity
\[\sum_{\begin{subarray}{c}c\in C,d\in C^{\circ}\\ |I|=\operatorname{rk}N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}(2\pi \mathrm{i})^{|\sigma(\gamma)|}\chi\left(\frac{D_{k}P}{\widehat{\Gamma}_{ \gamma}},\frac{F_{I_{d}}}{\widehat{\Gamma}_{\gamma^{\vee}}}\right)D_{I_{c}}=D _{k}P\]
for each \(D_{k}P\) where \(P\in H_{\gamma}\) is of degree \(m-1\). Since \(D_{k}\) is skew-symmetric with respect to the \(\chi\) pairing, the above statement can be rewritten as
\[D_{k}P=-\sum_{\begin{subarray}{c}c\in C,d\in C^{\circ}\\ |I|=\operatorname{rk}N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}(2\pi \mathrm{i})^{|\sigma(\gamma)|}\chi\left(\frac{P}{\widehat{\Gamma}_{\gamma}}, \frac{D_{k}F_{I_{d}}}{\widehat{\Gamma}_{\gamma^{\vee}}}\right)D_{I_{c}}.\]
On the other hand, we can multiply the induction assumption for \(P\) by \(D_{k}\) to get
\[\sum_{\begin{subarray}{c}c\in C,d\in C^{\circ}\\ |I|=\operatorname{rk}N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}(2\pi \mathrm{i})^{|\sigma(\gamma)|}\chi\left(\frac{P}{\widehat{\Gamma}_{\gamma}}, \frac{F_{I_{d}}}{\widehat{\Gamma}_{\gamma^{\vee}}}\right)D_{k}\,D_{I_{c}}=D_{ k}P.\]
Compare these two identities. It suffices to show
\[0=\sum_{\begin{subarray}{c}c\in C,d\in C^{\circ}\\ |I|=\operatorname{rk}N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}\left(D_ {k}\cdot\frac{D_{I_{c}}}{\widehat{\Gamma}_{\gamma}}\right)\otimes\frac{F_{I_{ d}}}{\widehat{\Gamma}_{\gamma^{\vee}}} \tag{4.3}\] \[\qquad\qquad\qquad\qquad+\sum_{\begin{subarray}{c}c\in C,d\in C^ {\circ}\\ |I|=\operatorname{rk}N\end{subarray}}\xi_{c,d,I}\operatorname{Vol}_{I}\frac{D_ {I_{c}}}{\widehat{\Gamma}_{\gamma}}\otimes\left(D_{k}\cdot\frac{F_{I_{d}}}{ \widehat{\Gamma}_{\gamma^{\vee}}}\right)\]
which follows from Corollary 3.14.
**Remark 4.3**.: Theorem 4.2 implies, in particular, that the pairing of Theorem 2.4 is nondegenerate and is independent of \(v\). We are not aware of a direct proof of this fact.
We conclude this section with an explanation of our motivation behind the definition of the coefficients \(\xi_{c,d,I}\) in Theorem 2.4. This definition is inspired by the following fan-displacement resolution of the diagonal formula of Fulton-Sturmfels [6].
**Proposition 4.4**.: Let \(X\) be the toric variety corresponding to a _complete_ fan \(\Sigma\) in a lattice \(N\), and denote the diagonal embedding \(X\hookrightarrow X\times X\) by \(\delta\). Let \(\sigma\in\Sigma\) be any cone and \(v\) a generic point in \(N\); then the diagonal class decomposes as
\[[\delta(V(\sigma))]=\sum_{\sigma_{1},\sigma_{2}}m^{\sigma}_{\sigma_{1},\sigma _{2}}\cdot[V(\sigma_{1})\times V(\sigma_{2})]\]
where \(m^{\sigma}_{\sigma_{1},\sigma_{2}}=[N:N_{\sigma_{1}}+N_{\sigma_{2}}]\) and the sum is over all cones \(\sigma_{1},\sigma_{2}\in\Sigma\) with \(\operatorname{codim}\sigma_{1}+\operatorname{codim}\sigma_{2}=\operatorname{ codim}\sigma\) and \(\sigma\subseteq\sigma_{1},\sigma_{2}\) such that \((v+\sigma_{1})\cap\sigma_{2}\neq\emptyset\).
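As a purely illustrative example (not taken from [6]): for \(X=\mathbb{P}^{1}\), with fan in \(N=\mathbb{Z}\) generated by the rays \(\rho_{\pm}=\pm\mathbb{R}_{\geq 0}\), take \(\sigma=\{0\}\) and a generic \(v>0\). The only pairs \((\sigma_{1},\sigma_{2})\) of complementary codimension with \((v+\sigma_{1})\cap\sigma_{2}\neq\emptyset\) are \((\{0\},\rho_{+})\) and \((\rho_{-},\{0\})\), both with multiplicity \(1\), recovering the familiar decomposition \([\delta(X)]=[X\times\mathrm{pt}]+[\mathrm{pt}\times X]\).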
Note that the coefficient \(m^{\sigma}_{\sigma_{1},\sigma_{2}}\) is exactly the volume \(\operatorname{Vol}_{\sigma_{1}\cup\sigma_{2}}\) of the cone spanned by \(\sigma_{1}\) and \(\sigma_{2}\). This formula cannot be applied to our case directly, since the toric varieties considered there are complete while ours are not. Nevertheless, we have the following relationship between the definition of \(\xi_{c,d,I}\) and the conditions occurring in the Fulton-Sturmfels formula.
**Proposition 4.5**.: Let \(c,d\in\sigma_{I}\) and \(v\) be a generic point in \(C^{\circ}\). Then both \(c+\varepsilon v\) and \(d-\varepsilon v\) lie in \(\sigma_{I}^{\circ}\) for all sufficiently small \(\varepsilon>0\) if and only if
\[(v+\sigma(c))\cap\sigma(d)\neq\emptyset\]
where \(\sigma(c)\) denotes the minimal cone of \(\Sigma\) that contains \(c\).
Proof.: Assume both \(c+\varepsilon v\) and \(d-\varepsilon v\) lie in \(\sigma_{I}^{\circ}\). Then we can write \(c+\varepsilon v=\sum_{i\in I}s_{i}v_{i}\) where all \(s_{i}\in(0,1)\). Recalling that \(I=\sigma(c)\cup\sigma(d)=I_{c}\sqcup I_{d}\sqcup\sigma(\gamma(c))\), this equation can be rewritten into the form \(v=v_{1}-v_{2}\), where \(v_{1}\in\sigma(c)\) and \(v_{2}\in\sigma(d)\), which is equivalent to the second statement. The other direction can be proved similarly.
**Remark 4.6**.: We believe our methods should allow one to give a new proof of the Fulton-Sturmfels formula, which could be done by restricting our results to the twisted sectors that are compact. We do not go into details further in this paper.
## 5. Extensions and open questions
There is a more general version of the better-behaved GKZ systems which includes a parameter \(\beta\in N_{\mathbb{C}}\), with the \(\beta=0\) case being the one we have considered so far. Namely, the torus homogeneity equations of Definition 1.1 read
\[\sum_{i=1}^{n}\langle\mu,v_{i}\rangle x_{i}\partial_{i}\Phi_{c}+\langle\mu,c- \beta\rangle\Phi_{c}=0\]
and similarly for \(\Psi_{d}\). Much of what we did in this paper is applicable to the pair of better behaved GKZ systems with parameters \(\pm\beta\). For instance, we readily observe that our argument in Section 2 goes through for arbitrary parameter \(\beta\) to give a pairing between spaces of solutions to bbGKZ\((C,\beta)\) and bbGKZ\((C^{\circ},-\beta)\).
We would like to see what happens in the limit given by a regular subdivision \(\Sigma\) for a generic \(\beta\). While there are certain versions of \(H\) and \(H^{c}\) considered in [8] it will be easier for our purposes to simply write \(\operatorname{Vol}(\Delta)\) linearly independent solutions given by \(\Gamma\)-series, essentially along the lines of the solutions of the original GKZ paper [7].
Let \(\Sigma\) be a regular subdivision of \(C\). For each maximum-dimensional cone \(\sigma\) we consider \(\operatorname{Vol}(\sigma)\) linearly independent solutions in the large Kahler limit of \(\mathbb{P}_{\Sigma}\), in bijection with the elements \(\gamma\) of \(N/\sum_{i\in\sigma}\mathbb{Z}v_{i}\). Namely, we define the set \(L_{c,\gamma,\sigma;\beta}\subset\mathbb{C}^{n}\) by
\[\sum_{i=1}^{n}l_{i}v_{i}=\beta-c\]
and the properties \(l_{i}\in\mathbb{Z}\) for all \(i\not\in\sigma\) and \(c+\sum_{i\not\in\sigma}l_{i}v_{i}=-\gamma\mod\sum_{i\in\sigma}\mathbb{Z}v_{i}\). Then for each \(\gamma\) we define a solution \(\Phi^{\gamma,\sigma}\) of bbGKZ\((C,\beta)\) by
\[\Phi_{c}^{\gamma,\sigma}(x_{1},\dots,x_{n})=\sum_{l\in L_{c,\gamma,\sigma; \beta}}\prod_{i=1}^{n}\frac{x_{i}^{l_{i}}}{\Gamma(1+l_{i})}.\]
We define \(\Gamma\)-series solutions \(\Psi^{\gamma,\sigma}\) to bbGKZ\((C^{\circ},-\beta)\) in the same way by
\[\Psi^{\gamma,\sigma}_{d}(x_{1},\ldots,x_{n})=\sum_{l\in L_{d,\gamma,\sigma;- \beta}}\prod_{i=1}^{n}\frac{x_{i}^{l_{i}}}{\Gamma(1+l_{i})}.\]
Note that in the case of generic \(\beta\) every solution of bbGKZ\((C^{\circ},-\beta)\) can be uniquely extended to a solution of bbGKZ\((C,-\beta)\). It is not hard to show that these \(\Phi_{c}\) and \(\Psi_{d}\) converge uniformly on compacts in the region (3.2) for an appropriate choice of \(\hat{\psi}\). Moreover, as \(\sigma\) and \(\gamma\) vary, we get bases of the spaces of solutions, with linear independence assured by the fact that they lie in different eigenspaces of the monodromy operators for small loops around \(x_{i}=0\).
Monodromy considerations imply that for the pairing \(\langle\cdot,\cdot\rangle\) of Section 2 we have \(\langle\Phi^{\gamma,\sigma},\Psi^{\gamma^{\prime},\sigma^{\prime}}\rangle=0\) unless \(\sigma=\sigma^{\prime}\) and \(\gamma=-\gamma^{\prime}\mod\sum_{i\in\sigma}\mathbb{Z}v_{i}\). In the latter case, the constant contribution will happen for \(l_{i}+l_{i}^{\prime}=0\) for \(i\not\in I\) and \(l_{i}+l_{i}^{\prime}=-1\) for \(i\in I\). If any of \(l_{i},l_{i}^{\prime}\) is a negative integer, then the corresponding term vanishes, due to a pole of \(\Gamma\), so we may assume that they are nonnegative for \(i\not\in\sigma\), which then implies that
\[I=\sigma;\ l_{i}+l_{i}^{\prime}=-1,\ \text{for}\ i\in\sigma;\ l_{i}=l_{i}^{ \prime}=0\ \text{for}\ i\not\in\sigma.\]
This implies that \(c=-\gamma\mod\sum_{i\in\sigma}\mathbb{Z}v_{i}\) and \(d=\gamma\mod\sum_{i\in\sigma}\mathbb{Z}v_{i}\).
We claim that for any \(\gamma\) there exists exactly one pair \((c,d)\) in \(\sigma\) satisfying this constraint and \(\xi_{c,d,\sigma}\neq 0\). The definition of the coefficients \(\xi\) of the pairing implies that we must also have \(c+d=\sum_{i\in\sigma}v_{i}\) with \(c+\varepsilon v\) and \(d-\varepsilon v\) in the corresponding cone \(\sum_{i\in\sigma}\mathbb{R}_{\geq 0}v_{i}\) for all small \(\varepsilon>0\). We can write \(\beta\), \(v\) and \(\gamma\) uniquely as
\[\beta=\sum_{i\in\sigma}\beta_{i}v_{i},\ v=\sum_{i\in\sigma}s_{i}v_{i},\ \gamma=\sum_{i}\gamma_{i}v_{i}\]
with \(\gamma_{i}\in[0,1)\). It is then easy to see that \(\xi_{c,d,\sigma}\) is nonzero if and only if
\[c =\sum_{\{i:\gamma_{i}\neq 0\}}(1-\gamma_{i})v_{i}+\sum_{\{i: \gamma_{i}=0,s_{i}<0\}}v_{i},\] \[d =\sum_{\{i:\gamma_{i}\neq 0\}}\gamma_{i}v_{i}+\sum_{\{i: \gamma_{i}=0,s_{i}>0\}}v_{i}.\]
Thus for \(\gamma_{i}\neq 0\) we have \(l_{i}=\beta_{i}-1+\gamma_{i},\ l_{i}^{\prime}=-\beta_{i}-\gamma_{i}.\) For \(\gamma_{i}=0\) and \(s_{i}>0\) we have \(l_{i}=\beta_{i},\ l_{i}^{\prime}=-1-\beta_{i}\) and for \(\gamma_{i}=0\) and \(s_{i}<0\) we have \(l_{i}=-1+\beta_{i},\ l_{i}^{\prime}=-\beta_{i}.\) In particular,
\[\deg(c)=-\deg(\gamma)+\text{rk}N-\#\{i:\gamma_{i}=0,s_{i}>0\}.\]
Therefore the pairing is given by
\[\langle\Phi^{\gamma,\sigma},\Psi^{-\gamma,\sigma}\rangle =(-1)^{\deg(c)}\operatorname{Vol}(\sigma)\prod_{\gamma_{i}\neq 0} \frac{1}{\Gamma(\beta_{i}+\gamma_{i})\Gamma(1-\beta_{i}-\gamma_{i})}\] \[\prod_{\gamma_{i}=0,s_{i}>0}\frac{1}{\Gamma(1+\beta_{i})\Gamma(- \beta_{i})}\prod_{\gamma_{i}=0,s_{i}<0}\frac{1}{\Gamma(\beta_{i})\Gamma(1- \beta_{i})}\] \[=(-1)^{\deg(c)}\operatorname{Vol}(\sigma)\prod_{\gamma_{i}\neq 0} \frac{\mathrm{e}^{2\pi\mathrm{i}(\beta_{i}+\gamma_{i})}-1}{2\pi\mathrm{i}\, \mathrm{e}^{\pi\mathrm{i}(\beta_{i}+\gamma_{i})}}\] \[\prod_{\gamma_{i}=0,s_{i}>0}\frac{\mathrm{e}^{2\pi\mathrm{i}( \beta_{i}+1)}-1}{2\pi\mathrm{i}\,\mathrm{e}^{\pi\mathrm{i}(\beta_{i}+1)}}\prod _{\gamma_{i}=0,s_{i}<0}\frac{\mathrm{e}^{2\pi\mathrm{i}\beta_{i}}-1}{2\pi \mathrm{i}\,\mathrm{e}^{\pi\mathrm{i}\beta_{i}}}\] \[=\frac{(-1)^{\deg(c)}\operatorname{Vol}(\sigma)}{(2\pi\mathrm{i}) ^{\mathrm{rk}N}}\mathrm{e}^{-\pi\mathrm{i}\sum_{i\in\sigma}(\beta_{i}+\gamma _{i})}\prod_{\gamma_{i}=0,s_{i}>0}\mathrm{e}^{\pi\mathrm{i}}\prod_{i\in\sigma} (\mathrm{e}^{2\pi\mathrm{i}(\beta_{i}+\gamma_{i})}-1)\] \[=\frac{\operatorname{Vol}(\sigma)}{(2\pi\mathrm{i})^{\mathrm{rk} N}}\mathrm{e}^{-\pi\mathrm{i}\deg(\beta)-2\pi\mathrm{i}\deg(\gamma)}\prod_{i\in \sigma}(1-\mathrm{e}^{2\pi\mathrm{i}(\beta_{i}+\gamma_{i})})\] \[=\frac{e^{-\pi\mathrm{i}\deg(\beta)}\operatorname{Vol}(\sigma)}{ (2\pi\mathrm{i})^{\mathrm{rk}N}}\prod_{i\in\sigma}(1-e^{2\pi\mathrm{i}(\beta_ {i}+\gamma_{i})}).\]
**Remark 5.1**.: An immediate consequence of the above calculation is that the pairing \(\langle\cdot,\cdot\rangle\) is non-degenerate for a generic \(\beta\).
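The closed form just obtained can also be verified numerically. The following sketch (an illustrative aside assuming the mpmath library; the data \(\operatorname{rk}N=3\), \(\beta_{i}\), \(\gamma_{i}\), the signs \(s_{i}\) and \(\operatorname{Vol}(\sigma)\) are chosen arbitrarily, with \(\sum_{i}\gamma_{i}\) an integer as in the lattice setting) compares the product of reciprocal Gamma factors from the first line of the computation above with the final expression.

```python
# Numerical check of the closed form for <Phi^{gamma,sigma}, Psi^{-gamma,sigma}> at generic beta.
# Illustrative only: all combinatorial data below are chosen arbitrarily.
from mpmath import gamma, exp, pi, mpc, mp

mp.dps = 30
I = mpc(0, 1)

beta = [mpc(0.13, 0.21), mpc(-0.07, 0.05), mpc(0.31, -0.11)]  # generic beta_i
gam  = [1.0 / 3, 2.0 / 3, 0.0]                                 # gamma_i, summing to an integer
s    = [1.0, -1.0, 1.0]                                        # the sign s_i only matters where gamma_i = 0
vol  = 3                                                       # Vol(sigma)
rk   = len(beta)

deg_c = sum(1 - g for g in gam if g != 0) + sum(1 for g, si in zip(gam, s) if g == 0 and si < 0)

lhs = (-1) ** round(deg_c) * vol
for b, g, si in zip(beta, gam, s):
    if g != 0:
        lhs /= gamma(b + g) * gamma(1 - b - g)
    elif si > 0:
        lhs /= gamma(1 + b) * gamma(-b)
    else:
        lhs /= gamma(b) * gamma(1 - b)

rhs = exp(-pi * I * sum(beta)) * vol / (2 * pi * I) ** rk
for b, g in zip(beta, gam):
    rhs *= 1 - exp(2 * pi * I * (b + g))

print(abs(lhs - rhs))  # ~ 0 up to the working precision
```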
**Further directions.** We conclude this section by stating some open problems related to our construction, in no particular order.
* Is the pairing of this paper nondegenerate for all \(\beta\)? We know this to be the case for \(\beta=0\) and \(\beta\) generic, and it seems likely to be always true.
* We would like to settle the analytic continuation conjecture of [1] to extend the main result of [4] to the better-behaved GKZ systems. One consequence of Theorem 4.2 is that it should be enough to just work with the usual \(K\)-theory and the compactly supported version should follow from duality.
* What is the HMS counterpart of our pairing from the point of view of Fukaya-Seidel categories for the mirror potential? Our formula for the pairing is quite simple, so presumably the mirror version of it should be simple as well. We refer to [5], [11] for background.
* Solutions to bbGKZ systems come with a lattice structure inherited from the \(K\)-theory of \(\mathbb{P}_{\Sigma}\) (it is independent of \(\Sigma\)). Can this structure be locally defined outside of the region of convergence of any \(\Gamma\)-series?
|
2305.12199 | VNHSGE: VietNamese High School Graduation Examination Dataset for Large
Language Models | The VNHSGE (VietNamese High School Graduation Examination) dataset, developed
exclusively for evaluating large language models (LLMs), is introduced in this
article. The dataset, which covers nine subjects, was generated from the
Vietnamese National High School Graduation Examination and comparable tests.
300 literary essays have been included, and there are over 19,000
multiple-choice questions on a range of topics. The dataset assesses LLMs in
multitasking situations such as question answering, text generation, reading
comprehension, visual question answering, and more by including both textual
data and accompanying images. Using ChatGPT and BingChat, we evaluated LLMs on
the VNHSGE dataset and contrasted their performance with that of Vietnamese
students to see how well they performed. The results show that ChatGPT and
BingChat both perform at a human level in a number of areas, including
literature, English, history, geography, and civics education. They still have
space to grow, though, especially in the areas of mathematics, physics,
chemistry, and biology. The VNHSGE dataset seeks to provide an adequate
benchmark for assessing the abilities of LLMs with its wide-ranging coverage
and variety of activities. We intend to promote future developments in the
creation of LLMs by making this dataset available to the scientific community,
especially in resolving LLMs' limits in disciplines involving mathematics and
the natural sciences. | Xuan-Quy Dao, Ngoc-Bich Le, The-Duy Vo, Xuan-Dung Phan, Bac-Bien Ngo, Van-Tien Nguyen, Thi-My-Thanh Nguyen, Hong-Phuoc Nguyen | 2023-05-20T14:13:08Z | http://arxiv.org/abs/2305.12199v1 | # Vnhsge: VietNamese High School Graduation Examination Dataset for Large Language Models
###### Abstract
The VNHSGE (VietNamese High School Graduation Examination) dataset, developed exclusively for evaluating large language models (LLMs), is introduced in this article. The dataset, which covers nine subjects, was generated from the Vietnamese National High School Graduation Examination and comparable tests. 300 literary essays have been included, and there are over 19,000 multiple-choice questions on a range of topics. The dataset assesses LLMs in multitasking situations such as question answering, text generation, reading comprehension, visual question answering, and more by including both textual data and accompanying images. Using ChatGPT and BingChat, we evaluated LLMs on the VNHSGE dataset and contrasted their performance with that of Vietnamese students to see how well they performed. The results show that ChatGPT and BingChat both perform at a human level in a number of areas, including literature, English, history, geography, and civics education. They still have space to grow, though, especially in the areas of mathematics, physics, chemistry, and biology. The VNHSGE dataset seeks to provide an adequate benchmark for assessing the abilities of LLMs with its wide-ranging coverage and variety of activities. We intend to promote future developments in the creation of LLMs by making this dataset available to the scientific community, especially in resolving LLMs' limits in disciplines involving mathematics and the natural sciences.
GPT-3.5, GPT-4, ChatGPT, Bing AI Chat, large language models, dataset, Vietnamese high school graduation examination [https://github.com/Xdao85/Vnhsge](https://github.com/Xdao85/Vnhsge)
## 1 Introduction
Artificial intelligence (AI) has the potential to revolutionize the educational system. According to Chassignol et al. [1], four areas--customized educational content, cutting-edge teaching strategies, technology-enhanced evaluation, and communication between students and teachers--are where AI can revolutionize the educational environment. An overview of AI applications in higher education has been offered by Zawacki-Richter et al. [2], spanning profiling and prediction, evaluation and assessment, adaptive systems and personalization, and intelligent tutoring systems. Potential research subjects in AI applications for education have been suggested by Hwang et al. [3]. In order to enable effective administrative operations, content modification, and enhanced learning quality, Chen et al. [4] have concentrated on the use of AI in administration, instruction, and learning. The potential of generative AI in education to lessen workload and increase learner engagement in online learning has been highlighted by Dao et al. [5]. Finally, Nguyen et al. [6] have suggested a platform for online learning that incorporates a Vietnamese virtual assistant to help teachers present lectures to students and to make editing simple without the requirement for video recording.
AI can already understand and communicate with humans, thanks to recent advancements in large language models (LLMs), which open up opportunities for its use in education. LLMs have shown great potential in the fields of education, content development, and language translation. The two primary architectures of LLMs are BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer). In 2018, Google introduced BERT [7], which has excelled in various natural language processing (NLP) tasks. The GPT algorithm, developed by OpenAI [8], was trained on extensive unlabeled text datasets. Facebook's RoBERTa [9] continued Google's research, and Google released T5 [10] in 2019. In 2020, OpenAI created GPT-3 [11], which demonstrated exceptional performance in various NLP tasks. Recently, OpenAI developed GPT-4 [12], a text-to-text machine learning system capable of processing both text and image inputs. GPT-4 has shown human-level performance in many professional and academic criteria, although it may not perform as well as humans in other contexts.
With the introduction of ChatGPT by OpenAI and Bing AI Chat (BingChat) by Microsoft, which have been widely used in professions including marketing, medicine, law, and education, the popularity of AI has increased dramatically. Among the most frequent users of these applications are students. Although it is becoming more common, using LLMs in teaching certainly has drawbacks. Due to their widespread use, LLMs like ChatGPT and BingChat can quickly spread false information, with serious consequences [13]. Since LLMs must be used in daily life, countermeasures must be put in place [14]. A dataset can be created using real-world assessments, such as the Vietnamese National High School Graduation Examination (VNHSGE), to assess the capabilities of LLMs in the field of education. The evaluation's findings will give teachers a basis on which to build instructional plans for AI applications.
In this article, we present the VNHSGE dataset, which is built from VNHSGE exams and other exams of a similar nature. The nine subjects covered by this dataset are mathematics, literature, English, physics, chemistry, biology, history, geography, and civic education. Specifically, the VNHSGE dataset consists of 300 essays on literature and 19,000 multiple-choice questions on the other topics. We hope that researchers and educators can benefit greatly from the VNHSGE dataset in training and assessing LLMs. Additionally, we will annually update the VNHSGE dataset to reflect the most recent and pertinent data.
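To make the intended usage concrete, the following minimal sketch shows how such a collection of multiple-choice questions might be loaded and summarized in Python. The file name and the record fields (`subject`, `question`, `choices`, `answer`) are hypothetical placeholders for illustration only, not the actual layout of the released files.

```python
# Minimal sketch for loading and summarizing a multiple-choice question file.
# NOTE: the file name and field names are hypothetical placeholders,
# not the actual layout of the released VNHSGE dataset.
import json
from collections import Counter

with open("vnhsge_multiple_choice.json", encoding="utf-8") as f:
    questions = json.load(f)  # assumed: a list of {"subject", "question", "choices", "answer"} records

per_subject = Counter(q["subject"] for q in questions)
print("total questions:", len(questions))
for subject, count in per_subject.most_common():
    print(f"{subject}: {count}")
```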
## 2 Related Work
### Datasets for training large language models
Large datasets are needed for pre-training and fine-tuning LLMs in order to attain their excellent performance in NLP applications. GLUE [15] is designed to assess the generalizability of language models, while DecaNLP [16] is used to train multi-task NLP models. CLUE [17] is a comprehensive benchmark that includes tasks like named entity recognition and text classification. The MMLU dataset [18] tests models in zero-shot and few-shot environments to assess their pretraining knowledge acquisition in 57 subjects in STEM, the humanities, social sciences, and more.
The accuracy and reasoning skills of LLMs are severely tested by datasets. SQuAD [19] provides a sizable dataset for automatic question-answering research, enhancing the precision of deep learning models with 100,000 labeled questions and answers from many domains. For NLP models, especially next-word prediction models, LAMBADA [20] presents challenging questions that call for in-depth comprehension of the text and next-word prediction to deliver an answer. The 96,000 reading comprehension and math problems in DROP [21] demand a mechanism to resolve references inside the question, necessitating a thorough comprehension of paragraph content. HellaSwag [22] assesses a model's capacity for deductive reasoning in scenarios that go against popular belief. WinoGrande [23] is a benchmark dataset created to assess how well NLP models can resolve complex pronoun references in context using common sense reasoning, which calls for a thorough comprehension of real language and reasoning.
The evaluation of machine comprehension and language understanding across many languages is supported by a number of datasets that concentrate on cross-lingual understanding. MLQA [24] assesses cross-lingual question-answering systems in seven different languages. XQuAD [25] is a parallel dataset in eleven languages that assesses cross-lingual question answering performance. MKQA [26] is an open-domain question answering evaluation set that consists of 10k question-answer pairings in twenty-six topologically different languages. XGLUE [27] is a multilingual benchmark that assesses how well language models perform on a range of cross-lingual understanding tasks such as text classification, question answering, and machine translation in eleven languages. These datasets offer a useful tool for assessing how well machine understanding models perform across languages. MKQA [26] and XGLUE [27] offer a wider variety of tasks for evaluating cross-lingual understanding, whereas MLQA [24] and XQuAD [25] concentrate on measuring cross-lingual question answering performance. The availability of datasets in several languages can help in the creation of more thorough machine comprehension models that can function in a variety of linguistic contexts, bringing up fresh directions for study and development in NLP.
LLMs have difficulties when dealing with a variety of datasets that cover many areas. CoQA [28] poses a unique challenge for big language models due to the conversational character of the questions and the responses, which can be free-form text and include texts from seven different domains. PILE [29] is a large dataset of approximately 800GB of text from many sources, including books, online pages, and scientific publications. ScienceQA [30] is a great tool for creating machine comprehension models for scientific domains because it comprises a variety of natural science, language science, and social science questions.
For question answering systems, there are numerous datasets available, each concentrating on a distinct subject. MATH [31] contains 12,500 difficult competition math problems with detailed solutions that allow models to produce answer derivations and justifications. GSM-8K [32] focuses on grade-school mathematics and covers a range of mathematical topics. Questions about biomedical research and medical scientific papers are included in BioASQ [33] and are categorized by level of difficulty. TQA [34] combines the machine comprehension and visual question-answering paradigms for middle school science classes. SWAG [35] challenges grounded commonsense inference, combining natural language inference and physically grounded reasoning. PIQA [36] was developed as a commonsense reasoning dataset to examine the physical knowledge of current NLP models. PROST [37] is intended to test both causal and masked language models in a zero-shot environment. JEC-QA [38] (Chinese) and CaseHOLD [39] are legal-domain datasets.
In the discipline of NLP, large datasets are necessary for the development and assessment of machine learning models. The internet is a source of data for many question-answering datasets, particularly websites like Wikipedia and search engines like Google. The answers in WebQuestions [40] are all defined as Freebase entities, with Freebase serving as the knowledge base. Each question in WikiQA [41] links to a possibly related Wikipedia page, and sentences from the summary section of the page are utilized as candidate answers. TriviaQA [42] consists of 950K question-answer pairs drawn from 662K documents on the web and in Wikipedia. Because the context for each question is quite lengthy, span prediction may not be able to reliably produce the answers. One million pairs of questions and passages drawn from actual search queries are provided by MS MARCO [43], which is updated on a regular basis with fresh search queries. Real-world, user-generated queries from Google.com and related Wikipedia pages are included in the NQ dataset [44]. In these datasets, the accuracy and completeness of the underlying Wikipedia pages determine how accurate and thorough the answers are, so the information presented may contain mistakes or be incomplete. These datasets offer researchers useful tools for creating and enhancing machine learning models for question answering, with a variety of challenges and opportunities for advancement in the field.
Overall, these datasets provide valuable resources for evaluating LLMs in various tasks such as question answering, language modeling, text generation, reading comprehension, among others.
### Datasets from the exams for training large language models
LLMs are increasingly being used; hence it is critical to assess their dependability and performance. Due to the richness and diversity of language usage in these datasets, language model evaluation using test datasets has acquired significance. Due to the high cost of data generation by human experts, existing exam datasets like the NTCIR QA Lab [45], the Entrance Exams task at the CLEF QA Track [46], [47], and the AI2 Elementary School Science Questions dataset [48] have not been adequate for training advanced data-driven machine reading models. As a result, larger and more varied exam datasets are essential for LLM training and evaluation. RACE [49] is one such dataset that has drawn interest. RACE is a dataset for automated reading comprehension with RACE-M and RACE-H, two subgroups from middle school and high school tests, respectively.
Exam datasets are increasingly being used to evaluate LLMs, and the current datasets present interesting evaluation issues. The creation of novel test datasets, like the proposed Vietnamese High School Graduation Examination Dataset for LLMs, can improve the assessment of LLMs and guarantee their dependability in a variety of contexts. Using test datasets offers a demanding and varied evaluation of LLMs, which is essential for their usage in real-world applications. The creation of fresh test datasets can improve the evaluation procedure and increase the dependability of LLMs across a range of applications.
### Datasets from high school exams for training large language models
Despite the fact that there are few datasets that concentrate on using high school subject exams to assess LLMs, there are still some datasets that contain high school exam questions that can be utilized for this purpose. GeoS [50] is intended for automatic math problem-solving. It includes SAT plane geometry questions from prior real SAT examinations and practice tests, each with a diagram and multiple-choice answers. Another dataset that includes multiple-choice questions from academic exams that range from grade 3 to grade 9 and need reasoning to answer is ARC [51]. The dataset was split into two parts, Easy and Challenge, with the latter comprising trickier problems. A supporting knowledge library of 14.3 million unstructured text passages is also included. SuperGLUE [52] is a more difficult benchmark whose tasks involve intricate reasoning and common sense; some of them require answering questions based on high school science passages.
These datasets can still be used to assess language models' capacity to perceive and analyze natural language. By evaluating language models against high school-level content, researchers can gain a deeper understanding of their strengths and limitations and devise ways to enhance their performance. These datasets therefore offer a variety of tasks and subject areas pertinent to high school education that can be used to assess LLMs.
### Our proposed dataset
To begin with, we conducted a search for available datasets in the "texts" category that are relevant to the question answering task, as well as datasets that support the Vietnamese language. Our search was carried out on Papers with Code as well as in previous studies. Table 1 displays the available datasets. We found that the majority of datasets consist of English texts, with only a few supporting Vietnamese. The most popular subjects are English, mathematics, and physics, while other subjects have relatively fewer related datasets (see Appendix section A for further details).
It is essential to have datasets that contain questions at a high level of inference and cover a wide variety of topics, because LLMs exhibit impressive human-level performance in several domains [12]. Utilizing real exam data, as in MMLU [18] and ScienceQA [30], is one approach to creating such datasets. However, only a few datasets available today focus primarily on using actual examinations to assess LLMs. To assess LLMs' capacity to understand and reason about natural language and challenging high school-level problems, the authors of this article created the VNHSGE dataset from the Vietnam National High School Graduation Examination. This dataset can also help educators decide whether and how to employ LLMs in their teaching strategies.
There is a chance that LLMs could give students misleading or erroneous information [13], [59], or [60] as they become more prominent in our daily lives. To solve this, it is essential that educators have access to databases that can accurately assess LLM models' capabilities and help them decide whether to use or reject them in their teaching strategies [61]. The VNHSGE dataset is created with this goal in mind, ensuring that LLMs give students reliable and secure information.
## 3 Vietnamese National High School Graduation Examination
The official and illustrative exam questions from the VNHSGE exams are all included in the VNHSGE dataset, which was compiled by high school instructors and the Vietnamese Ministry of Education and Training (VMET). It includes test questions from 2019-2023, covering a wide range of disciplines like mathematics, literature, English, physics, chemistry, biology, history, geography, and civic education.
Based on Bloom's taxonomy, the VNHSGE tests have varying degrees of difficulty, from knowledge-based questions that test fundamental comprehension to high-application questions that gauge one's capacity for in-depth analysis and information synthesis in the context of solving challenging situations. Knowledge (easy), comprehension (intermediate),
\begin{table}
\begin{tabular}{|c|c|} \hline
**Subjects** & **Dataset** \\ \hline Mathematics & Mathematics [53], MMLU [18], MATH [31] and GSM8K [32] \\ \hline Literature & SCROLLS [54] and TAPE [55] \\ \hline English & RACE [49], MLQA [24], SuperGLUE [52], and DREAM [56] \\ \hline Physics & TQA [34], SWAG [35], PIQA [36], MMLU [18], PROST [37], and ScienceQA [30] \\ \hline Chemistry & SciQ [57], MMLU [18], and ScienceQA [30] \\ \hline Biology & BioASQ [33], SciQ [57], MMLU [18], and ScienceQA [30] \\ \hline History & MMLU [18] and ScienceQA [30] \\ \hline Geography & MMLU [18], GeoTSQA [58], and ScienceQA [30] \\ \hline Civic Education & JEC-QA [38], and CaseHOLD [39] \\ \hline Vietnamese & MLQA [24], XQuAD [25], and MKQA [26] \\ \hline \end{tabular}
\end{table}
Table 1: Related datasets
application (difficult), and high application (extremely tough) are the four levels of complexity. We may learn more about LLM's capabilities for complicated reasoning as well as its strengths and shortcomings in dealing with various high school levels by evaluating its performance over a range of difficulty levels.
The exam comprises three primary subjects (mathematics, literature, and English) and two combinations: the natural science combination of physics, chemistry, and biology, and the social science combination of history, geography, and civic education.
Table 2 lists the subjects assessed with multiple-choice questions. Each mathematics and English exam contains 50 questions, while each exam in the other multiple-choice subjects contains 40 questions. The dataset encompasses a wide range of disciplines and calls for a variety of abilities, from arithmetic to sophisticated reasoning.
The literature exam is a structured assessment of a student's reading and writing abilities. Reading comprehension is tested in Part I, while writing skills are tested in Part II. Four questions in Part I ask students to examine and interpret an essay or poem, including determining the genre and any words or phrases that have particular meanings. The final question asks students to express their own view of the text or to evaluate a view expressed about it. Part II includes two essay questions, one requiring a social argumentative essay and the other a literary argumentative essay. The essay questions test a student's ability to construct a coherent and concise argument, support it with evidence, and analyze and interpret literary materials in order to develop a well-supported argument. The literature exam thus offers a thorough assessment of a student's writing and reading comprehension abilities.
The score distribution indicates how candidates scored in the exams. Every year, VMET publishes the score distribution as a chart for each subject. It is used to evaluate the competency of the candidates and to assess the exams according to their degree of difficulty. We gathered the score distributions from 2019 to 2022, so the capability of LLMs can be assessed by contrasting their outcomes with those of Vietnamese students (see Appendix section D for a detailed breakdown of the score distribution and a comparison of LLMs' performance). The average score (AVS) and most reached score (MVS) of the Vietnamese students are presented in Table 3 for a simpler comparison of the LLMs' performance. For instance, in 2019 the AVS and MVS for mathematics are 5.64 and 6.4, respectively.
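For the multiple-choice subjects, comparing an LLM with these score distributions only requires mapping its number of correct answers onto the 10-point exam scale. A minimal sketch follows, under the illustrative assumption that every multiple-choice question carries equal weight:

```python
def accuracy_to_score(correct, total):
    """Map the number of correct answers onto the Vietnamese 10-point exam scale,
    assuming all multiple-choice questions are weighted equally (illustrative)."""
    return 10.0 * correct / total


# A model answering 28 of 40 physics questions correctly maps to a score of 7.0,
# which can then be compared with the published AVS and MVS for that year.
print(accuracy_to_score(28, 40))  # 7.0
```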
\begin{table}
\begin{tabular}{|c|p{284.5pt}|} \hline
**Subjects** & **Topics** \\ \hline Mathematics & spatial geometry, number series (arithmetic progression, geometric progression), combinations and probability, derivatives and applications, exponential and logarithmic functions, primitives and integrals, complex numbers, polyhedrons, rotating blocks, and Oxyz spatial calculus \\ \hline English & pronunciation and stress, grammar, vocabulary, communication, reading fill-in-the-blank, reading comprehension, and writing skills \\ \hline Physics & mechanical oscillations, mechanical waves, alternating current, electromagnetic oscillations and waves, light waves, quantum light, atomic nucleus, electric charge and field, direct current, electromagnetic induction, and light refraction \\ \hline Chemistry & theory of metals, alkali metals, alkaline-earth metals, aluminum, iron, inorganic and organic compounds, esters, lipids, amines, amino acids, and proteins, carbohydrates, polymers, and polymer materials \\ \hline Biology & mechanisms of inheritance and mutation, laws of genetics, population genetics, applications of genetics, human genetics, evolution, ecology, plant organismal biology, and animal organismal biology \\ \hline History & World histories: Soviet Union, Eastern European, Russian Federation; Asian, African, and Latin American; United States, Western Europe, and Japan; international relations, scientific and industrial globalization networks; new world order after World War II. Vietnamese histories: 1884-1914, 1919-1930, 1930-1945, 1945-1954, 1954-1975, and 1975-2000 periods. \\ \hline Geography & geographical skills: atlas use, data table interpretation, and chart analysis; geographical theory: natural geography, population geography, economic sector geography, economic zone geography, sea geography, and island geography. \\ \hline Civic Education & legal frameworks and regulations, fundamental rights of citizens, democratic principles and concepts, as well as case studies \\ \hline \end{tabular}
\end{table}
Table 2: Subjects use multiple-choice questions
## 4 Collection Methods
Any research project must start with the gathering of raw data, and for this study we obtained our data from free public websites in Vietnam. We carefully selected and organized the gathered information into a new dataset of questions from the VNHSGE and similar exams. We specifically used the illustrative exam questions that VMET publishes every year; these are made available to students and teachers to give them a general idea of the content and structure of the official exam. In addition to the illustrative exam questions, we gathered the official exam questions from VMET. VMET produces a brief answer key after each exam, and teachers then supply more thorough solutions. We have also included similar exam questions created by instructors and high schools around Vietnam. This strategy guarantees that our dataset has a wide variety of questions covering a broad range of subjects and degrees of difficulty. Our dataset contains exam questions, answers, and thorough step-by-step explanations (see Appendix section B.1 for a raw data example) that have all been meticulously examined and validated by our team of subject matter experts. Instead of employing Amazon Mechanical Turk, as some earlier datasets did, the detailed explanations are provided by qualified teachers.
The extensive dataset gathered for this study offers a great chance to assess how well LLMs complete Vietnamese national tests. Our dataset's vast variety of themes and levels of difficulty provide a thorough assessment of the LLMs' accuracy and deductive reasoning abilities when responding to various questions. We may learn important lessons about the benefits and drawbacks of LLMs in handling actual tests by utilizing this dataset, which can guide further study and advancement in this area.
## 5 Dataset
The dataset is available in Word format and JSON format. In addition, we provide the dataset in Vietnamese and English (VNHSGE-V and VNHSGE-E). The dataset was originally written in Vietnamese. Using GPT-4/ChatGPT, the dataset is translated into English, similar to how OpenAI tests the capability of GPT-4 [12] in other languages by using Azure Translate to translate the MMLU benchmark [18]. Language models can handle several languages, as is well recognized; however, if an LLM does not support Vietnamese, the English version VNHSGE-E can be used. Similar strategies could be employed for additional languages using GPT-4/ChatGPT, BingChat/Azure Translate, or Google Translate.
### Format
In the VNHSGE dataset, we convert formulas, equations, tables, images, and charts from raw formats such as Word, PDF, and HTML into a text-only format plus an image folder, in three steps: (1) collecting the raw data and converting it into Word format, (2) transforming symbols, formulas, and equations into LaTeX format, and (3) converting the Word format into JSON format (see Appendix section B for a step-by-step description of the conversion).
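As a minimal sketch of step (3), the following Python snippet packs one question parsed from the Word file into a JSON record using the field names of the JSON format described below; the helper name and output path are illustrative and not part of the released tooling:

```python
import json


def to_json_record(qid, question, choice, explanation,
                   question_images="", explanation_images=""):
    """Pack one exam question into a VNHSGE-style JSON record.

    Field names follow the dataset schema: ID (question id), IQ (image shown
    with the question), Q (question text with LaTeX), C (correct choice),
    IE (images used in the explanation), E (step-by-step explanation).
    """
    return {
        "ID": qid,
        "IQ": question_images,
        "Q": question,
        "C": choice,
        "IE": explanation_images,
        "E": explanation,
    }


record = to_json_record(
    qid="Q1",
    question="1) The volume of a cube with edge 2a is:\nA. 8a^3.\nB. 2a^3.\nC. a^3.\nD. 6a^3.",
    choice="A",
    explanation="The volume of a cube with edge 2a is: V=(2a)^3=8a^3.",
)

# Write a one-question exam file (illustrative path).
with open("math_exam_example.json", "w", encoding="utf-8") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)
```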
#### 5.1.1 Word format
We transform the symbols, equations, and formulas into text using the LaTeX format so that the dataset is compatible with transformer-based LLMs such as BERT or GPT. For those who lack programming skills, we also offer the text format as a Word file for evaluating the performance of LLMs. In this situation, the VNHSGE dataset can be thought of as a question bank for assessing LLMs over a range of subjects; conversational LLMs such as ChatGPT and BingChat are typically more appropriate here. Note that symbols, formulas, and equations are already converted to text in the Word file; in this setting we simply pose the questions to the LLMs and receive their responses.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-19} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Math**} & \multicolumn{2}{c|}{**Lit**} & \multicolumn{2}{c|}{**Eng**} & \multicolumn{2}{c|}{**Phy**} & \multicolumn{2}{c|}{**Che**} & \multicolumn{2}{c|}{**Bio**} & \multicolumn{2}{c|}{**His**} & \multicolumn{2}{c|}{**Geo**} & \multicolumn{2}{c|}{**Civ**} \\ \hline & AVS & MVS & AVS & MVS & AVS & MVS & AVS & MVS & AVS & MVS & AVS & MVS & AVS & MVS & AVS & MVS & AVS & MVS \\ \hline
**2019** & 5.64 & 6.4 & 5.49 & 6 & 4.36 & 3.2 & 5.57 & 6.25 & 5.35 & 6 & 4.68 & 4.5 & 4.3 & 3.75 & 6 & 6 & 7.37 & 7.75 \\ \hline
**2020** & 6.67 & 7.8 & 6.61 & 7 & 4.58 & 3.4 & 6.72 & 7.75 & 6.71 & 7.75 & 5.6 & 5.25 & 5.19 & 4.5 & 6.78 & 7.25 & 8.14 & 8.75 \\ \hline
**2021** & 6.61 & 7.8 & 6.47 & 7 & 5.84 & 4 & 6.56 & 7.5 & 6.63 & 7.75 & 5.51 & 5.25 & 4.97 & 4 & 6.96 & 7 & 8.37 & 9.25 \\ \hline
**2022** & 6.47 & 7.8 & 6.51 & 7 & 5.15 & 3.8 & 6.72 & 7.25 & 6.7 & 8 & 5.02 & 4.5 & 6.34 & 7 & 6.68 & 7 & 8.03 & 8.5 \\ \hline \end{tabular}
\end{table}
Table 3: Average score and Most reached score of Vietnamese students
**Question**: Let $y=f(x)$ be a cubic function with the graph shown in the picture.
The number of real solutions of the equation $|f(x^{3}-3x)|=\frac{2}{3}$ is:
A. 6. B. 10. C. 3. D. 9.
**Solution**: From the graph of the function $y=f(x)$, we deduce that the graph of the function $y=|f(x)|$ is:
#### 5.1.2 JSON format
We adopt the JSON format for the VNHSGE dataset because it is well suited to LLM training, testing, and evaluation. The JSON format makes it simple to access and process textual information together with structure- and content-related metadata, and it can store and represent a variety of text data, including formulas, equations, tables, and references to images, in a flexible and extensible manner. In general, the use of the JSON format makes the VNHSGE dataset compatible with a variety of LLMs and simplifies their training, testing, and evaluation.
{
"ID": "Q1",
"IQ": "Math/Q1_1.png",
"Q": "Let Sys=f(x)S be a cubic function with the graph shown in the picture. unThe number of real solutions of the equation Slf(x*3-3 x)l=frac{2}{3}S is:unA. 6. unB. 10. unC. 3. unD. 9.",
"C": "B",
"IE": "Math/Q1_2.png, Q1_3.png",
"E": "From the graph of the function Sys=f(x)S, we deduce that the graph of the function Sys=lf(x)S is:unSetting S=x*3-3xS, we have Slf(x*3-3x)l=frac{2}{3} Left-rightarrow lf(t)=ffrac{2}{3}S. unFrom the above graph, we conclude that the equation Slf(t)=ffrac{2}{3}S has six distinct solutions S=t_{i}S (with S=overline{1,6}S and S(t_{1}<-2; 2<t_{2}, t_{3}<2; t_{4}, t_{5}, t_{6}>2)S. Considering the function S(x)=x*{3}-3x, we have Sf^{\(\wedge\)}(prime)(x)=0 Leftrightarrow x= \(\pm\)pm1S. The sign variation table of S(x)S is:
\begin{tabular}{|c|c c c c c|} \hline \(x\) & \(-\infty\) & \(-1\) & \(1\) & \(+\infty\) \\ \hline \(f^{\prime}(x)\) & & \(+\) & \(\dot{0}\) & \(-\) & \(\dot{0}\) & \(+\) \\ \hline \(f(x)\) & & & \(2\) & \(0\) & \(+\infty\) \\ \hline \end{tabular} Based on the table of variations, we have:
* The equation Sx*{3}-3x=t_{1}S has one solution (since S(t_{1}<-2)S).
* Each equation Sx*{3}-3x=t_{2}, x*{3}-3x=t_{3}S has three distinct solutions (since S-2<t_{2}, t_{3}<2S).
* Each equation Sx*{3}-3x=t_{4}, x*{3}-3x=t_{5}, x*{3}-3x=t_{6}S has one solution (since S{t_{4}, t_{5}, t_{6}>2S).
The equation Slf(x*{3}-3x)l=frac{2}{3}S has 10 solutions. Therefore, the answer is B. 10",
\begin{tabular}{|c|c c c c c|} \hline \(\{\)t_{1}<-2; -2<t_{2}, t_{3}<2; t_{4}, t_{5}, t_{6}>2\(\}\)S. \\ \hline \end{tabular} Considering the function S(t)=x*{3}-3xS, we have Sf^{\(\wedge\)}(prime)(x)=3x*{2}-3; t^{\(\wedge\)}(prime)(x)=0 Leftrightarrow x= \(\pm\)pm1S.The sign variation table of S(x)S is:unBased on the table of variations, we have:unbegin{tabular}{|c|c c c c|} \hline \(\{\)t_{1}<-2; -2<t_{2}, t_{3}<2; t_{4}, t_{5}, t_{6}>2\(\}\)S. \\ \hline \end{tabular} Considering the function S(t)=x*{3}-3xS, we have Sf^{\(\wedge\)}(prime)(x)=3x*{2}-3; t^{\(\wedge\)}(prime)(x)=0 Leftrightarrow x= \(\pm\)pm1S.The sign variation table of S(x)S is:unBased on the table of variations, we have:unbegin{tabular}{|c|c c c c|} \hline \(x\) & \(-\infty\) & \(-1\) & \(1\) & \(+\infty\) \\ \hline \(f^{\prime}(x)\) & & \(+\) & \(\dot{0}\) & \(-\) & \(\dot{0}\) & \(+\) \\ \hline \(f(x)\) & & & \(2\) & \(0\) & \(+\infty\) \\ \hline \end{tabular} Based on the table of variations, we have:
* The equation Sx*{3}-3x=t_{1}S has one solution (since S(t_{1}<-2)S).
* Each equation Sx*{3}-3x=t_{2}, x*{3}-3x=t_{3}S has three distinct solutions (since S(t_{1}<-2)S.
* Each equation Sx*{3}-3x=t_{4}, x*{3}-3x=t_{5}, x*{3}-3x=t_{6}S has one solution (since S(t_{4}, t_{5}, t_{6}>2S).
* Each equation Sf(x*{3}-3x)l=frac{2}{3}S has 10 solutions. Therefore, the answer is B. 10",
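Assuming each exam is stored as a JSON list of such records, a few lines of Python are enough to load a file and inspect it; the file path below is illustrative:

```python
import json
from collections import Counter

# Illustrative path; each exam file is assumed to hold a list of question records.
with open("Math/exam_2023_01.json", encoding="utf-8") as f:
    questions = json.load(f)

print(f"{len(questions)} questions loaded")

# Questions with a non-empty "IQ" field come with an image and therefore
# require visual question answering rather than text-only question answering.
with_image = sum(1 for q in questions if q.get("IQ", "").strip())
print(f"{with_image} questions reference an image")

# Distribution of the answer key over the four choices.
print(Counter(q["C"] for q in questions))
```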
### Language
Vietnamese and English were used in the construction of the VNHSGE dataset. VNHSGE-V is in Vietnamese and VNHSGE-E is in English. GPT-4/ChatGPT was used to translate VNHSGE-V into VNHSGE-E. According to earlier research [12], [62], and [63], GPT-4/ChatGPT can successfully serve as the appropriate translation engine in this circumstance. It should be noted that ChatGPT or BingChat were used to translate the illustrative examples for the dataset presented in this work from Vietnamese to English.
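As a hedged illustration of this translation step, the snippet below sends one Vietnamese field to a GPT-4-class model through the OpenAI chat API; the model name, prompt wording, and the requirement to preserve LaTeX verbatim are our assumptions, and the actual procedure used to build VNHSGE-E may differ:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def translate_field(text, model="gpt-4"):
    """Translate a Vietnamese exam field into English, keeping LaTeX untouched."""
    prompt = (
        "Translate the following Vietnamese exam text into English. "
        "Keep all LaTeX formulas and the answer labels (A., B., C., D.) unchanged.\n\n"
        + text
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content


# record["Q"] and record["E"] hold the Vietnamese question and explanation, e.g.:
# english_question = translate_field(record["Q"])
```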
### Subdataset
Table 4 shows the VNHSGE dataset structure. The dataset for mathematics and English consists of 2500 multiple-choice questions per subject, while the other multiple-choice subjects have 2000 questions. Literature has 50 exams with 300 essay questions. The dataset contains a large number of questions spanning various topics, ranging from recall-level knowledge to complex multi-step reasoning requirements (see Appendix section C for more details of examples).
#### 5.3.1 Mathematics
In contrast to a number of earlier mathematics datasets, including the Mathematics dataset [53], MATH dataset [31], GSM8K dataset [32], and ScienceQA dataset [30], the VNHSGE mathematics dataset covers a wide range of topics, including spatial geometry, number series (arithmetic progression, geometric progression), combinations and probability, derivatives and applications, exponential and logarithmic functions, primitives and integrals, complex numbers, polyhedrons, rotating blocks, and Oxyz spatial calculus. To help models learn how to provide answer derivations and explanations, the dataset includes questions and related solutions, which are supplied as complete step-by-step solutions (C.1). The VNHSGE mathematics dataset also includes straightforward to complicated questions, necessitating strong mathematical reasoning skills from LLMs in both question answering and visual question answering tasks.
First, the knowledge level question (C.1.1) has been created such that LLMs can quickly and simply solve it using their fundamental understanding. We need 1-2 steps to solve this kind of question. The mathematical calculation skills of LLMs may be put to the test by questions like (\(+\ -\ \times\ \div\ \int\ \frac{\mathrm{d}}{\mathrm{d}x}\)). In order to answer the comprehension level questions (C.1.2), LLMs must then infer a few steps to arrive at the appropriate answer. LLMs' capacity for reasoning is put to the test by this kind of question at the level of an average student. Further complicating matters for LLMs is the fact that these kinds of application level problems (C.1.3) mix several different mathematical ideas and need multiple complicated reasoning steps. These inquiries may assess a model's capacity for rational thinking and mathematical knowledge synthesis. Last but not least, the high application level questions (C.1.4) frequently feature unique solutions based on advanced mathematical reasoning and practical problem-solving techniques. LLMs need to have very strong deductive reasoning skills and expertise in solving difficult mathematical problems in order to answer these kinds of inquiries.
The VNHSGE mathematics dataset is a thorough collection that addresses a variety of mathematical topics. The dataset was created to evaluate LLMs' capacity for mathematical reasoning on a range of levels, including knowledge,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Subject** & **Exam Type** & **Number of questions per exam** & **Number of exams** & **Question Total** \\ \hline Mathematics & Multiple choice & 50 & 50 & 2500 \\ Literature & Essay & 6 & 50 & 300 \\ English & Multiple choice & 50 & 50 & 2500 \\ Physics & Multiple choice & 40 & 50 & 2000 \\ Chemistry & Multiple choice & 40 & 50 & 2000 \\ Biology & Multiple choice & 40 & 50 & 2000 \\ History & Multiple choice & 40 & 50 & 2000 \\ Geography & Multiple choice & 40 & 50 & 2000 \\ Civic Education & Multiple choice & 40 & 50 & 2000 \\ \hline
**Total** & \multicolumn{3}{c|}{**19000 multiple-choice questions and 300 essay questions**} \\ \hline \end{tabular}
\end{table}
Table 4: VNHSGE dataset structure
comprehension, application, and high application. The questions in the dataset range in complexity from simple to complicated, therefore the models must have strong inference and reasoning skills. The dataset includes questions that may be answered in one or two steps using fundamental information, as well as problems that call for several steps and knowledge synthesis. The VNHSGE mathematics dataset is an excellent resource for developing and assessing LLMs' mathematical reasoning and inference skills since it presents a strong challenge to their mathematical aptitude in both breadth and depth.
#### 5.3.2 Literature
The literary exam, a structured assessment tool used to evaluate a student's reading comprehension and writing abilities, serves as the foundation for the VNHSGE literature dataset. This dataset can be deployed for the training and evaluation of LLMs on a variety of language understanding tasks, including essay writing, writing proficiency, and reading comprehension. The dataset is divided into two parts: the question and the answer (C.2). The question section (C.2.1) is divided into two parts. Four questions in Part I's reading comprehension assessment ask students to analyze and understand a paragraph or poem. The questions ask students to identify the genre and any words or phrases with particular meanings and to analyze their significance. For the final question, students must give their own personal opinion of the text or assess another person's view of it. Writing abilities are the main topic of Part II, which contains two essay tasks, one requiring an argumentative social essay and the other an argumentative literary essay. The essay questions test a student's ability to formulate a coherent and succinct argument, back it with evidence, and analyze and interpret literary materials in order to develop a well-supported argument. The answer suggestions and grading guidelines are included in the solution (C.2.1). The scoring criteria are written down in great depth in the grading instructions (C.2.2), and the suggested answers are given in accordance with the evaluation criteria.
The dataset, built from the answer keys with grading guidelines and answer recommendations, can assist LLMs in strengthening their capacity to respond to inquiries and offer pertinent justifications based on specific rating metrics. By being trained on this dataset, language models can learn to better follow grading requirements and become more accurate and efficient at answering queries. The dataset offers a thorough assessment of a student's reading comprehension and writing abilities in high school literature, thereby providing a valuable tool for developing and testing LLMs on a variety of language understanding tasks, including sentiment analysis, question answering, text generation, and text summarization. Moreover, the VNHSGE literature dataset is built in Vietnamese, which challenges the NLP abilities of LLMs, as Vietnamese is a language with many layers of meaning.
#### 5.3.3 English
For datasets involving question-answering, there are plenty of options. For instance, the DREAM dataset [56] focuses on reading comprehension for dialogue while the RACE dataset [49] exclusively considers paragraph reading comprehension. Another dataset that covers eight tasks is SuperGLUE [52]. These datasets have performed admirably for the intended purposes, but they do not provide a comprehensive examination of the LLMs' general language processing abilities.
The VNHSGE English dataset contains an assortment of exam questions from high school exams that cover a variety of topics and demand a variety of linguistic abilities (C.3). In the dataset's pronunciation and stress questions (C.3.1), LLMs are asked to choose the word whose underlined portion is pronounced differently from the other three. LLMs are also required to select the proper response from a list of alternatives for questions on vocabulary and grammar (C.3.2), identify terms with opposite or similar meanings, choose the closest-meaning sentence, and fix underlined parts. In order to pass the communication skills test (C.3.3), LLMs are required to select the appropriate response for each conversation. LLMs fill in each of the numbered blanks in the reading fill-in-the-blank questions (C.3.4) by choosing the appropriate word or phrase. Furthermore, LLMs are required to read passages in order to respond to questions about reading comprehension (C.3.5). At the human level, the dataset encompasses an extensive variety of topics and activities. The dataset is also made up of questions and answers, where the answers are explained in great depth in the solutions. This aids in teaching LLMs how to think critically.
The VNHSGE English dataset is a useful tool for LLMs to enhance their proficiency in a range of topics and abilities connected to English language comprehension at the human-level performance. These models can perform better in a variety of language-related tasks, including question answering, language modeling, text generation, reading comprehension, text summarization, etc. by being trained on this dataset, which may assist these models comprehend and process natural language effectively.
#### 5.3.4 Physics
Among previous physics datasets, the TQA dataset [34] concentrates on life, earth, and physical sciences and includes both text and pictures for machine comprehension and visual question answering. Although the TQA dataset is intended for middle school students, it appears to be simple enough for LLMs in use today. The PIQA dataset [36] tests the LLMs' capacity for physical reasoning; it is suited to honing their inference abilities but leaves out the computationally demanding physics problems that they must also be able to answer. Physics-related topics such as materials, magnets, velocity and forces, force and motion, particle motion and energy, heat and thermal energy, states of matter, kinetic and potential energy, and mixtures are covered in the ScienceQA dataset [30]. Although ScienceQA covers a wide range of topics, these are merely elementary physics. The VNHSGE physics dataset, on the other hand, is geared toward high school students and focuses on more complicated topics such as electromagnetic oscillations and waves, light waves, quantum light, atomic nuclei, direct current, electromagnetic induction, and light refraction (C.4). The prior datasets can be difficult for LLMs since they demand comprehension of and connections between a wide range of scientific principles and notions. The VNHSGE physics dataset, however, may present a bigger challenge for language models because it deals with more complex and specialized physics topics and necessitates a higher level of scientific understanding and reasoning to accurately respond to the questions.
\(50\%\) of the questions in the VNHSGE physics dataset are theoretical, and \(50\%\) are practical and applied. Most theoretical problems fall under the knowledge level (C.4.1), which calls for both inference and a firm comprehension of theoretical knowledge. For questions at the comprehension level (C.4.2), there is a higher degree of inference about knowledge and mathematical abilities. The application level questions (C.4.3) come next, which have a high categorization and draw on complex physics concepts like understanding of practice and application. The high application level questions (C.4.4) are the last type. These include experimental questions as well as questions that make use of graphs related to mechanical oscillations and alternating currents. These inquiries demand a very high degree of inference, and the unique solutions call for in-depth knowledge of high school physics challenges.
Physical concepts like mechanical oscillations, waves, quantum mechanics, and atomic nuclei might be difficult for LLMs to understand and rationalize when presented with physical information from the VNHSGE. In addition to demanding the ability to retain information, the datasets additionally inquire about the ability to draw conclusions, apply ideas to concrete circumstances, and even solve challenging issues. It is a difficult undertaking for any LLMs because the high application-level questions in the dataset demand specialized knowledge and experience in addressing physics issues at the high school level.
#### 5.3.5 Chemistry
There aren't many datasets in the field of chemistry that are specifically focused on tackling questions. The SciQ dataset [57] tests LLMs on their knowledge of chemistry with multiple-choice questions. It rates the model's comprehension and deductive reasoning skills in regard to chemistry-related scientific ideas and concepts. The chemistry dataset in [18] focuses on the LLMs' accuracy in chemistry subjects from high school, including chemical reactions, ions, acids, and bases, to college, like analytical, organic, inorganic, and physical. However, there are only a few chemistry questions. Understanding and responding to questions about chemistry subjects like solutions, physical and chemical changes, atoms and molecules, and chemical reactions are the main objectives of ScienceQA dataset [30]. The VNHSGE chemistry dataset, on the other hand, presents difficulties for LLMs in understanding and responding to questions regarding a variety of chemistry topics, including metals, inorganic and organic molecules, polymers, and more (C.5). It rates the model's comprehension and deductive reasoning skills with regard to a variety of chemistry concepts and principles.
The VNHSGE chemistry dataset is made up of \(30\%\) computational tasks and \(70\%\) theoretical questions. Usually, theoretical problems require knowledge and comprehension. The knowledge-level questions are typically brief and demand information-retrieval-level knowledge (C.5.1). Subsequently, the computations in the comprehension level (C.5.2) section are rather straightforward, requiring only 1 or 2 operations for problems. Next, the high-level reasoning and the synthesis of several concepts are required to answer the application-level questions (C.5.3). Finally, the high-application questions (C.5.4) require in-depth knowledge, logical reasoning, and the synthesis of several chemical reaction equations.
The VNHSGE chemistry dataset evaluates LLMs' high-level reasoning and problem-solving abilities as well as their comprehension of chemistry principles across a variety of topics and levels of difficulty. The dataset necessitates that the models have an adequate knowledge of chemical principles and be able to implement that understanding in challenging contexts, such as the synthesis and analysis of chemical reactions.
#### 5.3.6 Biology
Similar to chemistry, there aren't many biology datasets created expressly for question answering tasks. BioASQ [33] concentrates on medical fields rather than biological ones. The SciQ [57] dataset makes it difficult for LLMs to correctly respond to Biology-related multiple-choice questions on science exams. The dataset evaluates how well the model can understand and justify biological science principles and notions. The MMLU dataset [18] assesses LLMs' accuracy in subjects from high school and college biology, including natural selection, heredity, cell cycle, and more. The ScienceQA dataset [30], on the other hand, focuses on understanding and responding to questions about molecular and cellular biology. Because of its extensive coverage of topics including genetic laws, population genetics, applications of genetics, human genetics, evolution, ecology, plant organismal biology, and animal organismal biology, the VNHSGE biology dataset presents a significant challenge to LLMs (C.6).
The questions in the VNHSGE biology dataset are highly challenging and complicated, and in order to accurately respond to them, one must have a thorough understanding of all aspects of biology. According to the dataset's design, there should be \(75\%\) theoretical questions and \(25\%\) exercises, with \(70\%\) of the questions being at the knowledge and comprehension levels and \(30\%\) of the questions focusing on application and higher-order thinking skills. The dataset, which includes questions of varying complexity, focuses on the capacity for calculation and inference. The knowledge level questions (C.6.1) demand a comprehensive understanding of biology to answer correctly, while the comprehension level questions (C.6.2) require one to three steps of deductive reasoning to find the answer. The application level questions (C.6.3) focus on areas including rules of genetics, human genetics, population genetics, and mechanisms of inheritance and mutation and call for the capacity to synthesize knowledge. The high application level questions (C.6.4) require sophisticated analysis and problem-solving skills.
The VNHSGE biology dataset is a substantial challenge for LLMs since it calls for a mix of in-depth knowledge and sophisticated reasoning abilities in order to correctly understand and respond to questions about a wide range of biology topics.
#### 5.3.7 History
Both the MMLU dataset [18] and the ScienceQA dataset [30] evaluate how well LLMs perform when answering questions about historical events. The MMLU dataset [18] assesses LLMs' accuracy on high school history concepts such as High School US History, High School European History, and High School World History, while the ScienceQA dataset [30] focuses on understanding and responding to questions about American and global history.
The purpose of the VNHSGE history dataset is to assess LLMs' knowledge of historical events and milestones as well as to give correct analysis of historical events (C.7). The dataset contains \(80\%\) questions at the knowledge and comprehension levels covering a wide range of topics including Vietnamese and global histories (C.7.1 and C.7.2). To answer these kinds of inquiries, one must not only accurately record the facts but also use historical reasoning. Across topics in Vietnamese history from 1919 to 1975, the dataset contains \(20\%\) of questions that require application and high application levels (C.7.3 and C.7.4). The majority of the questions concern comparison essays, connections between topics, links between Vietnamese history and world history, or commentary and summaries of historical periods to identify key characteristics or the substance of historical events. The capacity to analyze, contrast, and comment on historical events is necessary for these kinds of issues.
The VNHSGE history dataset is utilized for evaluating how well LLMs can recall and comprehend historical events as well as their timeframes. The questions in the dataset range from simple to complex, requiring varying degrees of deductive reasoning and inference skills. To correctly respond to the questions in the dataset, LLMs must be able to interpret and analyze complicated historical events, appreciate the relationships between them, and draw inferences from them.
#### 5.3.8 Geography
Few specialized datasets are available for geography question-answering tasks. The MMLU dataset [18] includes a few questions about high school geography concepts such as population movement, rural land use, and urban processes, while the ScienceQA dataset [30] focuses on questions about state capitals, geography, maps, and more. Additionally, the geography dataset in [64] includes 612 Bulgarian multiple-choice questions from the 12th-grade matriculation exam. The GeoTSQA dataset [58], compiled from high school exams in China, has 1,000 real questions in the geography domain contextualized by tabular scenarios. The VNHSGE geography dataset is intended to assess LLMs' knowledge of geographical concepts such as natural geography, population geography, economic sector geography, economic zone geography, sea geography, and island geography, as well as geographical skills such as atlas use, data table interpretation, and chart analysis.
The questions in the VNHSGE geography dataset are ordered in order of increasing complexity, with \(80\%\) of the questions falling into the basic category (knowledge and understanding) and \(20\%\) falling into the advanced category (\(10\%\) application and \(10\%\) high-level application) (C.8). \(50\%\) of the exam's questions, such as chart analysis (C.8.1), data table interpretation (C.8.2), and atlas use (C.8.3), involve geographic knowledge. LLMs must be able to solve problems in order to master these skills. Additionally, LLMs must be able to think logically, have a broad understanding of society, be adept at solving problems, and have a high degree of critical thinking to complete the diversified questions (C.8.4).
Questions in the VNHSGE geography dataset call for a variety of abilities, such as data analysis, chart interpretation, and atlas use, which can assist in training LLMs to comprehend and process complicated material in these fields. The dataset also contains questions that call for reasoning, problem-solving, and critical thinking, which can aid in the development of more sophisticated language skills in language models.
#### 5.3.9 Civic Education
There have been numerous attempts to construct datasets connected to the legal profession and ethics, which have recently received special attention. While the JEC-QA dataset [38] contains questions connected to the national judicial examination in China, the CJRC dataset [65] comprises documents and questions relating to legal knowledge in China. The CaseHOLD dataset [39], which focuses on finding the critical components in a legal case, is a novel and difficult dataset in the field of law. The PolicyQA dataset [66] focuses on comprehending the privacy policies of websites, while the PrivacyQA dataset [67] focuses on queries regarding the privacy policies of mobile applications; to guarantee the accuracy of the replies, both offer questions that have been reviewed by experts. The Vietnamese transportation law dataset [68] and the Vietnamese law dataset [69] both concentrate on questions pertaining to law, with the former focused on traffic law and the latter on broad legal issues. Additionally, the MMLU dataset [18] has a few questions about professional law as well as questions about international law, including torts, criminal law, contracts, and so on, and the ScienceQA dataset [30] focuses on questions about civics subjects such as social skills, governance, and the constitution. The VNHSGE civic education dataset, in contrast, is intended to provide LLMs with civic education and legal training through case studies and multiple-choice questions on topics such as legal frameworks and regulations, fundamental civil rights, and democratic principles.
The purpose of VNHSGE civic education dataset is to evaluate LLMs' understanding of and ability to apply legal concepts (C.9). \(70\%\) of the exam's questions are knowledge and comprehension level questions (C.9.1 and C.9.2). \(30\%\) of the questions are application and high application levels, focused on topics like Citizens' fundamental rights; types of legal infractions; and equal rights in certain areas of social life. There is a lot of confusion in the answer choices for questions at the application level (C.9.3), making it difficult to accurately assess and choose the right response. Complex case studies with several plotlines and characters are offered for questions at the high level (C.9.4), and it needs a thorough comprehension of legal theory to properly examine the nature of the characters' violations.
For LLMs to assess their understanding of and ability to apply legal information, particularly in the context of civic education and legal training, the VNHSGE civic education dataset is employed. The dataset includes case studies together with multiple-choice questions on topics like legal frameworks and regulations, fundamental citizen rights, democratic principles, and notions. LLMs can gain a better understanding of legal ideas and how to apply them in practical scenarios by training on this dataset, which can be helpful for a range of applications like legal research, automated legal document analysis, and legal chatbots.
## 6 Experiments
### ChatGPT and BingChat responses
**Response format**: When posing questions to LLMs, we can receive answers in various formats. To standardize response formats and simplify result processing, we request that LLMs provide replies in a specific structure. Figure 1 demonstrates an example of the required structure for LLM responses. To achieve this, we used the Explanation and Choice approach and included a "pre-question" prompt before the actual question, which combines the content of the original question with instructions for the desired response format. Standardizing the format of LLM answers is crucial for several reasons. Firstly, it enables quicker and more accurate processing of model responses. Secondly, it facilitates impartial comparison and evaluation of the performance of different LLMs. Additionally, it ensures that the solutions provided by LLMs are easy to understand and applicable for further applications. By giving LLM responses a clear and consistent structure, we can effectively harness their abilities to enhance various NLP tasks.
**Question (Word format)**:
ID: 1; IQ: (none); Q: "1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3."; C: A; IA: (none); E: "The volume of a cube with edge 2a is: V=(2a)^3=8a^3."
**Question (JSON format)**: { "ID": "Q1", "IQ": " ", "Q": "1) The volume of a cube with edge 2a is:\nA. 8a^3.\nB. 2a^3.\nC. a^3.\nD. 6a^3.", "C": "A", "IA": " ", "E": "The volume of a cube with edge 2a is: V=(2a)^3=8a^3." }
**Pre-question (JSON format)**: "I want you to answer the question in the following structure:\nChoice: "A" or "B" or "C" or "D"\nExplanation: Explain the answer\nThe question is: {}"
**New Question (Prompt)**: I want you to answer the question in the following structure:
Choice: "A" or "B" or "C" or "D"
Explanation: Explain the answer
The question is: 1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3.
**Response (JSON format)**:
{ "ID": "1", "IQ": " ", "Q": "1} The volume of a cube with edge 2a is:\(\backslash\)nA. 8a\({}^{\wedge}\)3.t\(\backslash\)nB. 2a\({}^{\wedge}\)3.t\(\backslash\)nC. a\({}^{\wedge}\)3.t\(\backslash\)nD. 6a\({}^{\wedge}\)3.", "C": "A", "IA": ", "E": "The volume of a cube with edge 2a is: V=(2a)\({}^{\wedge}\)3=8a\({}^{\wedge}\)3.", "CC": "A", "CE": "The formula for the volume of a cube is V = s\({}^{\wedge}\)3, where s is the length of one of its sides. Therefore, the volume of the cube with a side length of 2a is: V = (2a)\({}^{\wedge}\)3 = 8a\({}^{\wedge}\)3", }
**Response (Word format)**:
ID: 1; IQ: (none); Q: "1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3."; C: A; IA: (none); E: "The volume of a cube with edge 2a is: V=(2a)^3=8a^3."; CC: A; CE: "The formula for the volume of a cube is V = s^3, where s is the length of one of its sides. Therefore, the volume of the cube with a side length of 2a is: V = (2a)^3 = 8a^3."
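A minimal sketch of this wrapping and of parsing the reply back into the response fields ("CC" for the model's choice, "CE" for its explanation) is given below; the regular expressions are our own assumption about how the replies can be post-processed:

```python
import re

PRE_QUESTION = (
    "I want you to answer the question in the following structure:\n"
    'Choice: "A" or "B" or "C" or "D"\n'
    "Explanation: Explain the answer\n"
    "The question is: {}"
)


def build_prompt(record):
    """Wrap a question (field 'Q') with the required response structure."""
    return PRE_QUESTION.format(record["Q"])


def parse_response(reply):
    """Extract the model's choice ('CC') and explanation ('CE') from its reply."""
    choice = re.search(r'Choice:\s*"?([ABCD])"?', reply)
    explanation = re.search(r"Explanation:\s*(.*)", reply, flags=re.DOTALL)
    return {
        "CC": choice.group(1) if choice else None,
        "CE": explanation.group(1).strip() if explanation else "",
    }


reply = 'Choice: "A"\nExplanation: The volume is V = (2a)^3 = 8a^3.'
print(parse_response(reply))  # {'CC': 'A', 'CE': 'The volume is V = (2a)^3 = 8a^3.'}
```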
We conducted experiments using two state-of-the-art language models, ChatGPT (based on GPT-3.5) and BingChat (based on GPT-4)1, to evaluate their performance on our dataset. We assessed each model based on accuracy and provide examples of both successful and poor responses (see Appendix section C for further details).
Footnote 1: [https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4](https://blogs.bing.com/search/march_2023/Confirmed-the-new-Bing-runs-on-OpenAI%E2%80%99s-GPT-4)
In the following sections, we compare the performance of ChatGPT and BingChat on five exams per subject, comprising 30 literature essay questions and 1,700 multiple-choice questions across the other subjects. LLMs like ChatGPT and BingChat have
Figure 1: Formatted question and LLMs response.
been trained to predict the next word in a text based on the preceding words. However, these models have limitations when it comes to handling complex computational problems or requiring multi-step reasoning, even though they are capable of responding to basic questions. These LLMs may also struggle to comprehend texts with intricate contexts and may encounter difficulties in certain situations, particularly when processing the Vietnamese language. They might misinterpret certain contexts and occasionally confuse words with homonyms or antonyms.
**Mathematics**: ChatGPT and BingChat can handle knowledge and comprehension level questions (C.1.1) and (C.1.2). However, they struggle with complex calculations and logical reasoning that require advanced mathematical skills or multi-step deductive reasoning (C.1.3). These models often provide inaccurate explanations and answers and are unable to provide appropriate solution instructions for high application level problems (C.1.4).
**Literature**: ChatGPT and BingChat are capable of responding to literary queries and generating essays due to their extensive training in various domains, including literature and journalism. They have a good grasp of natural language structure and can synthesize new responses and paragraphs based on learned knowledge and input data. However, ChatGPT and BingChat still have limitations in reasoning abilities and in understanding complex language and context, particularly in languages like Vietnamese. As a result, their responses may not always be entirely accurate or suitable for the context or purpose of the question (C.2.1). ChatGPT is more suitable for language-related topics and tends to provide more relevant and emotive responses than BingChat, which is a search engine (C.2.2).
**English**: ChatGPT and BingChat are unable to respond to questions on pronunciation and stress (C.3.1), even though they both score well on other English language topics such as grammar and vocabulary (C.3.2), communication (C.3.3), reading fill-in-the-blank (C.3.4), and reading comprehension (C.3.5). Both ChatGPT and BingChat have been taught the rules and patterns of the English language, including grammar and vocabulary, through training on large English text corpora. Additionally, they receive instruction on how to comprehend and produce natural language, which covers reading fill-in-the-blank passages and reading comprehension. However, it is possible that neither BingChat nor ChatGPT received adequate training on pronunciation and stress.
**Physics**: ChatGPT and BingChat can solve physics questions at the knowledge and comprehension levels (C.4.1 and C.4.2), which are relatively simple questions about physics topics. However, they are unable to answer questions at the application and high application levels (C.4.3 and C.4.4), which frequently call for substantial knowledge and skill in understanding and applying concepts to solve problems.
**Chemistry**: ChatGPT and BingChat can respond to questions at the knowledge level (C.5.1) by memorizing facts. They often fail to generate the right response to questions at the comprehension level (C.5.2). Neither ChatGPT nor BingChat typically can provide accurate answers for challenging questions at the application level (C.5.3) and high application level (C.5.4) because these types of questions demand the capacity to infer from multiple chemical reactions and high-level synthesis knowledge.
**Biology**: Both ChatGPT and BingChat are capable of providing responses to questions at the knowledge and comprehension levels (C.6.1 and C.6.2), similar to subjects like mathematics, physics, and chemistry that require both calculation and reasoning skills. However, ChatGPT and BingChat have a very limited likelihood of correctly determining the answers to questions requiring complex thinking and the processing of information in diagrams at the application and high application levels (C.6.3 and C.6.4). These types of questions demand a deeper understanding of biology concepts and the ability to apply them in complex scenarios.
**History**: ChatGPT and BingChat do reasonably well when answering questions in the field of history at the knowledge and comprehension levels (C.7.1 and C.7.2). However, both ChatGPT and BingChat often struggle to provide accurate responses to the application and high application questions (C.7.3 and C.7.4). These types of questions require higher-order thinking skills and a deep understanding of the historical context as well as demand the ability to compare, analyze, and express a judgment on historical events and characters.
**Geography**: ChatGPT attempts to respond to questions about charts without requesting the underlying data, while BingChat does not support such questions at all (C.8.1); as a result, neither can properly answer questions that rely on charts or images. Both ChatGPT and BingChat can provide precise responses to questions about the information in a table (C.8.2) and queries related to the use of the Atlas (C.8.3). However, for questions that require analysis and interpretation at the application and high application levels (C.8.4), both ChatGPT and BingChat often struggle to give precise responses. These types of questions necessitate the ability to analyze and interpret geographical data and concepts, which the models may find challenging.
**Civic Education**: At the knowledge and comprehension levels (C.9.1 and C.9.2), ChatGPT and BingChat can provide accurate answers. However, ChatGPT often produces inaccurate responses for questions at the application level (C.9.3), while BingChat performs better. Both ChatGPT and BingChat often fail to provide precise responses when analyzing character behavior in scenario-based questions at the high application level (C.9.4).
### ChatGPT and BingChat performances
Table 5 displays ChatGPT and BingChat's performance. We can see that for subjects requiring complex computation and reasoning, such as mathematics, physics, chemistry, and biology, their performance ranges from \(48\%\) to \(69\%\). The performance of ChatGPT and BingChat is between \(56.5\%\) and \(92.4\%\) for subjects that predominantly depend on languages, such as literature, English, history, geography, and civic education. LLMs such as ChatGPT and BingChat have been trained on vast amounts of text covering a wide range of fields. However, these models lack subject-matter expertise. Mathematics, physics, chemistry, and biology often demand profound knowledge and advanced computational abilities, which may not be possessed by language models like ChatGPT and BingChat for solving such challenging problems. On the other hand, subjects like literature, English, history, geography, and civic education frequently require strong language skills and the ability to comprehend complex texts, areas in which language models like ChatGPT and BingChat may have sufficient capabilities to handle.
The performance comparison between ChatGPT and BingChat is depicted in Figure 2. BingChat performs better than ChatGPT in all categories except for literature. The difference between BingChat and ChatGPT is small in subjects like mathematics, physics, and chemistry, which require extensive computation and reasoning. ChatGPT surpasses BingChat in literature, likely because BingChat is a search engine whose retrieval-oriented answers are less suited to a subject that requires writing extended essays. In the remaining subjects BingChat outperforms ChatGPT. It should be noted that BingChat is based on GPT-4 while ChatGPT is based on GPT-3.5, and that BingChat may find accurate answers when the questions and answers are publicly available online.
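Once every record carries both the answer key ("C") and the model's parsed choice ("CC"), the per-subject numbers reported in Table 5 reduce to a simple accuracy computation; the sketch below assumes one list of graded records per subject:

```python
def accuracy(records):
    """Percentage of multiple-choice questions where the model's choice matches the key."""
    correct = sum(1 for r in records if r.get("CC") == r["C"])
    return 100.0 * correct / len(records)


# Two toy records for illustration only.
records = [
    {"C": "A", "CC": "A"},
    {"C": "B", "CC": "D"},
]
print(f"accuracy = {accuracy(records):.1f}%")  # accuracy = 50.0%
```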
### ChatGPT, BingChat, and Vietnamese Students
This section compares the effectiveness of BingChat and ChatGPT with Vietnamese students. Our aim is to determine whether LLMs possess abilities comparable to those of humans, although this comparison is challenging due to the dissimilar settings. By conducting this comparison, we can evaluate whether LLMs can serve as effective tools for Vietnamese students in various subject areas (see Appendix section D for more details of spectrum comparisons).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-19} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Mathematics**} & \multicolumn{2}{c|}{**Literature**} & \multicolumn{2}{c|}{**English**} & \multicolumn{2}{c|}{**Physics**} & \multicolumn{2}{c|}{**Chemistry**} & \multicolumn{2}{c|}{**Biology**} & \multicolumn{2}{c|}{**History**} & \multicolumn{2}{c|}{**Geography**} & \multicolumn{2}{c|}{**Civic Education**} \\ \hline & ChatGPT & BingChat & ChatGPT & BingChat & ChatGPT & BingChat & ChatGPT & BingChat & ChatGPT & BingChat & ChatGPT & BingChat & ChatGPT & BingChat & ChatGPT & BingChat & ChatGPT & BingChat \\ \hline
**2019** & 52 & 56 & 75 & 52.75 & 76 & 92 & 60 & 55 & 40 & 55 & 60 & 67.5 & 42.5 & 82.5 & 50 & 75 & 60 & 75 \\ \hline
**2020** & 66 & 56 & 68.9 & 51.25 & 86 & 96 & 62.5 & 67.5 & 42.5 & 57.5 & 60 & 72.5 & 47.5 & 85 & 52.5 & 70 & 70 & 87.5 \\ \hline
**2021** & 60 & 66 & 75 & 60.25 & 76 & 86 & 60 & 67.5 & 62.5 & 50 & 52.5 & 67.5 & 55 & 90 & 75 & 82.5 & 62.5 & 92.5 \\ \hline
**2022** & 62 & 60 & 56.3 & 70 & 80 & 94 & 65 & 67.5 & 47.5 & 47.5 & 57.5 & 72.5 & 60 & 92.5 & 62.5 & 85 & 82.5 & 90 \\ \hline
**2023** & 54 & 62 & 64.8 & 49.75 & 78 & 94 & 57.5 & 72.5 & 47.5 & 52.5 & 60 & 65 & 77.5 & 92.5 & 67.5 & 85 & 77.5 & 82.5 \\ \hline
**AVG** & **58.8** & **60** & **68** & **56.8** & **79.2** & **92.4** & **61** & **66** & **48** & **52.8** & **58** & **69** & **56.5** & **88.5** & **61.5** & **79.5** & **70.5** & **85.5** \\ \hline \end{tabular}
\end{table}
Table 5: ChatGPT and BingChat performances on VNHSGE dataset
Figure 2: Comparison of ChatGPT and BingChat performances on VNHSGE dataset.
Figure 3 illustrates a comparison of the performance among ChatGPT, BingChat, and Vietnamese students in three core subjects: mathematics (D.1), literature (D.2), and English (D.3). These subjects are integral parts of the exam and are required for all students.
**Mathematics**: According to the findings, ChatGPT and BingChat are unable to match the performance of human students in Vietnam's high school mathematics curriculum. Despite being trained on vast amounts of textual data from the internet, they struggle with complex mathematical problems, although they can handle simpler mathematical concepts. The high school mathematics questions require reasoning, logical thinking, analytical skills, and the ability to apply knowledge in practical situations. To achieve performance on par with humans in high school mathematics, ChatGPT and BingChat's mathematical abilities need substantial improvement.
**Literature**: Both ChatGPT and BingChat have been extensively trained on large Vietnamese language datasets, enabling them to analyze and generate essays with considerable proficiency. In terms of high school literature, the performance of LLMs such as ChatGPT and BingChat is at a human-like level. However, it should be emphasized that ChatGPT and BingChat are unable to write emotionally rich essays or conduct in-depth literary analyses. In summary, ChatGPT can be considered a supporting tool for Vietnamese students studying literature.
**English**: According to the results, ChatGPT and BingChat performed better in high school English compared to Vietnamese students. It should be mentioned that Vietnamese students' English proficiency is not very high compared to the global average. ChatGPT and BingChat are effective tools that Vietnamese students can utilize to study foreign languages.
Figure 3: Comparison in core subjects.
Figure 4 depicts a comparison of the performance among ChatGPT, BingChat, and Vietnamese students in the natural combination, including physics (D.4), chemistry (D.5), and biology (D.6), respectively.
**Physics**: The performance of ChatGPT and BingChat is comparable to the average score of Vietnamese students in physics. However, their scores are still below those achieved by most Vietnamese students. With thorough training in the field of physics, LLMs can provide accurate answers and insightful explanations to assist students in understanding physics. The models, however, still require development, particularly for physics problems that call for intricate computation and reasoning.
**Chemistry**: ChatGPT and BingChat still do not possess the same level of proficiency in chemistry as Vietnamese high school students do. While these LLMs can provide relevant knowledge and solutions in the field of chemistry, they lack the expertise required to solve complex chemistry problems that demand advanced levels of analysis and reasoning. However, in terms of delivering theoretical knowledge and information, it is certainly possible for LLMs to become useful tools for Vietnamese students in high school chemistry.
**Biology**: The findings indicate that ChatGPT and BingChat outperform Vietnamese students in biology. It is important to note that many Vietnamese students give biology lower priority than mathematics, physics, and chemistry, and their biology scores are correspondingly lower than their scores in those subjects. LLMs are capable of addressing basic questions in biology, such as definitions, concepts, simple problem-solving, and specific examples. Therefore, LLMs can serve as helpful resources for high school students to comprehend fundamental biology concepts and problems.
Figure 4: Comparison in natural combination
Figure 5 presents a comparison of the performance among ChatGPT, BingChat, and Vietnamese students in the social combination, including history (D.7), geography (D.8), and civic education (D.9), respectively.
**History**: While BingChat performs better, ChatGPT's results are comparable to those of Vietnamese students. With extensive and diverse training datasets, ChatGPT and BingChat are able to understand and process different types of historical questions and provide logical and useful responses. Although ChatGPT and BingChat may still encounter challenges with complex questions, they can be valuable resources for high school students in history.
**Geography**: While BingChat achieves higher scores, ChatGPT performs at a similar level to Vietnamese students. The results indicate that both ChatGPT and BingChat are capable of understanding and responding to high school-level geography questions. They can effectively teach geography concepts and terminology, enhancing students' learning in high school geography. However, they may still face limitations when dealing with complex and in-depth inquiries that require advanced critical thinking.
**Civic Education**: BingChat and ChatGPT showcase human-like abilities in the field of civic education. With their training in civic education and law-related subjects, they possess the expertise to provide high school-level knowledge in areas such as politics, law, citizen rights and responsibilities, and other social issues. Therefore, as reference tools, ChatGPT and BingChat can be highly valuable for Vietnamese students studying civic education.
### VNHSGE dataset and other datasets
In Figure 6, the performance of ChatGPT and BingChat on the VNHSGE dataset is compared to other datasets in the GPT-4 Report [12]. The results show that ChatGPT's performance on the VNHSGE dataset is comparable to
Figure 5: Comparison in social combination.
that of GPT-3.5 across subjects ranging from AP Statistics to AP Psychology. BingChat improves on this performance in text-based subjects such as history, geography, civic education, and English. However, BingChat does not significantly outperform ChatGPT in subjects like mathematics, physics, chemistry, and biology, which require complex computation and reasoning, whereas GPT-4 exhibits better performance than GPT-3.5 in tasks of a similar nature. This could be due to the structure of questions in these subjects from the VNHSGE dataset, which presents challenges for BingChat, particularly at the application and high application levels.
## 7 Conclusion
In this paper, we present the VNHSGE dataset, which is intended to evaluate and train LLMs' multitask abilities such as question answering, text generation, reading comprehension, visual question answering, and more. The dataset covers nine subject areas from the Vietnamese National High School Graduation Examination, including social and language subjects such as literature, English, history, geography, and civic education, as well as calculation and inference subjects like mathematics, physics, chemistry, and biology. The dataset encompasses a wide range of question types, spanning from basic recall to complex calculation and reasoning questions. The VNHSGE dataset serves as a valuable resource for training LLMs, offering a diverse set of challenges at the human level. The dataset helps researchers identify critical flaws in models, thereby facilitating the improvement of LLMs' abilities. The VNHSGE dataset has various benefits for developing LLMs, including:
* Comprehensive coverage: The dataset provides thorough coverage of a wide range of topics in nine high school subjects. This enables more thorough training of language models across diverse computing and inference domains.
Figure 6: Performance of ChatGPT, BingChat on VNHSGE dataset and GPT-3.5, GPT-4 on other datasets.
* Various question types: The dataset contains a wide range of question types, from straightforward knowledge-based inquiries to intricate application-based inquiries requiring extensive investigation and evaluation. This offers a wide range of learning challenges for language models.
* Different difficulty levels: The VNHSGE dataset contains questions that range in complexity from simple to sophisticated, making it possible to train models that can handle a variety of question challenges.
* Vietnamese language: Given that the dataset is in Vietnamese, it is possible to train language models in a language other than English, enhancing their adaptability and global applicability.
Testing state-of-the-art LLMs, ChatGPT and BingChat, on the VNHSGE dataset showed that the dataset is well suited for evaluating LLMs. This outcome not only demonstrates the models' abilities but also highlights opportunities and challenges for deploying LLMs in education.
The VNHSGE dataset demonstrates the advantages and disadvantages of LLMs and offers information about possible instructional applications for these models. Additionally, it poses a challenge for LLMs to enhance their abilities to handle challenging, high-level application questions.
|
2307.04182 | Review and Outlook of Solar-Energetic-Particle Measurements on
Multispacecraft Missions | The earliest evidence on spatial distributions of solar energetic particles
(SEPs) compared events from many different source longitudes on the Sun, but
the early Pioneers provided the first evidence of the large areas of equal SEP
intensities across the magnetically-confined "reservoirs" late in the events.
More-detailed measurements of the importance of self-generated waves and
trapping structures around the shock waves that accelerate SEPs were obtained
from the Helios mission plus IMP 8, especially during the year when the two
Voyager spacecraft also happened by. The extent of the dozen widest SEP events
in a solar cycle, that effectively wrap around the Sun, was revealed by the
widely separated STEREO spacecraft with three-point intensities fit to
Gaussians. Element abundances of the broadest SEP events favor average coronal
element abundances with little evidence of heavy-element-enhanced "impulsive
suprathermal" ions that often dominate the seed population of the shocks, even
in extremely energetic local events. However, it is hard to define a
distribution with two or three points. Advancing the physics of SEPs may
require a return to the closer spacing of the Helios era with coverage mapped
by a half-dozen spacecraft to help disentangle the distribution of the SEPs
from the underlying structure of the magnetic field and the accelerating shock. | Donald V. Reames | 2023-07-09T14:14:09Z | http://arxiv.org/abs/2307.04182v2 | **Review and Outlook of Solar-Energetic-Particle Measurements**
**on Multispacecraft Missions**
_Donald V. Reames_
Institute for Physical Science and Technology, University of Maryland, College Park, MD, USA [email protected]
**Abstract** The earliest evidence on spatial distributions of solar energetic particles (SEPs) compared events from many different source longitudes on the Sun, but the early _Pioneers_ provided the first evidence of the large areas of equal SEP intensities across the magnetically-confined "reservoirs" late in the events. More-detailed measurements of the importance of self-generated waves and trapping structures around the shock waves that accelerate SEPs were obtained from the _Helios_ mission plus IMP 8, especially during the year when the two _Voyager_ spacecraft also happened by. The extent of the dozen widest SEP events in a solar cycle, that effectively wrap around the Sun, was revealed by the widely separated STEREO spacecraft with three-point intensities fit to Gaussians. Element abundances of the broadest SEP events favor average coronal element abundances with little evidence of heavy-element-enhanced "impulsive suprathermal" ions that often dominate the seed population of the shocks, even in extremely energetic local events. However, it is hard to define a distribution with two or three points. Advancing the physics of SEPs may require a return to the closer spacing of the _Helios_ era with coverage mapped by a half-dozen spacecraft to help disentangle the distribution of the SEPs from the underlying structure of the magnetic field and the accelerating shock.
**Keywords: solar energetic particles, shock waves, coronal mass ejections, solar jets, solar system abundances, multispacecraft missions, heliosphere.**
## 1 Introduction
The spatial distribution of solar energetic particles (SEPs), and its variation with time, particle species, and energy, is fundamental to an understanding of the physics of particle acceleration and transport in the heliosphere. How much of the variation we see at a single spacecraft is a true time variation and how much is spatial variation being convected past? Is the SEP source itself broadly extended in space or do SEPs somehow diffuse out of a limited source? Does a shock source sample different abundances of seed ions from different places or, for example, does Fe simply scatter less than O, to produce early enhancements and later suppressions in Fe/O? Multispacecraft comparisons can be a key to distinguishing the physical effects dependent upon space and time.
### SEP History and Context
Multispacecraft measurements, and the perceived need for them, generally followed the study of solar energetic particles (SEPs) on or near Earth. The SEP events observed first (Forbush 1946) were rare, large, energetic "ground-level enhancements" (GLEs) where GeV protons produce a nuclear cascade through the atmosphere to ground level that enhances the continuous signal produced similarly by the galactic cosmic rays (GCRs). Since solar flares were found to accompany these early SEP events, flares were considered a possible source. However, the spatial span of these flares, stretching from the east on the Sun to behind the western limb, raised a significant problem: how could the SEPs cross magnetic field lines radiating out from the Sun to find their way to Earth?
Meanwhile, solar radio astronomers, also using ground-based instruments, had identified different sources triggered by energetic solar electrons. Radio emission, excited at the local plasma frequency, depends upon the square root of the local electron density, which decreases with distance from the Sun. Wild et al. (1963) described radio type III bursts where frequencies decreased rapidly, excited by 10 - 100 keV electrons that streamed out from a source near the Sun. There are also type II bursts, where the source moved at the slower speed of a \(\sim\)1000 km s\({}^{-1}\) shock wave. Wild et al. (1963) suggested two types of SEP sources: point sources of mostly electrons near the Sun and fast shock waves that could accelerate energetic protons, like those that produce GLEs. Even though shock acceleration was well known in other contexts, like the supernova sources of GCRs, Wild et al. (1963) were far ahead of their time in solar physics. Twenty years later, the clear 96% association of large SEP events with shocks driven by fast, wide coronal mass ejections (CMEs) was established by Kahler et al. (1984). Yet ten years after that, Gosling (1993, 1994) still needed to point out the error of the "Solar Flare Myth".
Parker (1965) explained particle transport in terms of pitch-angle scattering as SEPs followed magnetic field lines out from the Sun. Diffusion theory is an important tool when there is actually a physical mechanism, like pitch-angle scattering, to produce the random walk. There is also a random walk of the magnetic field footpoints (Jokipii and Parker 1969; Li and Bian 2023) prior to events, that can produce an effective random walk perpendicular to the mean magnetic field, but it is independent of SEP-event parameters and is completely inadequate to explain a huge spread of SEPs far from a presumed source longitude near a flare. Other schemes such as the "birdcage" model (Newkirk and Wenzel 1978) were also invented to spread SEPs from a flare source (see review, Sec. 2.3 in Reames 2021a). Reinhard and Wibberenz (1974) envisioned a mysterious "fast propagation region" extending 60\({}^{\circ}\) from the flare to spread the SEPs prior to their slower interplanetary journey. Could this region actually match the surface of a shock? After decades of resistance, a flare source has mainly been abandoned for the largest SEP events that are now generally attributed to spatially extensive CME-driven shock waves (Mason et al 1984; Gosling 1993, Reames 1995b, 1999, 2013, 2021b; Zank et al. 2000, 2007; Lee et al. 2012; Desai and Giacalone 2016), especially for GLEs (Tylka and Dietrich 2009; Gopalswamy et al. 2012, 2013; Mewaldt et al. 2012; Raukunen et al. 2018). Observations (e.g. Kahler et al. 1984) did replace the "fast propagation region" with the surface of a CME-driven shock wave that actually accelerates the particles, beginning at 2 - 3 solar radii (Tylka et al 2003; Reames 2009a, 2009b; Cliver et al. 2004), and continuing far out into the heliosphere. We will soon see from STEREO observations (e.g. Figure 5) that shocks easily wrap around the Sun expanding widely across magnetic field lines where SEPs alone cannot go.
As the element and isotope abundances in SEPs began to be measured they would present new evidence for two different physical sources of SEPs. The earliest measurements, during large SEP events with nuclear emulsions on sounding rockets, extended element abundances up to S (Fichtel and Guss 1961) then to Fe (Bertsch et al. 1969). Later studies showed that average SEP element abundances in large events were a measure of coronal abundances that differed from photospheric abundances as a simple function of the elements' first ionization potential (FIP; Meyer 1985; Reames 1995, 2014, 2021a, b). The FIP-dependence of SEPs differs fundamentally from that of the solar wind (Reames 2018a; Laming et al. 2019), probing the physics of formation of the corona itself.
However, early measurements in space soon identified a completely new type of event, distinguished by extremely high abundances of \({}^{3}\)He in some events, such as \({}^{3}\)He/\({}^{4}\)He = 1.52 \(\pm\) 0.1 (e.g. Serlemitsos and Balasubrahmanyan 1975), vs. a solar value of \(\sim\)5 \(\times\) 10\({}^{-4}\). Production of \({}^{3}\)He from \({}^{4}\)He by nuclear fragmentation was ruled out by the lack of \({}^{2}\)H and by lack of Li, Be, and B fragments from C and O, e.g. Be/O and B/O \(<\) 2 \(\times\) 10\({}^{-4}\) (McGuire et al., 1979; Cook et al., 1984). The huge enhancements of
\({}^{3}\)He were produced by new physics, involving a wave-particle resonance (e.g. Fisk 1978; Temerin and Roth 1993) with complex spectra (Mason 2007; Liu et al. 2004, 2006). The \({}^{3}\)He-rich events were associated with the beamed non-relativistic electrons (Reames et al 1985) and their type III radio bursts (Reames and Stone 1986) studied by Wild et al. (1965). Element abundances of these "impulsive" \({}^{3}\)He-rich events were also different from those of the large "gradual" shock-associated events (Mason et al. 1986; Reames 1988); here the enhancements increased as a power of the ion mass-to-charge ratio _A/Q_ (Reames et al. 1994), which was especially clear when it became possible to measure elements above Fe with element groups resolved as high as Au and Pb (Reames 2000; Mason et al. 2004; Reames and Ng 2004). Average enhancements varied as _(A/Q)\({}^{3.6}\)_ with \(Q\) value determined at a temperature of \(\sim\)3 MK (Reames et al. 2014a, b). In fact, this power law can be used in a best-fit method to determine source plasma temperatures (Reames 2016, 2018b). Impulsive SEP events have been associated with magnetic reconnection (Drake et al. 2009) on open field lines in solar jets (Bucik 2020) which also eject CMEs that sometimes drive shocks (Kahler et al 2001; Nitta et al. 2006, 2015; Wang et al. 2006; Bucik et al. 2018a, 2018b, 2021) fast enough to reaccelerate the enhanced \({}^{3}\)He and heavy ions along with ambient protons and ions (Reames 2019, 2022b). Derived abundances from \(\gamma\)-ray lines suggest that flares involve similar physics (Mandzhavidze et al. 1999; Murphy et al. 1991, 2016), but those accelerated particles are trapped on loops, losing their energy to \(\gamma\)-rays, electron bremsstrahlung, and hot, bright plasma. The opening magnetic reconnections that drive jets also must close neighboring fields to form flares (see e.g. Figure 4 in Reames 2021b).
Shock waves can also accelerate residual suprathermal impulsive ions that can pool to provide a seed population (Mason et al. 1999, Tylka et al. 2001, 2005; Desai et al. 2003; Tylka and Lee 2006). The combination of two fundamental physical acceleration mechanisms, magnetic reconnection and shock acceleration, and two distinctive element abundance patterns, led Reames (2020, 2022b) to suggest four distinguishable SEP-event abundance pathways:
SEP1: "Pure" impulsive magnetic reconnection in solar jets with no fast shock.
SEP2: Jets with fast, narrow CMEs driving shocks that reaccelerate SEP1 ions plus ambient coronal plasma. Pre-enhanced SEP1 ions dominate high \(Z\), ambient protons dominate low \(Z\).
SEP3: Fast, wide CME-driven shocks accelerate SEP1 residue from active-region pools from many jets, plus ambient plasma. Again the SEP1 seed ions dominate high \(Z\).
SEP4: Fast, wide CME-driven shocks accelerate ions where ambient plasma completely dominates.
Persistent pools of SEP1-residual seed ions available for reacceleration in SEP3 events have now been widely observed and reported (Richardson et al. 1990; Desai et al. 2003; Bucik et al. 2014, 2015; Chen et al. 2015; Reames 2022a) and may be fed by numerous impulsive events too small to be distinguished individually.
Large gradual SEP events are frequently accompanied initially by type-III bursts. In principle, these impulsive jets could inject a SEP1 seed population. However, any such injection of SEP1 ions seems to be swamped by coronal seed ions in the many large SEP4 events. The extremely abundant type-III electrons can be distinguished from shock-accelerated electrons by their proximity to the event source; the latter emerge only in poorly-connected events (Cliver and Ling 2007).
The spatial extent of these features and the underlying physics is of considerable interest. Can we find spatial differences in the seed populations sampled by shocks? Unfortunately, each point on a shock moving radially across Parker-spiral fields will have spread particles over 50\({}^{\rm o}\) - 60\({}^{\rm o}\) by the time it reaches 1 AU (Reames 2022a). This tends to blur any initial spatial variations in abundances.
Before we could monitor a single SEP event at multiple locations we could study multiple events at Earth from different source longitudes on the Sun. Using data from IMP 4, 5, 7, 8 and ISEE 3 over nearly 20 years, Cane et al. (1988) ordered the 20 MeV proton profiles of 235 large SEP events as a function of solar source longitude. This study gave the typical extent and time profiles of an "average" SEP event, sliced as a function of longitude. Recently, Reames (2023) revisited this study adding examples that better illustrate the evolving role of the shock source as it propagates outward from the corona. A collection of sample time distributions of protons vs. longitude from this work is shown in **Figure 1**. While the SEP events in **Figure 1** are all different, we expect that slices of a single event at different longitudes could show similar features.
Historically, the intensity peaks at the shock seen on several of the profiles in **Figure 1** were called "energetic storm particle" (ESP) events. During their acceleration, particles are trapped near the shock by self-generated Alfven waves (Bell 1978a, b; Lee 1983; 2005) creating an autonomous structure that can propagate onto new field lines where earlier accelerated particles may be absent, as in **Figure 1e**(Reames 2023).
## 3 Multispacecraft Observations
### Pioneer
Some of the early _Pioneer_ spacecraft were launched into Earth-like solar orbits, although coverage was only hours per day. When there was no spacecraft near Earth, this led to the awkward comparison of 15 MeV proton data from _Pioneer_ 6 and 7 with GeV ground-level neutron-monitor measurements at Earth (Bukata et al. 1969). Fortunately, McKibben (1972) was able to include data from IMP 4 near Earth. While most of these spatial distributions were analyzed in terms of adjustable coefficients in the fashionable perpendicular diffusion, rather than shock acceleration, McKibben also noted that late in SEP events, proton intensities could be identical over large spans of longitude. Later named "reservoirs" by Roelof et al. (1992) these regions involved particles quasi-trapped magnetically behind the shock; as the volume of this magnetic bottle expands adiabatic deceleration (e.g. Kecskemety et al. 2009) decreases all intensities, preserving spectral shapes (Reames et al. 1997; Reames 2013). Extreme SEP scattering, once used to explain this slow decay, is actually found to be negligible in reservoirs since particles from new \({}^{3}\)He-rich events travel scatter-free across them (Mason et al. 1989; Reames 2021a).
### Ulysses
Whenever there was a dependable spacecraft stationed near Earth, like IMP 8, any traveling spacecraft, such as the solar-polar-orbiting _Ulysses_, allowed two-spacecraft comparisons. The reservoir comparisons of Roelof et al. (1992) found uniform intensities behind the shock, extending radially over 2.5 AU from IMP 8 to _Ulysses_. In fact, _Ulysses_ observed reservoirs at heliolatitudes up to \(>\)70\({}^{\circ}\), N and S (Lario 2010), and in other electron observations (Daibog et al. 2003).
### Helios, IMP, and Voyager
The two _Helios_ spacecraft followed neighboring solar orbits from 0.3 to 1.0 AU beginning in 1974. Beeck et al. (1987) used data from these spacecraft to study the radial and energy dependence of diffusive scattering of protons, while, in a larger study, Lario et al. (2006) fit the peak intensities and the fluence of events to the form \(R^{n}\exp[-k(\phi-\phi_{0})^{2}]\), where \(n\) and \(k\) are constants, \(R\) is the observer's radius in AU, \(\phi\) is the solar longitude of the observer and \(\phi_{0}\) is that of the event centroid. For a Gaussian distribution, \(k=(2\sigma^{2})^{-1}\). Lario et al. (2006, 2007) also considered the time variation of the point where the observer's field line intercepts the expanding shock source, an important feature of shock acceleration defined by Heras et al. (1995).
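The longitudinal dependence in this fit is easy to recover numerically. The following sketch is my own illustration with invented numbers, not data or code from Lario et al. (2006): it fits multi-spacecraft peak intensities to \(I=I_{0}R^{n}\exp[-k(\phi-\phi_{0})^{2}]\) in log space and converts \(k\) to the Gaussian width \(\sigma=(2k)^{-1/2}\).

```python
# Minimal sketch: fit peak intensities from several spacecraft to
# I = I0 * R**n * exp(-k*(phi - phi0)**2); the longitudinal part is a
# Gaussian of width sigma = 1/sqrt(2k).  All numbers below are invented.
import numpy as np
from scipy.optimize import curve_fit

def log_peak(X, logI0, n, k, phi0):
    R, phi = X                                    # radius [AU], longitude [deg]
    return logI0 + n * np.log(R) - k * (phi - phi0) ** 2

R   = np.array([0.4, 0.6, 1.0, 1.0, 1.0])         # observer radii [AU]
phi = np.array([-30.0, 10.0, 25.0, 60.0, -40.0])  # observer longitudes [deg]
I   = np.array([300.0, 180.0, 47.0, 23.0, 23.0])  # peak intensities (arbitrary units)

popt, _ = curve_fit(log_peak, (R, phi), np.log(I),
                    p0=[np.log(50.0), -2.0, 5e-4, 0.0])
logI0, n, k, phi0 = popt
sigma = 1.0 / np.sqrt(2.0 * k)
print(f"n = {n:.2f}, centroid = {phi0:.1f} deg, sigma = {sigma:.1f} deg")
```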
_Helios_ provided an excellent opportunity to study spatial distributions of shock-accelerated particles as well as their reservoirs, especially during 1978 when the _Voyager_ spacecraft were nearby (Reames et al. 1996, 1997, 2013; Reames 2023). An especially interesting event is shown in **Figure 2** where the intensities of SEPs at _Voyager 2_**(Figure 2a or 2e)** do not begin to increase until after the shock passes S1 **(Figure 2b),** presumably because this is where it first intercepts the field line to _Voyager_. Proton intensities then slowly rise as _Voyager_ becomes connected to stronger and stronger regions of the approaching shock until the ESP structure finally arrives at S4 on 6 January. Notice in **Figure 2a**
that the peak intensities near the shock are similar at all four spacecraft. The ESP structure forms as protons streaming away from the shock generate resonant waves of wave number \(k\approx B/\mu P\), where \(B\) is the field strength, \(P\) is the proton rigidity and \(\mu\) is the cosine of its pitch angle (Lee 1983, 2005; Ng and Reames 2008; Reames 2023). These waves trap particles in ESP structures. As the structure moves out to lower \(B\), the resonance shifts so that high energies preferentially leak away early, as also seen in **Figure 1e**. _Voyager_ sees the pure "naked" ESP event with few of the streaming protons that created it. The shock that eventually arrives at _Voyager_ generated the intense streaming protons seen early by _Helios 1_ and an intermediate structure as it passed _Helios 2_ and IMP 8.
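To give a sense of scale for this resonance condition (my own back-of-the-envelope numbers, not values from the paper), the same relation written in SI form is \(k\approx eB/\mu p\), with \(p\) the proton momentum, and it can be evaluated for protons of a few MeV to a few hundred MeV in a typical interplanetary field of a few nT:

```python
# Rough numerical illustration of the cyclotron resonance k ~ B/(mu*P):
# a proton of momentum p in a field B resonates with waves of
# k ~ e*B/(mu*p), i.e. wavelength ~ 2*pi*mu*(gyroradius).
import numpy as np

e, c, m_p = 1.602176634e-19, 2.99792458e8, 1.67262192e-27   # SI units

def resonant_wavelength(T_MeV, B_nT, mu=1.0):
    """Resonant wavelength [m] for a proton of kinetic energy T_MeV in B_nT nT."""
    T = T_MeV * 1e6 * e                       # kinetic energy [J]
    E0 = m_p * c**2                           # proton rest energy [J]
    p = np.sqrt(T**2 + 2.0 * T * E0) / c      # relativistic momentum [kg m/s]
    return 2.0 * np.pi * mu * p / (e * B_nT * 1e-9)

for T in (1.0, 10.0, 100.0):
    print(f"{T:6.1f} MeV proton, B = 5 nT: lambda ~ {resonant_wavelength(T, 5.0):.1e} m")
```

This only shows orders of magnitude; as \(B\) drops with distance, the resonant wavelength for a given rigidity lengthens, which is the sense of the shift referred to above.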
The spacecraft distribution for the event shown in **Figure 2** is unique, in that spacecraft sample the SEPs produced as the source shock moves radially. The width of this CME is limited, but the spacecraft are positioned to follow the evolution of the event: (1) well-connected _Helios 1_ samples its central production near the Sun, (2) _Helios 2_ and IMP 8 sample its production near 1 AU, and (3) _Voyager 2_ scans its western flank then samples its central strength as it passes 2 AU. A fortuitous occurrence: NASA did not design _Voyager_ as a complement to _Helios_.
Another fortuitous observation with these spacecraft occurred in September 1978 and is shown in **Figure 3**. In **Figure 3a**, well-connected IMP 8 shows a fast rise in intensities and a peak near the time of shock passage, while _Helios 1_ and \(2\), far around the west flank, rise more slowly then join IMP 8 in a reservoir behind the shock on 25 September. Distant _Voyagers_ show a slow SEP rise to a
Figure 2: In **(a)**, intensities of 6–11 MeV protons are compared for _Helios 1, Helios 2_, IMP 8, and _Voyager 2_ during the 1 January 1978 SEP event, while **(b)** shows the spatial configuration of the spacecraft on their initial field lines and stages in the expansion of the CME-driven shock at S1, S2, etc. are sketched. Intensity-time profiles for a full list of energy intervals are shown for **(c)**_Helios 1_, **(d)** IMP 8, and **(e)**_Voyager 2_. MC is a magnetic cloud from the original CME (Burlaga et al. 1981). Onset time of the event is flagged by E6 and shock passage at each spacecraft by H1, H2, I8, and V2 (Reames 2023).
plateau near the time S2 where IMP 8 joins the same intensity once it is no longer constrained by the east flank of the shock. Then at S3, the _western_ flank of the shock strikes field lines that send particles sunward to IMP 8 and outward to _Voyager 2_, then to _Voyager 1_. This second SEP peak is clearly seen at energies above 40 MeV in **Figures 3c** and probably even in **3d**. A "left" and then a "right" from the same shock; at _Voyager_ the two peaks are comparable in size.
Note that the second peak in the IMP 8 data in **Figure 3c** shows significant velocity dispersion corresponding to the \(\sim\)6-AU path inward from the new source (actually a larger delay than in the first peak) while none is seen at Voyager 2 (**Figure 3d**) near this source. Also the intensities at IMP 8 and Voyager are quite similar in the second peak; any new injection from the Sun would have produced a huge difference in intensities like that seen in the first peak because of the great difference in radial distances since IMP 8 and the Voyagers seem to be on similar field lines. At the second peak, the shock has filled these field lines, forming a reservoir with similar intensities at IMP and Voyager.
Another parameter that can depend upon solar longitude is the solar particle release (SPR) time derived from velocity dispersion (Tylka et al. 2003; Reames 2009a, 2009b). If the first particles of each energy to arrive at the spacecraft have been released at nearly a single time, the SPR time, and
Figure 3: **(a)** intensities of 3-6 MeV protons are compared for IMP 8 (**blue**), _Helios 1_ (**green**), _Helios 2_ (**yellow**), _Voyager 1_ (**red**), and _Voyager 2_ (**violet**) in the 23 September 1978 SEP event, while **(b)** maps the configuration of the spacecraft on their initial field lines and shows the expansion of a CME-driven shock at S1, S2, and S3. Onset time of the event is flagged as W50 and shock passage at each spacecraft as H1, H2, I8, V1, and V2. In **(b)**, the **western** flank of the shock S3 intercepts the **blue** and **red** fields, where arrows direct particles accelerated sunward to IMP 8, then outward to _Voyager 2_, respectively, and later to _Voyager 1_. Intensity-time profiles for a full list of energy intervals are shown for **(c)** IMP 8, and **(d)**_Voyager 1_ (Reames 2023).
have scattered little, their travel time will be \(dt=L/v\), where \(L\) is the field-line length from source to observer and \(v\) is the particle velocity. Plotting the onset times vs. \(v^{-1}\) will yield a linear fit with slope \(L\) and intercept at the SPR time. **Figure 4b** shows such a plot from Reames and Lal (2010) for the spacecraft distributed as shown in **Figure 4a** during the GLE of 22 November 1977. The parabolic fit for the height in **Figure 4d**, not from this event, is a fit of 26 GLEs observed from Earth (Reames 2009b); presumably it is a first-order correction for weaker, more slowly evolving, shock flanks. Gopalswamy et al. (2013) fit the SPR height for GLEs at Earth directly to the source longitude (uncorrected for foot-point motion). Type II bursts begin near 1.3 R\({}_{\rm S}\) but they certainly need not correspond to the same field line longitude or shock physics as the measured SEPs. Non-relativistic electrons that produce type II bursts do not resonate with Alfven waves like ions and hence are more likely to be accelerated by quasi-perpendicular regions of the shocks.
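As a concrete, purely synthetic illustration of this velocity-dispersion analysis (the numbers below are invented, not measurements from the 22 November 1977 event), the sketch generates onset times for an assumed path length of 1.2 AU and recovers \(L\) and the SPR time from a straight-line fit of onset time against \(1/v\):

```python
# Synthetic example of a solar-particle-release (SPR) fit: onset time vs. 1/v
# is a straight line with slope = path length L and intercept = SPR time.
import numpy as np

c, AU = 2.99792458e8, 1.495978707e11     # m/s, m
m_p_MeV = 938.272                        # proton rest energy [MeV]

def beta(T_MeV):
    """v/c for protons of kinetic energy T_MeV."""
    gamma = 1.0 + T_MeV / m_p_MeV
    return np.sqrt(1.0 - 1.0 / gamma**2)

T = np.array([20.0, 40.0, 80.0, 160.0, 320.0])           # energy channels [MeV]
t_onset = 1.2 * AU / (beta(T) * c) \
          + np.array([30.0, -20.0, 10.0, 0.0, -10.0])    # fake onsets + "noise" [s]

inv_v = 1.0 / (beta(T) * c)
L, SPR = np.polyfit(inv_v, t_onset, 1)                   # slope, intercept
print(f"path length = {L / AU:.2f} AU, SPR time offset = {SPR:+.0f} s")
```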
Thus _Helios_ allowed spatial comparisons of the beginnings of events, their evolution and ESP events at the middle, and their reservoirs at the end (Reames et al. 1997), noted earlier.
### STEREO, with _Wind_, ACE, and SOHO
The launch of _Wind_ in November 1994 began a new era in SEP coverage from Earth and it was later joined by SOHO and ACE adding different capabilities. This coverage still exists in 2023. STEREO ahead (A) and behind (B) were launched together in late 2006 at the beginning of solar minimum. By September 2012 they had reached \(\pm\)120\({}^{\rm o}\) longitude, providing three equally-spaced observing points around the Sun, optimal coverage for finding the most extensive SEP events.
With STEREO it has become even clearer that SEP events can be quite extensive. For \(>25\) MeV proton events, Richardson et al. (2014) found 17% spanned three spacecraft, 34% two spacecraft, 36% one spacecraft, with 13% unclear. Studying the abundances of H, He, O, and Fe at 0.3, 1, and 10 MeV amu\({}^{-1}\), Cohen et al. (2017) found only 10 three-spacecraft events, out of 41.
Figure 4: **(a)** shows the distribution of spacecraft during the 22 November 1977 SEP event at W40, **(b)** shows plots and least-square fits of the particle onset times vs. \(v^{-1}\) at available energies for several spacecraft, **(c)** shows the time delay and **(d)** shows the corresponding shock height of the SPR time vs. solar source longitude minus the footpoint longitude. Data from _Helios 1_ are limited (Reames and Lal 2010). The parabola in **(d)**, from Reames (2009b), may just be a first-order correction for the slowing of the shock on its flanks.
Most of the studies of particle distributions with STEREO have involved Gaussian fits of the intensity peaks or fluences at three longitudes. Of course, three points determine a parabola and a Gaussian is a parabola in logarithmic space, so Gaussians tend to fit the data very well. Xie et al. (2019) studied 19 - 30 MeV protons in 28 events finding \(\sigma=39^{\circ}\pm 6.8^{\circ}\). Paassilta et al. (2018) compiled a list of 46 wide-longitude events above 55 MeV, of which seven were suitable, averaging \(\sigma=43.6\pm 8.3^{\circ}\), and for 14 events with E \(>80\) MeV, de Nolfo et al. (2019) found an average \(\sigma\approx 41^{\circ}\). Lario et al. (2013) studied the 15-40 and 25-53 MeV proton peak intensities and found \(\sigma\approx 45\pm 2^{\circ}\) for both proton energy ranges. Cohen et al. (2017) found average three-spacecraft events centered at \(\approx 22\pm 4^{\circ}\) west of the flare site and \(\approx 43\pm 1^{\circ}\) wide; they found no dependence of the width on the charge-to-mass ratio _Q/A_ of the elements. This lack of dependence on _A/Q_ seems to argue against lateral diffusive transport. Kahler et al. (2023) fit the data of 20 MeV protons to hourly Gaussians, showing substantial broadening of the distribution with time, especially on the western flank where the shock expanded across new spiral field lines. However, all of these similar widths and parameters only apply to the largest \(\sim\)20% of events that span three spacecraft. We have no widths from the one-spacecraft third of the events. For electron events, Klassen et al. (2016) examined events for closely spaced (\(<\)72\({}^{\circ}\)) STEREO spacecraft and found they were not well fit by Gaussians, while Dresing et al. (2018) pointed out that peak intensities used for the Gaussian fits may not represent the real spatio-temporal intensity distribution as the intensity peaks may have been measured at different times.
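Since three intensities at three longitudes determine the log-space parabola exactly, the Gaussian parameters follow in closed form. The sketch below uses invented numbers (not data from any of the studies above) to show the conversion from the parabola coefficients to centroid and width:

```python
# Three-spacecraft Gaussian fit: ln I = a*phi**2 + b*phi + c is exact through
# three points, and for a < 0 it is a Gaussian with sigma = sqrt(-1/(2a)) and
# centroid phi0 = -b/(2a).  Intensities below are invented, not event data.
import numpy as np

phi = np.array([-120.0, 0.0, 120.0])      # spacecraft longitudes [deg]
I   = np.array([20.0, 400.0, 5.0])        # peak intensities (arbitrary units)

a, b, c = np.polyfit(phi, np.log(I), 2)   # exact parabola through the 3 points
if a < 0:
    sigma = np.sqrt(-1.0 / (2.0 * a))
    phi0 = -b / (2.0 * a)
    I0 = np.exp(c - b**2 / (4.0 * a))
    print(f"centroid = {phi0:.1f} deg, sigma = {sigma:.1f} deg, peak = {I0:.0f}")
else:
    print("log-intensities not concave; these points are not Gaussian-like")
```

With these invented numbers the recovered width comes out near 44\({}^{\circ}\), close to the typical values quoted above, but the exercise also illustrates why three points cannot test whether the distribution is really Gaussian.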
Some of the space-time coupling in SEP distributions is illustrated by the event shown in **Figure 5**. Here the shock itself is seen at each spacecraft, yet the slight onset delay at _Wind_ suggests it misses the base of that field line. Both STEREO spacecraft show fast intensity increases, followed by early declines, but the intensity at _Wind_ peaks well after the shock passage. How do we distinguish variations of space and time? When does a Gaussian spatial distribution apply? Is the Gaussian formed by the peaks, or by the fluences; is it spatial or also temporal? Even hourly Gaussians miss the true behavior late in an event.
Figure 5: shows time variations of \(\sim\)20 MeV protons in the large three-spacecraft SEP event of 25 February 2014 on the **left** and the map of spacecraft configuration on the **right**. Vertical time flags mark the event onset from S12E77 and the shock passage times at each spacecraft. In the map, circles follow expansion of a spherical shock wave, and initial Parker spirals connect the Sun with each spacecraft; these field lines would be distorted as the shock passes. Pink field lines measure solar rotation (see text).
The two pink field lines labeled 1 and 2 in the map in **Figure 5** will be carried, from their initial positions shown, to the red location of _Wind_ by the time the shock reaches 1 or 2 solar radii, respectively, because of solar rotation. Hence the time profile we see at _Wind_ is greatly enhanced behind the shock by SEPs swept in on field lines that were much more centrally located initially. Meanwhile, STEREO A is rapidly acquiring field lines with decreased intensities of SEPs that have rotated from behind the Sun, and STEREO B eventually joins it in a suppressed reservoir region late on 2 March. Can SEP models follow this mixture of space-time variations? Time variations longer than about a day need to accommodate the 13\({}^{\circ}\) day\({}^{-1}\) solar rotation. This event evolves over a week.
Note also in **Figure 5** that the accelerating shock wave is seen at all three spacecraft, as is often the case. These shock waves are able to wrap around the Sun, crossing field lines that the particles alone cannot cross. Thus STEREO shows us the way the physics of shock acceleration can easily replace the early confusion of the "birdcage model," "coronal diffusion," and the "fast propagation region" - diffusion from a point source - that once held sway.
### SEP Element Abundances
Spatial distributions of fluences of element abundances were studied extensively by Cohen et al. (2017) using STEREO and ACE data. The Gaussian distributions of H, He, O and Fe were found to be similar in ten three-spacecraft events, suggesting that the widths are independent of rigidity or transport. They found the average three-spacecraft Gaussian distributions to be \(43\pm 1^{\circ}\) wide, although with significant variations. None of these large three-spacecraft SEP events showed any of the enhancements of heavy elements, e.g. Fe/O, typically found in shock-reaccelerated impulsive suprathermal ions of SEP3 events. An analysis of the _A/Q_-dependence of the element abundances in a typical large SEP4 event is shown in **Figure 6**, where power-law fits of flat or suppressed heavy elements extend to include protons (Reames 2020, 2022b).
For each time period, the observed abundances of \(Z\geq 6\) ions are divided by the corresponding coronal abundances and fit to power-laws. Most SEP4 events show flat fits, i.e. abundances the same as coronal, or declining power laws as in Figure 6. The declining power laws may result from reduced scattering of heavier ions that allows them to leak more easily from the acceleration region. SEP3 events have both enhanced protons and power-law heavy-element enhancements that rise with \(A/Q\), reflecting shock acceleration of seed populations of normal ambient coronal ions as well as impulsive suprathermal ions with their characteristic high-\(Z\) enhancement. SEP3 events can be very large. In solar cycle 23, about half of the GLEs were SEP3 events (Reames 2022a) and half SEP4 events. In the weaker cycle 24, when STEREO was available, we find only one of the two-spacecraft events listed by Cohen et al. (2017) that was an SEP3 event (4 August 2011); this is an SEP3 event at _Wind_ (Figure 10 in Reames 2020) and shows similar enhancements at STEREO A, although with poorer statistics. Why are there so few wide SEP3 events? Is it the weak solar cycle or are SEP3 events inherently narrow, perhaps because the primary pools of seed particles are confined?
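The fitting procedure itself is straightforward. The sketch below uses placeholder enhancements and approximate \(A/Q\) values (not the measurements of Figure 6) to fit a single power law in \(A/Q\), whose sign separates SEP3-like from SEP4-like patterns:

```python
# Hedged sketch of the abundance fit: enhancements relative to coronal
# abundances vs. A/Q, fit to a power law in log-log space.  The element list,
# A/Q values, and enhancements are illustrative placeholders only.
import numpy as np

elements = ["C", "O", "Ne", "Mg", "Si", "S", "Fe"]
A_over_Q = np.array([2.0, 2.3, 2.6, 2.5, 2.6, 2.9, 4.0])       # approximate
enhance  = np.array([1.05, 1.00, 0.92, 0.95, 0.90, 0.85, 0.65])

index, ln_amp = np.polyfit(np.log(A_over_Q), np.log(enhance), 1)
print(f"power-law index in A/Q: {index:+.2f}")
# A clearly positive index (enhancements rising with A/Q) points to an
# impulsive-seed (SEP3-like) component; a flat or negative index suggests
# ambient coronal seeds (SEP4-like), as in the declining fits described above.
```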
### Impulsive SEP Events
Figure 6: **(a)** shows a map of the spacecraft distributions during the SEP event of 23 January 2012; **(b)** shows the corresponding intensity-time profiles of 20 MeV protons at each spacecraft. The abundance enhancements, relative to SEP-average (coronal) abundances, for elements noted by \(Z\), are shown vs. \(A/Q\) for the listed time intervals for **(c)** STEREO B, **(d)**_Wind_, and **(e)** STEREO A. Best-fit power laws vs. \(A/Q\) for elements with \(Z>2\) are shown extended down to protons at \(A/Q=1\).
A distinction of impulsive SEP events is that their longitude spread is much more limited than that of gradual events (Reames 1999) but the width of that distribution depends upon the sensitivity of the instruments observing them and has increased somewhat with time (e.g. Reames et al. 2014a). A significant factor in this width is that variations of the solar wind speed vary the longitude of the footpoint of the observer's field line, but there is also variation due to the random walk of the footpoints of the field lines (Jokipii and Parker 1969; Li and Bian 2023) prior to the event.
A search for \({}^{3}\)He-rich events between ISEE 3 and _Helios_ (Reames et al. 1991) found several corresponding events. The well-studied event of 17 May 1979 (e.g. Reames et al. 1985) with \({}^{3}\)He/\({}^{4}\)He \(\geq\) 10 was seen with similar \({}^{3}\)He enhancement by _Helios 1_ near 0.3 AU as shown in **Figure 7** as a sharp spike of very short (\(\sim\)1 hr) duration presumably resulting from reduced scattering. Associated electron trajectories were tracked spatially, using the direction and frequency of the radio type III burst as measured from ISEE 3 (Reames et al. 1991).
The time history for the Kiel Instrument on _Helios 1_ in **Figure 7** shows composite He; isotope ratios were tabulated separately using pulse-height data. Despite differences in He energies at the two spacecraft, both show the event at 0550 UT with \({}^{3}\)He/\({}^{4}\)He \(\approx\) 10 and that at 1700 UT with \({}^{3}\)He/\({}^{4}\)He \(\approx\) 1. In the early STEREO era, before the spacecraft were widely separated, Wiedenbeck et al. (2013) found electrons and ions \(\pm\)20\({}^{\rm o}\) ahead and behind Earth, during an event that occurred during a solar quiet time. The ions showed strong intensity gradients. The event was associated with a weak CME. Also, Klassen et al. (2015) studied an electron beam event early in the STEREO mission.
The discovery of fast CMEs associated with impulsive SEP events (Kahler et al. 2001) suggested that impulsive SEPs could be spread laterally when nearly radial shocks in these SEP2 events distributed particles across spiral field lines. Particles from SEP1 jets without fast shocks would be expected to follow any open field lines from the jet, presumably less widely distributed. Suggestions of greater spreads for SEP2 impulsive events with shocks, based upon current understanding, have not been explored, mainly because the available spacecraft are too widely separated.
Opportunities for multiple measurements of impulsive SEP events are rare. Perhaps the best opportunity is for measurements of the radial variation in SEP scattering between _Parker Solar Probe_ (PSP) or _Solar Orbiter_ and spacecraft near Earth. Are the time profiles of SEPs at PSP near the Sun similar to those of X-ray profiles of the event? To what extent are the jets that release SEPs
Figure 7: The extremely \({}^{3}\)He-rich event of 17 May 1979 is shown by the track of its radio type III burst in the **upper** panel, and time histories of electrons and ions indicated at _Helios 1_ and ISEE 3 in the panels below. Indicated energies are in MeV for electrons and protons, MeV amu\({}^{-1}\) for He. Arrows above the central panel mark two times of event sources. (Reames et al. 1991).
part of more-extensive flaring systems? Magnetic reconnection that opens some field lines must also close others (e.g. see Figure 4 in Reames 2021b), but reconnecting closed field lines with other closed lines would allow no escape.
## 4 Discussion
STEREO has given us an improved sense of the widths of some of the most extensive SEP events, but are the SEP distributions really characterized by nice smooth Gaussians? Are the SPR times or heights really parabolic? During the _Helios_ era, when the _Voyagers_ happened by, a few events showed us greater complexity (e.g. **Figures 2** and **3**). Such events allow us to explore the underlying physics. STEREO also allows 3D modeling of the CME and its shock (e.g. Rouillard et al. 2011, 2012; Kouloumvakos et al. 2019; Zhang et al. 2023) but this is only beginning to be extended to a 3D modeling of the SEP distribution this shock produces. Correlations between SEP peaks and shock properties at the base of the observer's field line (e.g. Kouloumvakos et al 2019) may have reached their limits. SEP peak intensities and times are controlled, and often limited, by transport and are not determined by any single point on the evolving shock; transport and shock contributions both vary with time in complex ways. No single parameter represents an entire SEP event very well.
Kahler's (1982) "big flare syndrome", while expressed for a specific case, should be a general warning that correlations do not imply causality. Increasingly energetic magnetic reconnection events at the Sun can spawn bigger flares, faster CMEs, and larger SEP events. They are all correlated, yet H\(\alpha\) flares do not _cause_ CMEs, or GLEs; ultimately they are all consequences of magnetic reconnection. Later, Kahler (1992) asks "how did we form such a fundamentally incorrect view of the effects of flares after so much observational and theoretical work?" We need clearer resolution of the underlying physics that does connect a cause and its effects.
Gopalswamy et al. (2013) sought to understand possible differences between the GLE of 17 May 2012 and six other large SEP events with similar CME speeds that were not GLEs. This event had a smaller flare (M5.1) than any of the others. A large effect was the small latitudinal distance of the shock-nose from the ecliptic, i.e. the GLE was better connected to Earth. The GeV protons in this GLE were all produced within \(\sim\)8 minutes. This suggests a highly localized region of production might occur soon after shock formation (Ng and Reames 2008) and may differ from the global properties of the CME, discounting correlations. Incidentally, the 17 May 2012 event was the only GLE included in the study by Cohen et al. (2017) and it had too few heavy ions to be studied at three spacecraft; it was not one of the ten three-spacecraft SEP events.
It has also been observed that GLEs are more likely when shocks pass through solar streamers where the higher densities and lower Alfven speeds produce higher Alfvenic Mach numbers (Liu et al. 2023) and regions of higher \(\theta_{\rm{Bn}}\) (e.g. Kong et al. 2017, 2019) that can enhance acceleration.
Intensities of GeV protons only exceed GCRs for short periods and are too weak to be a practical radiation hazard. However, \(\sim\)100 MeV protons are much more numerous and persistent and hence a significant hazard to astronauts outside the Earth's magnetic fields. When an event is near central meridian, intensities of high-energy SEPs can be extended and increased by the ESP event when the nose of the shock passes over us. Some historical examples are shown in **Figure 8**. For western sources we see the greatest effect of the shock nose early, but the shock then weakens toward our longitude and loses strength out to 1 AU. The early SEP intensities at 1 AU are constrained by wave growth which can establish the "streaming limit" early (Reames and Ng 1998, 2010, 2014; Ng et al. 1999, 2003, 2012) while ESP intensities are unbounded. Thus, the spatial distribution of SEP intensities at the shock, i.e. the ESP event, is of both fundamental and practical importance. The SEP
events of October 1989 shown in **Figure 8** are the basis of SEP "storm shelter" radiation shielding requirements for astronauts in missions beyond low Earth orbit (Townsend et al., 2018).
The ESP structure is formed very early in an event when protons streaming away from the shock amplify resonant Alfven waves (Stix, 1992; Melrose, 1980). Waves trap ions of a given rigidity, scattering them back and forth across the shock so they gain velocity on each transit, subsequently amplifying waves of lower \(k\) (longer wavelength) which trap ions of higher \(P\), etc., creating the ESP structure of energetic ions trapped around the shock (Lee, 1983, 2005). At the streaming limit, higher SEP intensities simply grow more waves, trapping more ions back near the shock. In gradual SEP events, the ESP structure always exists. When the shock is near the Sun, the ESP is initially hidden among the same SEPs that form it as they stream away. Of course, the shock we see at 1 AU was much stronger when it began near the Sun. The shock and the ESP event can explain _all_ of the SEPs in the gradual event, early and late. It is a key to the physics of SEP acceleration in these events, whether it emerges at your particular longitude or not. The basic ESP structure will only be exposed at some longitudes in some events.
When the flank of a shock crosses to new field lines, where the early emission is absent, the "naked" ESP event emerges as seen at _Voyager_ in **Figure 2e** or at IMP 8 in **Figure 1e**. The ESP structure persists as the shock moves outward, but the CME speed decreases and decreasing \(B\) shifts the resonance so the highest energy ions preferentially begin to leak away. Earlier peaks of \(\sim\)30 - 100 MeV protons are seen along with the naked ESP events in these figures because they leaked out _after_ the ESP events have crossed to the new field lines.
Generally, the time of the peak intensity is a contest: the initial intensities are bounded at the streaming limit while the ESP peak is unbounded, yet the highest energies in the ESP begin to leak away as the shock slows and moves out to lower \(B\). Understanding the high-energy particle
Figure 8: High-energy proton intensities during historic large SEP events near central meridian can be enhanced and extended in time by their ESP events, even at energies \(>\)100 MeV (a significant radiation hazard). Early intensities are bounded by the “streaming limit” but the ESP peak is not. All of these events are GLEs which often occur near central meridian, allowing the ESP to surface.
acceleration of SEPs is about understanding shock acceleration, wave trapping, and the formation, spatial extent, and persistence of ESP events. The large events in **Figure 8** are relatively rare, and we tend to assume that the same physics can be studied at lower energies. However, the physics changes for those energies where resonant waves dominate - a function of space and time. A few general questions do arise:
1. Comparing **Figure 1c** and **Figure 1e**, how important is \(\theta_{\rm Bn}\) in maintaining the ESP event?
2. The increasing intensity ramp at _Voyager_ in **Figure 2a** involves SEPs leaked from the approaching ESP event. Does it show strengthening of the shock with longitude toward the east or mainly increasing proximity?
3. Are the SEP3 events with enhanced heavy ions limited in longitude? Surely seed-particle populations in an event can change with longitude; where are the events that change from SEP3 to SEP4? Why are STEREO three-spacecraft events all SEP4 events? Half of GLEs are SEP3s.
4. How does the longitude extent of the SEPs compare with that of the shock itself? How does each vary with time?
5. Are GLEs or other high-energy events more limited in longitude? Would they be one- or two-spacecraft events for STEREO? Are broader shocks weaker or less efficient?
We should expect that shocks can accelerate whatever seed population or populations they encounter, wherever they go. We cannot exclude mixtures; we can only distinguish which one dominates at high \(Z\) and thus label an event SEP3 or SEP4.
We are constantly hampered by the correlated mixture of space and time. The footpoint of a field line from Earth lies 50 - 60\({}^{\rm o}\) to our west. As our connection point on the shock scans to the east, that shock also weakens with time. How much is change with longitude; how much is change with time? The only resolution is to measure spatial variations with a scale substantially less than 50\({}^{\rm o}\). By the time the shock arrives at 1 AU, it has mixed SEPs from \(\sim\)50\({}^{\rm o}\) of solar longitude; in longer time intervals solar rotation causes greater mixing.
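The 50\({}^{\rm o}\) - 60\({}^{\rm o}\) figure follows directly from Parker-spiral geometry. A quick check (my own numbers, not from the paper) of the footpoint offset \(\Delta\phi=\Omega_{\odot}r/V_{sw}\) at \(r=1\) AU gives values in that range for typical 400 - 500 km s\({}^{-1}\) solar wind:

```python
# Back-of-the-envelope footpoint offset of an Earth-connected Parker spiral:
# delta_phi = Omega_sun * r / V_sw, evaluated at r = 1 AU.
import numpy as np

AU = 1.495978707e11                        # m
Omega = 2.0 * np.pi / (25.4 * 86400.0)     # solar rotation rate [rad/s]

for V_sw in (350.0, 450.0, 600.0):         # solar wind speed [km/s]
    dphi = np.degrees(Omega * AU / (V_sw * 1e3))
    print(f"V_sw = {V_sw:4.0f} km/s -> footpoint ~ {dphi:4.1f} deg west of Earth")
```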
## 5 What Is Needed Next?
SEP evolution in space and time is complicated. STEREO spacecraft separation from Earth of \(\sim\)120\({}^{\rm o}\) was much too coarse to resolve the SEP-shock evolution we happened to observe in a _Helios_-IMP-_Voyager_ period. What kind of observations would help? Multiple points with better spatial resolution. Consider two primary spacecraft, equipped as STEREO was, with coronagraphs and in situ instruments, which are fixed in Earth's solar orbit at \(\pm\)60\({}^{\rm o}\) from Earth as shown in **Figure 9**. On each side, between each primary spacecraft and Earth are two much smaller spin-stabilized spacecraft (similar to _Wind_), 20\({}^{\rm o}\) apart, each capable of measuring SEPs, magnetic fields, and solar-wind plasma to map SEP events, shocks, and interplanetary CME structures. This configuration presumes that capabilities similar to a primary spacecraft are preexisting near Earth.
The \(\approx\)20\({}^{\rm o}\) spacing would provide at least two measurements between the longitude of a spacecraft and that of its typical solar magnetic footpoint. It would allow meaningful coverage of many spatially small- and moderate-sized events with sampling of variations along the shock for the larger ones. "Smaller" SEPs does _not_ mean weaker. Of course, it would be better to have more-complete coverage, but the configuration in **Figure 9** represents a major improvement in spatial resolution, and reasonable (33%) coverage, at a modest increase in cost and complexity over the dual-spacecraft STEREO mission. A spacing of \(\leq\) 20\({}^{\rm o}\) seems essential and provides a reasonable tradeoff between spacing and coverage. This would provide a map of the shock strength, direction of propagation, and \(\theta_{\rm Bn}\) at up to seven points at 1 AU that could be compared with the coronagraph mapping near the Sun, and could give seven SEP/ESP profiles with differing onset times and intensities. What does the shock front really look like? Is the nose of the shock a hot spot that could produce a localized GLE; for what longitude does the peak intensity arrive with the shock?
Are the highest energies in SEP events limited to short time periods and small spatial intervals as Gopalswamy et al. (2013) conclude? If so, what physics defines those intervals and why do they differ from the apparent STEREO finding of similarly broad Gaussians for most energy bands? Up to some energy, we would expect high SEP intensities to generate enough waves to extend the spatial trapping and the duration of an event (e.g. **Figure 8**), although not at the absolute peak energy where there are yet few resonant waves. These peaks could be highly localized in well-connected regions that become poorly correlated with the average properties of the CME. Also, in some events, particle trapping is increased by the presence of multiple shock waves as in the 14 July 2000 "Bastille Day" GLE (Lepping et al. 2001). To what extent can the SEP energy profile and the peak energy be predicted from an early coronagraph map of the shock strength? We would have an extra day to predict the strength of the ESP event.
## 6 Conflict of Interest
The author declares that this research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## 7 Author Contributions
All work on this article was performed by D.V. Reames.
## 8 Funding
No institutional funding was provided for this work.
Figure 9: A possible multi-spacecraft configuration of the future might include two primary spacecraft, similar to STEREO, and four spacecraft measuring SEPs, plasma, and fields, to study in situ spatial distributions of SEPs, interplanetary CMEs, and shock waves. The spacecraft would maintain their positions relative to Earth for years, spanning at least one solar maximum.
## 9 Acknowledgments
The author thanks Ed Cliver for helpful discussions.
|
2303.09151 | Performance Analysis of Passive Retro-Reflector Based Tracking in
Free-Space Optical Communications with Pointing Errors | In this correspondence, we propose a diversity-achieving retroreflector-based
fine tracking system for free-space optical (FSO) communications. We show that
multiple retroreflectors deployed around the communication telescope at the
aerial vehicle save the payload capacity and enhance the outage performance of
the fine tracking system. Through the analysis of the joint-pointing loss of
the multiple retroreflectors, we derive the ordered moments of the received
power. Our analysis can be further utilized for studies on multiple input
multiple output (MIMO)-FSO. After the moment-based estimation of the received
power distribution, we numerically analyze the outage performance. The greatest
challenge of retroreflector-based FSO communication is a significant decrease
in power. Still, our selected numerical results show that, from an outage
perspective, the proposed method can surpass conventional methods. | Hyung-Joo Moon, Chan-Byoung Chae, Mohamed-Slim Alouini | 2023-03-16T08:28:25Z | http://arxiv.org/abs/2303.09151v1 | Performance Analysis of Passive Retro-Reflector Based Tracking in Free-Space Optical Communications with Pointing Errors
###### Abstract
In this correspondence, we propose a diversity-achieving retroreflector-based fine tracking system for free-space optical (FSO) communications. We show that multiple retroreflectors deployed around the communication telescope at the aerial vehicle save the payload capacity and enhance the outage performance of the fine tracking system. Through the analysis of the joint-pointing loss of the multiple retroreflectors, we derive the ordered moments of the received power. Our analysis can be further utilized for studies on multiple input multiple output (MIMO)-FSO. After the moment-based estimation of the received power distribution, we numerically analyze the outage performance. The greatest challenge of retroreflector-based FSO communication is a significant decrease in power. Still, our selected numerical results show that, from an outage perspective, the proposed method can surpass conventional methods.
Free-space optics, fine tracking, retroreflector, MIMO-FSO.
## I Introduction
For long-distance wireless communications with high capacity, free-space optical (FSO) communication has become one of the most promising communication technologies. Unlike radio-frequency (RF) cellular communication networks, FSO communications are one-to-one due to the high directivity of laser beams. For precise beam pointing in FSO communications, it is therefore imperative to have a pointing, acquisition, and tracking (PAT) system [1, 2]. The PAT system is divided into two steps-coarse pointing and fine tracking [3]. At the initial stage, coarse pointing aims to achieve link availability, and, during the communication, fine tracking maintains the link against mechanical jitters and atmospheric turbulence.
A coarse pointing between the optical ground station (OGS) and the unmanned aerial vehicle (UAV) begins with the transmission of the UAV location information from the UAV to the OGS [3]. Then the OGS transmits a beacon beam that covers the area where the UAV can exist. When the UAV receives the beacon beam, it aligns its pointing to the OGS and transmits a beam back along the incoming beam direction so that the OGS can also receive the beacon beam. When both sides are well-aligned through beacon beam reception, the fine tracking stage begins. During the fine tracking stage, the system requires more precise and fast compensation of pointing errors to keep both transceivers within the field of view. For this reason, quadrant detectors (QDs) and fast steering mirrors (FSMs) are widely used in this stage [4]. Based on the conventional fine tracking method using QD and FSM, we propose a fine tracking method that reduces outage probability and saves the power budget of the UAV.
In conventional fine tracking methods for two-way FSO communications, a beacon transmitter is deployed at both unmanned aerial vehicles (UAVs) and ground stations. In practice, however, the payload and power budget of UAVs are limited. We introduce a fine tracking method that replaces the beacon transmitter at the UAV with multiple corner-cube reflectors (CCRs)-devices that reflect incident light back toward its source-to assist tracking at the ground station [5]. There have been many studies on FSO communications in which a modulated retroreflector (MRR) replaces one side of the conventional FSO transceivers. In [6], the authors analyze outage probability, average bit error rate (BER), and ergodic capacity for MRR-based FSO communications when a nonzero boresight pointing error is assumed. The authors in [7] test (through analysis and simulation) the feasibility of FSO communication using a micro CCR array. Different from previous studies, our proposed method assumes that the deployed CCRs are separated enough to achieve maximum path diversity. Also, we use passive CCRs to reflect a non-modulated beacon signal. Since each of the CCRs at the UAV sends the reflected beam back to the ground station, the received signal power is a sum of the uncorrelated reflected signals. This property allows the system to significantly reduce the link outage by achieving spatial diversity. Additionally, a number of separated micro CCR arrays can replace CCRs for cost and weight reduction. However, we consider classical CCRs to avoid excessive assumptions and maintain mathematical simplicity.
In our proposed method, we base the methodology of the outage-performance analysis on the moment-matching approximation of the probability distribution function (PDF). The product of the uplink and downlink channel fading can be approximated as the \(\alpha\)-\(\mu\) distribution [8] and the sum of the \(\alpha\)-\(\mu\) distributed random variables (RVs) can also be approximated as the \(\alpha\)-\(\mu\) distribution [9, 10]. Because of
this, we approximate the sum power of reflected beams into the \(\alpha\)-\(\mu\) distribution and derive the outage probability with a simple form of a cumulative distribution function (CDF). We further analyze the moment of the pointing-loss effect for the given deployment of a number of CCRs, which can be expanded into the pointing loss of the multiple input multiple output (MIMO)-FSO system.
The rest of this correspondence is organized as follows. In Section II, we introduce the signal model of the proposed retroreflector-based fine tracking system. We then describe the PDF of the pointing loss of an individual CCR. In Section III, we approximate the PDF of the received power at the ground station into the \(\alpha\)-\(\mu\) distribution by the moment-matching method. Through this derivation, we present both exact and approximated moments. In Section IV, we provide some selected simulation results, and we then finally provide our conclusions in Section V.
## II System Model
### _Signal Power Model_
A conventional FSO channel model is as follows [11]:
\[P_{R}=h_{a}h_{\ell}h_{p}P_{\text{T}}, \tag{1}\]
where \(P_{R}\) is a received power at the ground station, \(h_{a}\), \(h_{\ell}\), \(h_{p}\), and \(P_{\text{T}}\) denote channel fading, atmospheric loss, pointing loss, and transmit power at the UAV. Based on (1), we formulate the signal power model for the proposed system model and describe the analytical characteristics of each term.
Assume that multiple CCRs are deployed around the communication telescope at the UAV; the reflected beacon signal power received at the ground station can be modeled as
\[P_{\text{CCR}}=\sum_{i=1}^{M}P_{i}, \tag{2}\]
and the incoming signal power reflected from the \(i\)-th CCR is
\[P_{i}=g_{a,i}g_{\ell}g_{p}\rho f_{a,i}f_{\ell}f_{p,i}P_{\text{GS}}, \tag{3}\]
where each of the parameters on the right-hand side indicates, respectively, downlink fading, downlink atmospheric loss, downlink pointing loss, reflection effect, uplink fading, uplink atmospheric loss, uplink pointing loss, and the transmit power of the ground station [8]. We assume that the fading channels for different CCRs are independent [12] and fading channels of the uplink and downlink for each beam path are correlated. For further mathematical analysis, we substitute each term into the RV or a constant as follows:
\[X=f_{a},Y=g_{a},Z=f_{p},c=g_{\ell}g_{p}\rho f_{\ell}P_{\text{GS}}, \tag{4}\]
\[X_{i}=f_{a,i},Y_{i}=g_{a,i},Z_{i}=f_{p,i}. \tag{5}\]
The parameters \(f_{\ell}\) and \(g_{\ell}\) satisfy the Beer-Lambert law as [13]
\[f_{\ell},g_{\ell}=\exp(-\sigma z), \tag{6}\]
where \(z\) and \(\sigma\) are the propagation distance and the attenuation coefficient, respectively. The size of the CCR determines the beam divergence of the reflected beam. Assuming that the effective reflection area is a circle with a radius of \(a_{\text{Re}}\), the downlink beam divergence angle is determined as \(\theta_{\text{Re}}=1.22\lambda/{a_{\text{Re}}}\), where \(\lambda\) is the wavelength of the optical signal [14]. Therefore, the value of \(g_{p}\) is as follows:
\[g_{p}=2{a_{\text{GS}}^{2}}/(z\theta_{\text{Re}})^{2}, \tag{7}\]
where \({a_{\text{GS}}}\) is a radius of the ground station telescope. Since \(\rho\) and \(P_{\text{GS}}\) are the system parameters, \(c\) in (4) is a constant and can be expressed as
\[c=\frac{1.34\,{a_{\text{GS}}^{2}}{a_{\text{Re}}^{2}}}{z^{2}\lambda^{2}}\exp(- 2\sigma z). \tag{8}\]
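For concreteness, the constant \(c\) can be evaluated directly from the system geometry. The short sketch below computes \(\theta_{\text{Re}}\), \(g_{p}\), the Beer-Lambert losses, and \(c\) from (6)-(8); all numerical parameter values (wavelength, link distance, radii, attenuation coefficient) are illustrative assumptions rather than the settings used later in Table I.

```python
import numpy as np

# Illustrative (assumed) system parameters; not the values of Table I.
lam   = 1550e-9   # optical wavelength [m]
z     = 5e3       # propagation distance [m]
a_Re  = 0.05      # CCR effective-reflection-area radius [m]
a_GS  = 0.2       # ground-station telescope radius [m]
sigma = 1e-4      # attenuation coefficient [1/m]

theta_Re = 1.22 * lam / a_Re                  # reflected-beam divergence angle
g_p = 2 * a_GS**2 / (z * theta_Re)**2         # downlink pointing loss, Eq. (7)
f_l = g_l = np.exp(-sigma * z)                # Beer-Lambert losses, Eq. (6)
c = 1.34 * a_GS**2 * a_Re**2 / (z**2 * lam**2) * np.exp(-2 * sigma * z)  # Eq. (8)
print(f"theta_Re={theta_Re:.2e} rad, g_p={g_p:.3e}, c={c:.3e}")
```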
Both \(X_{i}\) and \(Y_{i}\) follow the same Gamma-Gamma distribution for each \(i\) and are correlated due to the channel reciprocity [15]. Since Gamma-Gamma RV is a product of two uncorrelated Gamma RVs, the correlation coefficient is defined at this level. We can decompose the product of the uplink and downlink fading channel into four Gamma variables as
\[U=XY=X^{(\alpha_{1})}X^{(\beta_{1})}\cdot Y^{(\alpha_{2})}Y^{(\beta_{2})}, \tag{9}\]
where \(\alpha_{1}\), \(\beta_{1}\), \(\alpha_{2}\), and \(\beta_{2}\) are the unique parameters that determine the respective Gamma distributions. Because uplink and downlink share the same path at negligible time intervals, \(\alpha_{1}=\alpha_{2}\) and \(\beta_{1}=\beta_{2}\) can be assumed. Thus, the marginal PDFs of \(X^{(\alpha_{1})}\) and \(Y^{(\alpha_{2})}\) are the same and can be expressed as follows:
\[f_{\alpha_{1}}(X^{(\alpha_{1})})=\frac{\alpha_{1}(\alpha_{1}x)^{\alpha_{1}-1} }{\Gamma(\alpha_{1})}e^{-\alpha_{1}x}, \tag{10}\]
where \(\Gamma(\cdot)\) is the Gamma function and the shape parameter and scale parameter are \(\alpha_{1}\) and \(1/\alpha_{1}\), respectively. Similarly, the marginal PDF of \(X^{(\beta_{1})}\) and \(Y^{(\beta_{2})}\) is
\[f_{\beta_{1}}(X^{(\beta_{1})})=\frac{\beta_{1}(\beta_{1}x)^{\beta_{1}-1}}{ \Gamma(\beta_{1})}e^{-\beta_{1}x}, \tag{11}\]
where \(\beta_{1}\) and \(1/\beta_{1}\) are the parameters. Then the channel reciprocity is expressed by the channel correlation as \(\rho_{\alpha}=\mathbf{corr}(X^{(\alpha_{1})},Y^{(\alpha_{2})})\) and \(\rho_{\beta}=\mathbf{corr}(X^{(\beta_{1})},Y^{(\beta_{2})})\). As each of the fading channels is indexed as \(U_{i}=X_{i}Y_{i}\), the entire
Fig. 1: In interpreting the joint-pointing loss, one must consider the deployment of CCRs on the beam footprint plane.
randomness of \(P_{\text{CCR}}\) can be described with the following RV:
\[S=\sum_{i=1}^{M}U_{i}Z_{i}=\frac{P_{\text{CCR}}}{c}. \tag{12}\]
The rest of the channel parameters are included in \(c\) as (4), which is a constant for every single CCR.
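As a quick illustration of the correlated Gamma-Gamma model of (9)-(11), the sketch below draws reciprocity-correlated uplink/downlink fading pairs with a standard trivariate-reduction construction (two Gamma variables sharing a common Gamma component), which gives unit-mean Gamma marginals with Pearson correlation \(\rho\). The parameter values and the choice of this particular construction are our assumptions; the empirical moments of \(U=XY\) can be checked against Eq. (24) below.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_gamma_pair(shape, rho, size, rng):
    """Trivariate-reduction construction of two unit-mean Gamma RVs
    with shape parameter `shape` and Pearson correlation `rho`."""
    g0 = rng.gamma(rho * shape, 1.0 / shape, size)
    g1 = rng.gamma((1 - rho) * shape, 1.0 / shape, size)
    g2 = rng.gamma((1 - rho) * shape, 1.0 / shape, size)
    return g0 + g1, g0 + g2

alpha1, beta1 = 4.0, 2.0          # assumed Gamma-Gamma parameters (illustrative)
rho_a, rho_b  = 0.9, 0.9          # assumed uplink/downlink reciprocity correlations
N = 200_000

Xa, Ya = correlated_gamma_pair(alpha1, rho_a, N, rng)
Xb, Yb = correlated_gamma_pair(beta1,  rho_b, N, rng)
X, Y = Xa * Xb, Ya * Yb           # correlated Gamma-Gamma uplink/downlink fading, Eq. (9)
U = X * Y
print("E[U]   =", U.mean())
print("E[U^2] =", (U**2).mean())  # compare with Eq. (24) for n = 2
```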
### _PDF of \(Z_{i}\)_
As CCRs are distributed around the communication telescope (as shown in Fig. 1), when analyzing the pointing loss \(Z_{i}\), each CCR has a given boresight error. This can be described as the following system model. We define the position of the communication telescope as an origin of the two-dimensional coordinate plane. Then, the location of the CCR, beam displacement from the center point, and the superposition of two vectors can be defined, respectively, as follows:
\[\mathbf{s}_{i}=[s_{i,x},s_{i,y}]^{T},\mathbf{d}=[d_{x},d_{y}]^{T},\mathbf{r}_{ i}=[r_{i,x},r_{i,y}]^{T}. \tag{13}\]
Assuming that both the incident beam and reflected beam are a Gaussian beam at the far field (see [16, Sec. 4.5.2]), we arrive at
\[Z_{i}(\mathbf{r}_{i};w)=A_{0}\exp{\left(-\frac{2|\mathbf{r}_{i}|^{2}}{w^{2}} \right)}, \tag{14}\]
where \(w\) is a beamwidth, which follows \(w=z\theta_{\text{GS}}\) for the uplink beam divergence angle \(\theta_{\text{GS}}\) and \(A_{0}=2a_{\mathbf{Re}}^{2}/w^{2}\)[11]. Since \(\mathbf{d}\) is a beam displacement caused by the residual angle jitter of the fine tracking system, it follows a zero-mean multivariate normal distribution with the covariance matrix of \(\Sigma_{\mathbf{r}}=\text{diag}(\sigma_{s}^{2},\sigma_{s}^{2})\). Thus, the PDF of \(\mathbf{r}_{i}\) is
\[f_{\mathbf{r}_{i}}(\mathbf{r})=\frac{1}{2\pi\sigma_{s}^{2}}\exp{\left(-\frac{ 1}{2}(\mathbf{r}-\mathbf{s}_{i})^{T}\Sigma_{\mathbf{r}}^{-1}(\mathbf{r}- \mathbf{s}_{i})\right)}, \tag{15}\]
which then results in the following PDF [17]:
\[f_{Z_{i}}(Z)=\frac{w^{2}}{4\sigma_{s}^{2}}\cdot\frac{1}{Z}\left(\frac{Z}{A_{0}}\right)^{\frac{w^{2}}{4\sigma_{s}^{2}}}e^{-\frac{s_{i}^{2}}{2\sigma_{s}^{2}}}I_{0}\left(\frac{s_{i}}{\sigma_{s}^{2}}\sqrt{-\frac{w^{2}}{2}\ln\frac{Z}{A_{0}}}\right),\qquad 0\leq Z\leq A_{0}, \tag{16}\]
where \(s_{i}=|\mathbf{s}_{i}|\) and \(I_{0}(\cdot)\) is a modified Bessel function of the first kind of order zero.
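The pointing-loss model of (13)-(16) is easy to verify numerically: drawing the jitter \(\mathbf{d}\) from the zero-mean Gaussian of (15) and evaluating (14) should reproduce the PDF in (16). The sketch below compares the Monte-Carlo mean of \(Z_{i}\) with the mean computed from (16); the beamwidth, jitter, and CCR offset values are illustrative assumptions.

```python
import numpy as np
from scipy.special import i0

rng = np.random.default_rng(1)

# Illustrative (assumed) geometry.
w, sigma_s, A0 = 10.0, 2.0, 1.0
s_vec = np.array([1.5, 0.5])          # CCR offset s_i from the beam centre

# Monte-Carlo draw of Z_i = A0 exp(-2 |s_i + d|^2 / w^2), Eq. (14).
d = rng.normal(0.0, sigma_s, size=(500_000, 2))
Z = A0 * np.exp(-2.0 * np.sum((s_vec + d) ** 2, axis=1) / w**2)

# PDF of Eq. (16) on a grid, and its mean.
z = np.linspace(1e-4, A0 - 1e-4, 2000)
g2 = w**2 / (4 * sigma_s**2)
s = np.linalg.norm(s_vec)
pdf = (g2 / z) * (z / A0) ** g2 * np.exp(-s**2 / (2 * sigma_s**2)) \
      * i0(s / sigma_s**2 * np.sqrt(-w**2 / 2 * np.log(z / A0)))
mean_analytic = np.sum(z * pdf) * (z[1] - z[0])
print(Z.mean(), mean_analytic)
```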
## III Outage Probability of Retroreflector Based Fine Tracking
According to the system model, the outage probability of the received power can be defined as \(\mathbf{Prob}[P_{\text{CCR}}<P_{\text{th}}]=\mathbf{Prob}[S<P_{\text{th}}/c]\). The RV \(S\) is very complex, so deriving its exact distribution is practically intractable. Hence, in this section, we derive the moments of \(S\) and approximate its PDF by the \(\alpha\)-\(\mu\) distribution using the moment-matching method.
### _Moment Matching_
The PDF of the \(\alpha\)-\(\mu\) RV \(R\) is [18]
\[f_{R}(r)=\frac{\alpha\mu^{\mu}r^{\alpha\mu-1}}{\hat{r}^{\alpha\mu}\Gamma(\mu)} \exp{\left(-\mu\frac{r^{\alpha}}{\hat{r}^{\alpha}}\right)}, \tag{17}\]
where \(\alpha>0\), \(\mu=E[R^{\alpha}]^{2}/\text{Var}[R^{\alpha}]\), and \(\hat{r}=E[R^{\alpha}]^{\frac{1}{\alpha}}\). Its CDF is given by
\[F_{R}(r)=\frac{\Gamma(\mu,\mu r^{\alpha}/\hat{r}^{\alpha})}{\Gamma(\mu)}, \tag{18}\]
where \(\Gamma(z,y)=\int_{0}^{y}t^{z-1}\exp(-t)\,dt\) is the lower incomplete Gamma function. To approximate \(S\) by \(R\), we use the \(1\)st-, \(2\)nd-, and \(4\)th-order moments of the two RVs for the moment-matching method. The \(k\)th-order moment of \(R\) is [9]
\[E[R^{k}]=\hat{r}^{k}\frac{\Gamma(\mu+k/\alpha)}{\mu^{k/\alpha}\Gamma(\mu)}. \tag{19}\]
The reduced form of the moment-based estimators for \(\alpha,\mu\), and \(\hat{r}\) are as follows:
\[\frac{\Gamma^{2}(\mu+1/\alpha)}{\Gamma(\mu)\Gamma(\mu+2/\alpha)- \Gamma^{2}(\mu+1/\alpha)}=\frac{E^{2}[S]}{E[S^{2}]-E^{2}[S]}, \tag{20}\] \[\frac{\Gamma^{2}(\mu+2/\alpha)}{\Gamma(\mu)\Gamma(\mu+4/\alpha) -\Gamma^{2}(\mu+2/\alpha)}=\frac{E^{2}[S^{2}]}{E[S^{4}]-E^{2}[S^{2}]},\] (21) \[\hat{r}=\frac{\mu^{1/\alpha}\Gamma(\mu)E[S]}{\Gamma(\mu+1/\alpha)}. \tag{22}\]
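Equations (20)-(22) have no closed-form solution, so in practice they are solved numerically once the required moments of \(S\) are available (the derivation of those moments follows below). The paper later uses MATLAB's _fsolve_ for this step; the sketch below uses the SciPy equivalent. The initial guess, the synthetic test parameters, and the outage threshold in the last line are assumptions for illustration, and convergence of the root finder depends on the starting point.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import gamma as G, gammainc

def alpha_mu_from_moments(m1, m2, m4):
    """Moment-based estimation of (alpha, mu, r_hat) from E[S], E[S^2], E[S^4],
    i.e. a numerical solution of Eqs. (20)-(22)."""
    rhs1 = m1**2 / (m2 - m1**2)
    rhs2 = m2**2 / (m4 - m2**2)

    def eqs(x):
        a, mu = x
        f1 = G(mu + 1/a)**2 / (G(mu) * G(mu + 2/a) - G(mu + 1/a)**2) - rhs1
        f2 = G(mu + 2/a)**2 / (G(mu) * G(mu + 4/a) - G(mu + 2/a)**2) - rhs2
        return [f1, f2]

    a, mu = fsolve(eqs, x0=[1.0, 1.0])             # initial guess is an assumption
    r_hat = mu**(1/a) * G(mu) * m1 / G(mu + 1/a)   # Eq. (22)
    return a, mu, r_hat

# Sanity check on synthetic alpha-mu samples: R = r_hat * W^(1/alpha), W ~ Gamma(mu, 1/mu).
rng = np.random.default_rng(2)
a_t, mu_t, r_t = 1.5, 2.0, 1.0
R = r_t * rng.gamma(mu_t, 1/mu_t, 1_000_000) ** (1/a_t)
a, mu, r_hat = alpha_mu_from_moments(R.mean(), (R**2).mean(), (R**4).mean())
print(a, mu, r_hat)
print(gammainc(mu, mu * (0.5 / r_hat)**a))   # CDF value F_R(0.5) from Eq. (18)
```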
In order to solve (20), (21), and (22), we then have to derive \(1\)st-, \(2\)nd-, and \(4\)th-order moments of \(S\). The \(n_{0}\)th-order moment of \(S\) can be developed as
\[E[S^{n_{0}}]=\sum_{n_{1}=0}^{n_{0}}\sum_{n_{2}=0}^{n_{1}}\cdots\sum_{n_{M-1}=0}^{n_{M-2}}{n_{0}\choose n_{1}}{n_{1}\choose n_{2}}\cdots{n_{M-2}\choose n_{M-1}}\cdot E[U_{1}^{n_{0}-n_{1}}]E[U_{2}^{n_{1}-n_{2}}]\cdots E[U_{M}^{n_{M-1}}]\cdot E[Z_{1}^{n_{0}-n_{1}}Z_{2}^{n_{1}-n_{2}}\cdots Z_{M}^{n_{M-1}}] \tag{23}\]
from (12). By (9), we can express the ordered moments of \(U\) as follows [19]:
\[E[U^{n}]=\frac{\Gamma(\alpha_{1}+n)^{2}\Gamma(\beta_{1}+n)^{2}}{\Gamma(\alpha_{1})^{2}\Gamma(\beta_{1})^{2}}(\alpha_{1}\beta_{1})^{-2n}\,{}_{2}F_{1}(-n,-n;\alpha_{1};\rho_{\alpha})\,{}_{2}F_{1}(-n,-n;\beta_{1};\rho_{\beta}), \tag{24}\]
where \({}_{p}F_{q}(\cdot)\) is the generalized hypergeometric function. To calculate the joint-ordered moments of \(Z_{i}\)s, we derive the exact and approximated form of \(E[Z_{1}^{n_{0}-n_{1}}Z_{2}^{n_{1}-n_{2}}\cdots Z_{M}^{n_{M-1}}]\). For convenience, we transform the formula as follows:
\[E[Z_{1}^{n_{0}-n_{1}}Z_{2}^{n_{1}-n_{2}}\cdots Z_{M}^{n_{M-1}}]=E[Z_{m_{1}}Z_{m_{2} }\cdots Z_{m_{n_{0}}}], \tag{25}\]
where \(m_{1}=\cdots=m_{n_{M-1}}=M\), \(m_{n_{M-1}+1}=\cdots=m_{n_{M-2}}=M-1\), \(\cdots\), \(m_{n_{1}+1}=\cdots=m_{n_{0}}=1\).
Starting from the following equation:
\[E[Z_{m_{1}}Z_{m_{2}}\cdots Z_{m_{n_{0}}}]=\int_{0}^{2\pi}\int_{0}^{\infty}\prod_{i= 1}^{n_{0}}Z_{m_{i}}\cdot\frac{\delta}{2\pi\sigma_{s}^{2}}e^{-\frac{\delta^{2}}{2 \sigma_{s}^{2}}}\,d\delta\,d\theta, \tag{26}\]
where \(\delta=|\mathbf{d}|\) and \(\theta=\arg(\mathbf{d})\), we derive the exact moment including an integral operation and the approximated moment including combinatory sums of polynomials.
### _Exact Moment_
**Theorem 1**: _The exact form of (26) can be derived as_
\[E[Z_{m_{1}}Z_{m_{2}}\cdots Z_{m_{n_{0}}}]=A_{0}^{n_{0}}\,e^{-\sum_{i=1}^{n_{0}}\frac{2s_{m_{i}}^{2}}{w^{2}}}\int_{0}^{\infty}e^{-\left(\frac{2n_{0}}{w^{2}}+\frac{1}{2\sigma_{s}^{2}}\right)\delta^{2}}\frac{\delta}{\sigma_{s}^{2}}I_{0}(K\delta)\,d\delta, \tag{27}\]
_where \(K=\sqrt{\left(\sum_{i=1}^{n_{0}}\frac{4s_{m_{i}}\sin\phi_{m_{i}}}{w^{2}} \right)^{2}+\left(\sum_{i=1}^{n_{0}}\frac{4s_{m_{i}}\cos\phi_{m_{i}}}{w^{2}} \right)^{2}}\) and \(\phi_{i}=\arg(\mathbf{s}_{i})\)._
_Proof:_ See Appendix A. \(\blacksquare\)
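Theorem 1 reduces the joint moment to a one-dimensional integral, which makes it straightforward to cross-check against a direct Monte-Carlo simulation of (14)-(15). The sketch below does this for \(E[Z_{1}Z_{2}]\); the CCR offsets, beamwidth, and jitter values are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.special import i0
from scipy.integrate import quad

rng = np.random.default_rng(3)

# Illustrative (assumed) geometry: two CCR offsets, beamwidth, jitter.
w, sigma_s, A0 = 10.0, 2.0, 1.0
S = np.array([[1.5, 0.5], [-1.0, 1.0]])     # s_1, s_2

# Monte-Carlo estimate of E[Z_1 Z_2].
d = rng.normal(0.0, sigma_s, size=(400_000, 2))
Z = [A0 * np.exp(-2 * np.sum((s + d) ** 2, axis=1) / w**2) for s in S]
mc = np.mean(Z[0] * Z[1])

# Theorem 1 (Eq. 27) evaluated by one-dimensional quadrature.
n0 = len(S)
s = np.linalg.norm(S, axis=1)
phi = np.arctan2(S[:, 1], S[:, 0])
K = np.hypot(np.sum(4 * s * np.sin(phi)) / w**2, np.sum(4 * s * np.cos(phi)) / w**2)
pref = A0**n0 * np.exp(-np.sum(2 * s**2) / w**2)
integrand = lambda de: np.exp(-(2*n0/w**2 + 1/(2*sigma_s**2)) * de**2) * de / sigma_s**2 * i0(K * de)
print(mc, pref * quad(integrand, 0, np.inf)[0])
```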
### _Approximated Moment_
**Theorem 2**: _The approximated form of (26) can be derived as_
\[E[Z_{m_{1}}Z_{m_{2}}\cdots Z_{m_{n_{0}}}] \tag{28}\] \[=\frac{A_{0}^{n_{0}}}{2\pi}e^{-\sum_{i=1}^{n_{0}}\frac{2s_{m_{i}} ^{2}}{w^{2}}}\sum_{\nu=1}^{n_{0}}\mu_{\sigma_{s}}^{(2\nu)}P_{n_{0}}^{(2\nu)}( m_{1},m_{2},\cdots,m_{n_{0}}),\]
_where \(P_{n_{0}}^{(2\nu)}(m_{1},m_{2},\cdots,m_{n_{0}})\) can be developed as (29) for \(n_{0}\leq 4\), \(\phi_{i}=\arg(\mathbf{s}_{i})\), and \(\mathcal{M}=\{1,2,\cdots,n_{0}\}\). A symbol \(\mu_{\sigma_{s}}^{(2\nu)}\) is a \(2\nu\)th-moment of the Rayleigh distribution and has a value of_
\[\mu_{\sigma_{s}}^{(2\nu)}=2^{\nu}\nu!\,\sigma_{s}^{2\nu}. \tag{30}\]
_A function \(\mathcal{C}(\cdot)\) is a definite integral of a product of cosine functions and can be organized into the sum of cosine functions as_
\[\mathcal{C}(\eta_{1},\cdots,\eta_{2\ell})=\int_{0}^{2\pi}\prod_{i=1}^{2\ell}\cos(\theta-\eta_{i})\,d\theta=\frac{2\pi}{2^{2\ell}(\ell!)^{2}}\sum_{\begin{subarray}{c}(k_{1},\ldots,k_{2\ell})\in\mathrm{Sym}(\mathcal{Z})\\ \forall k_{p}\neq k_{q}\end{subarray}}\cos\Bigg(\sum_{j=1}^{\ell}\big(\eta_{k_{j}}-\eta_{k_{\ell+j}}\big)\Bigg), \tag{31}\]
_where \(\mathcal{Z}=\{1,2,\cdots,2\ell\}\). \(\blacksquare\)_
_Proof:_ See Appendix B. \(\blacksquare\)
## IV Numerical Results
In this section, we first discuss the implementation issues and the simulation parameter settings. Then, we show numerical results of the outage probability during the fine tracking stage. Table I lists the general simulation parameter values used throughout this section. The link distance in the simulation is \(5\) km, which can be considered the altitude of the UAV2. For the proposed method, the link distance affects the received signal power through the atmospheric loss and free-space path loss (in (8) and (14), respectively) twice, once for the uplink and once for the downlink. However, for the conventional method, the link distance only affects the downlink channels. Thus, a decreased link distance is always more advantageous to the proposed method than to the conventional one. For this reason, the proposed method will perform better than the following outage results for UAVs below the altitude of \(5\) km.
Footnote 2: The vertical link distance of \(5\) km is grounded to the airspace Class E in the United States, an altitude of \(370\) m to \(5500\) m. Through the simulations, we show that the proposed method is applicable to UAVs at the highest altitude of the airspace Class E and below.
The radius of the CCRs is set to \(5\) cm, which is generally a larger size than most commercial passive CCRs. Considering the weight and size of the CCRs, we assume the blimp UAV to ensure sufficient CCR spacing and large payload capacity. That being said, the system providers can take advantage of the decreased operational altitude by launching smaller CCRs, which will considerably reduce the payload weight and operating costs. In this case, smaller UAVs, such as rotary-wing drones, can also carry multiple CCRs to apply our
Fig. 3: Relative error of the moments of \(S\) with respect to the ratio of \(\sigma_{s}\) to \(w\) with \(M=4\) for different order of moments.
Fig. 2: Outage probability of the proposed fine tracking system in weak turbulence channels with \(M=4\) and \(w\approx 8.5\) for different values of \(\sigma_{s}\).
method. As noted in Sec. II-A, we assume that all the CCRs and the communication telescope are at least \(\sqrt{2}\) m apart to preserve the channel independence3[12]. CCRs in a linear deployment are aligned at equal intervals along the axis, and those in a circular deployment are placed at equal intervals on the circumference of a circle of radius \(\sqrt{2}\) m. The moment-based parameter estimation of (20), (21), and (22) is calculated by the _fsolve_ function in MATLAB. Moreover, the outage probability is obtained by (18), with the estimated parameters.
Footnote 3: According to [12], atmospheric correlation length is about \(59\) cm for the link distance of \(5\) km and weak turbulence conditions. The weak turbulence is expressed by the refractive index structure constant, as \(C_{n}^{2}=10^{-17}\). In the simulation, the minimum CCR spacing is \(\sqrt{2}\) m, which is larger than the correlation length.
As shown in Fig. 2, for different \(\sigma_{s}\) values, the analytical results follow the simulation results, due to the joint-pointing loss derived in this paper. In Fig. 3, we show the approximation error of (28), the moment of joint-pointing loss. As a point of
Fig. 4: Comparison between the outage probability of the proposed and conventional fine tracking systems in strong turbulence channel with \(w=10\).
Fig. 5: Outage probability of the proposed fine tracking system in moderate channel with \(w=10\) for different numbers and deployments of CCRs.
comparison with our results, we also offer moments to which the \(1\)st-order Taylor approximation is applied. In Fig. 4, we emphasize the diversity effect of multiple passive CCRs by comparing the outage probability of the proposed system to that of the conventional fine tracking system, where a beacon transmitter is used at aerial vehicles. In this case, we assume that the transmit power is equal to or half of the power at the ground station due to the limitation of the aerial payload. Furthermore, since we derived the joint-pointing loss for the given locations of CCRs, we compare (in Fig. 5) the outage performance of the systems with different CCR deployments around the communication telescope.
## V Conclusion
In this correspondence, we introduced and analyzed a novel fine tracking system that uses multiple passive corner-cube reflectors (CCRs) for spatial diversity and power saving. For the system model in which a number of passive CCRs are distributed around the communication telescope at the aircraft, we formulated a received power model at the ground station. We then derived the exact and approximated moments to approximate the PDF by the \(\alpha\)-\(\mu\) distribution. While a concern has been the low power of the reflected beam, the simulation and analytical results support our argument that multiple passive CCRs can surpass the outage performance of the conventional method.
## Appendix A Proof of Theorem 1
From (14) and (26), we obtain
\[E[Z_{m_{1}}Z_{m_{2}}\cdots Z_{m_{n_{0}}}]=A_{0}^{n_{0}}\,e^{-\sum_{i=1}^{n_{0}}\frac{2s_{m_{i}}^{2}}{w^{2}}}\int_{0}^{\infty}e^{-\left(\frac{2n_{0}}{w^{2}}+\frac{1}{2\sigma_{s}^{2}}\right)\delta^{2}}\frac{\delta}{2\pi\sigma_{s}^{2}}\int_{0}^{2\pi}e^{-\sum_{i=1}^{n_{0}}\frac{4s_{m_{i}}\delta\cos\left(\phi_{m_{i}}-\theta\right)}{w^{2}}}\,d\theta\,d\delta. \tag{32}\]
Since the sum of cosine functions \(-\sum_{i=1}^{n_{0}}\frac{4s_{m_{i}}\delta\cos\left(\phi_{m_{i}}-\theta\right) }{w^{2}}\) can be simplified into a single cosine function, the inner integral is then expressed as a modified Bessel function of the first kind. Then (32) results in (27).
## Appendix B Proof of Theorem 2
By substituting \(\mathbf{r}=\mathbf{s}+\mathbf{d}\) into (14) and (15) and applying \(2\)nd order Taylor approximation, the Gaussian beam profile at \(\mathbf{s}\) results in an approximated form of \(Z_{i}\) as
\[Z_{i}\approx A_{0}e^{-\frac{2|\mathbf{s}_{i}|^{2}}{w^{2}}}\bigg\{1-\frac{4}{w^{2}}\mathbf{s}_{i}^{T}\mathbf{d}+\frac{1}{2}\mathbf{d}^{T}\Big(\frac{16}{w^{4}}\mathbf{s}_{i}\mathbf{s}_{i}^{T}-\frac{4}{w^{2}}\mathbf{I}\Big)\mathbf{d}\bigg\}. \tag{33}\]
By substituting (33) into (26), we get
\[E[Z_{m_{1}}Z_{m_{2}}\cdots Z_{m_{n_{0}}}] \tag{34}\] \[=\int_{0}^{2\pi}\int_{0}^{\infty}\prod_{i=1}^{n_{0}}\left[A_{0}e^ {-\frac{2s_{i}^{2}}{w^{2}}}\bigg{(}1-\frac{4s_{m_{i}}}{w^{2}}\delta\cos(\phi_ {m_{i}}-\theta)\right.\] \[\left.\quad+\frac{8s_{m_{i}}^{2}}{w^{4}}\delta^{2}\cos^{2}(\phi_{ m_{i}}-\theta)-\frac{2}{w^{2}}\delta^{2}\bigg{)}\right]\cdot\frac{\delta}{2\pi \sigma_{s}^{2}}e^{-\frac{\delta^{2}}{2\sigma_{s}^{2}}}\,d\delta\,d\theta.\]
With respect to the Rayleigh distributed \(\delta\), (34) can be interpreted as an expected value of the polynomial. Consequently, we transform this into the integral of the product of cosine functions with coefficients involving Rayleigh moments. After calculating the integral of cosine functions with respect to \(\theta\) by (31), the moment of a joint-pointing loss can be expressed without integral operations as (28).
|
2310.09069 | ImageManip: Image-based Robotic Manipulation with Affordance-guided Next
View Selection | In the realm of future home-assistant robots, 3D articulated object
manipulation is essential for enabling robots to interact with their
environment. Many existing studies make use of 3D point clouds as the primary
input for manipulation policies. However, this approach encounters challenges
due to data sparsity and the significant cost associated with acquiring point
cloud data, which can limit its practicality. In contrast, RGB images offer
high-resolution observations using cost effective devices but lack spatial 3D
geometric information. To overcome these limitations, we present a novel
image-based robotic manipulation framework. This framework is designed to
capture multiple perspectives of the target object and infer depth information
to complement its geometry. Initially, the system employs an eye-on-hand RGB
camera to capture an overall view of the target object. It predicts the initial
depth map and a coarse affordance map. The affordance map indicates actionable
areas on the object and serves as a constraint for selecting subsequent
viewpoints. Based on the global visual prior, we adaptively identify the
optimal next viewpoint for a detailed observation of the potential manipulation
success area. We leverage geometric consistency to fuse the views, resulting in
a refined depth map and a more precise affordance map for robot manipulation
decisions. By comparing with prior works that adopt point clouds or RGB images
as inputs, we demonstrate the effectiveness and practicality of our method. In
the project webpage (https://sites.google.com/view/imagemanip), real world
experiments further highlight the potential of our method for practical
deployment. | Xiaoqi Li, Yanzi Wang, Yan Shen, Ponomarenko Iaroslav, Haoran Lu, Qianxu Wang, Boshi An, Jiaming Liu, Hao Dong | 2023-10-13T12:42:54Z | http://arxiv.org/abs/2310.09069v1 | # ImageManip: Image-based Robotic Manipulation with Affordance-guided Next View Selection
###### Abstract
In the realm of future home-assistant robots, 3D articulated object manipulation is essential for enabling robots to interact with their environment. Many existing studies make use of 3D point clouds as the primary input for manipulation policies. However, this approach encounters challenges due to data sparsity and the significant cost associated with acquiring point cloud data, which can limit its practicality. In contrast, RGB images offer high-resolution observations using cost-effective devices but lack spatial 3D geometric information. To overcome these limitations, we present a novel image-based robotic manipulation framework. This framework is designed to capture multiple perspectives of the target object and infer depth information to complement its geometry. Initially, the system employs an eye-on-hand RGB camera to capture an overall view of the target object. It predicts the initial depth map and a coarse affordance map. The affordance map indicates actionable areas on the object and serves as a constraint for selecting subsequent viewpoints. Based on the global visual prior, we adaptively identify the optimal next viewpoint for a detailed observation of the potential manipulation success area. We leverage geometric consistency to fuse the views, resulting in a refined depth map and a more precise affordance map for robot manipulation decisions. By comparing with prior works that adopt point clouds or RGB images as inputs, we demonstrate the effectiveness and practicality of our method. In the project webpage ([https://sites.google.com/view/imagemanip](https://sites.google.com/view/imagemanip)), real-world experiments further highlight the potential of our method for practical deployment.
## I Introduction
The research community has recently shown a growing interest in embodied AI and its practical applications in the field of robotic manipulation. Recent studies [1, 2, 3, 4, 5, 6] have highlighted the potential of using point-cloud cameras to obtain detailed 3D geometric information for manipulation tasks. However, collecting point-cloud data involves capturing 3D depth information, which frequently leads to a sparse data representation. This sparsity introduces severe noise, especially on specular or transparent material surfaces under high-intensity lighting.
To enhance the level of captured detail, people consider increasing the resolution of point-cloud data, or alternatively, adopting industrial-grade point cloud cameras, both of which can increase hardware costs and limit their practicality. In contrast, RGB images present a readily available and cost-effective visual modality that captures detailed observations with relatively higher resolution. They are widely employed in various computer vision tasks [7, 8, 9] due to their accessibility and compatibility with existing imaging systems. While RGB images lack inherent 3D geometry, recent advancements in the field of self-driving cars have demonstrated promising techniques to complement the geometric information in RGB images by leveraging depth estimation [10, 11, 12]. To harness the benefits of the RGB modality, which is characterized by its widespread availability and high-resolution capabilities, we introduce an RGB-only robot manipulation framework. Our approach utilizes an eye-on-hand RGB camera to capture diverse perspectives of the target object without the need for moving back and forth. To compensate for the absence of geometric information, we incorporate depth estimation for better manipulation decisions.
Specifically, as shown in Fig. 1, our process begins by capturing a global view image, providing an overall view of the target object. This image is used to generate a global visual prior, which comprises an initial depth map and a coarse affordance map. The depth map approximates the object's 3D structure, while the per-pixel affordance map identifies where the robot's end-effector should engage with the object for successful manipulation. Recognizing the limitations of a single view, we proceed to capture additional views and fuse their features to construct a comprehensive visual representation. We introduce a next view selection module, which dynamically determines the optimal camera position for capturing the subsequent view based on the global visual prior. As we obtain the next view and merge the features from different views, we generate the refined depth map, improving our estimation of the object's 3D structure. Simultaneously, we obtain the refined per-pixel affordance map, providing a more precise indication of the regions where the robot's
Fig. 1: The overall framework. In the depth map, the values represent normalized depth distances, while in the affordance map, the value of each pixel represents the probability of successful manipulation at that point.
end-effector should interact. This enhanced affordance map, in turn, serves as supervision for the next view selection module, whose goal is to select the optimal view that provides the most additional valuable information for manipulation decisions.
We adopt the SAPIEN simulator [13] to train and evaluate our approach for pushing and pulling on 972 shapes across 15 commonly encountered indoor object categories. Following the evaluation method of prior studies [14, 15, 16], our experimental results demonstrate that our method effectively learns to predict feasible actions for novel objects, even for object categories that have not been encountered during training phase.
## II Related work
One widely used approach in robotic control and planning is state-based reinforcement learning (RL) [17, 18, 19, 2, 20, 21]. Some works have identified the possibility of using the pure state as the policy input [18]. However, when it comes to more complex settings, vision-based observation becomes necessary to acquire fine-grained 3D geometry information.
As for vision-based robotic manipulation, numerous studies have explored the tasks, including object grasping [22, 9], articulated object manipulation [14, 2], and object reorientation [18]. Various visual modalities have been explored for perceiving the environment in robotic manipulation. Some studies, such as SAGCI-System[1], RLAfford[2], and Flowbot3D[15], have employed solely point clouds as observations. On the other hand, approaches from Xu [16] and Wu [23] have integrated both RGB images and point clouds for tasks like articulated object manipulation and object grasping. However, the use of point clouds poses challenges when dealing with specular and transparent objects as they interfere with the imaging process of current depth cameras [24]. Thus, some works have discussed the advantages of using RGB-only image inputs to achieve robust robotic manipulation [22, 9] for its high pixel-wise resolution and ability to be readily accessed. In this work, we explore only RGB-based manipulation but acquire depth estimation for perceiving objects' 3D structures.
In addition, several studies have demonstrated the possibility of improving vision-based robotic manipulation via affordance prediction [14, 6, 2]. In the work Where2Act [14], affordance is defined as a per-point actionable score that measures the possibility of success when the robot interacts with each point on the object, while RLAfford [2] defines affordance as the contact frequency during the RL exploration. In this work, we follow the definition from Where2Act, but extend our consideration of affordance as an explicit constraint to narrow down the possible range of movement for the robot's end-effector during both the training and inference stages.
## III Methodology
### _Overall Framework_
The whole procedure of the overall framework is presented in Fig. 1. Initially, we capture a global image \(I_{1}\in\mathbb{R}^{H\times W\times 3}\) and obtain the global visual prior, _i.e._, the depth map \(D_{1}\in\mathbb{R}^{H\times W\times 1}\) and the affordance map \(A_{1}\in\mathbb{R}^{H\times W\times 1}\). The depth map compensates for the missing 3D geometry of the object, while the per-pixel affordance map indicates each pixel's "actionability" in accomplishing the manipulation task.
Based on the global visual prior, the _next view selection_ module selects the optimal camera position to capture the best next image \(I_{2}\in\mathbb{R}^{H\times W\times 3}\). Subsequently, in the _camera-aware fusion_ module, the features of \(I_{1}\) and \(I_{2}\) are token-wise fused to generate the refined visual prior, _i.e._, the refined depth map \(D_{2}\in\mathbb{R}^{H\times W\times 1}\) and affordance map \(A_{2}\in\mathbb{R}^{H\times W\times 1}\). We then select the contact point \(p\) with the highest "actionability" score on the refined affordance map. In addition, we generate the end-effector's orientation with the action proposal and action scoring modules. In practice, the training data is pre-collected in a random manner. Following the training procedure in Fig. 2, we train all modules simultaneously.
### _Obtaining Global Visual Prior_
Given the global image, as shown in Fig. 2(a), we aim to obtain the global visual prior, _i.e._, the **initial depth map** and the **initial affordance map**. The global visual prior serves as the condition for the subsequent next view selection and provides a global observation for the manipulation policy.
**Initial Depth Map.** Given the global image \(I_{1}\) with \(H\) height and \(W\) width, we extract high-dimensional per-pixel features \(F_{1}\) with an encoder. The encoder employed is based on the architecture of U-Net[25], featuring a ResNet-18 [26] encoder that has been pre-trained on ImageNet [27]. We then obtain the global feature with a decoder that is trained from scratch. Additionally, dense skip links are equipped between the encoder and decoder. We then adopt the depth estimator \(D_{d}\) to convert global feature to initial depth estimation map \(D_{1}\in\mathbb{R}^{H\times W\times 1}\), which contains an MLP[28] to aggregate channel-wise representation. It is then supervised by depth ground truth under L1[29] loss \(\mathcal{L}_{d}^{1}\). The goal of the initial depth map is to help place the camera in 3D space to capture the following next view.
**Initial Affordance.** The initial affordance map aims to screen out the region of manipulation interest and enable the subsequent views to focus on this region, thus reducing the exploration space for the following manipulation steps.
To obtain the affordance map, given the global feature, we
Fig. 2: Training Strategy for the Entire Network. Modules in blue only take feature from global image while modules in green only take fused feature. Modules in the mix of blue and green are shared for both views. The Orange cube denotes the feature of the contact point.
adopt the actionability scoring module \(D_{a}\) to measure the actionability of each pixel. It contains a multi-layer perceptron (MLP) and a sigmoid function and converts the global feature of the given pixel to a probability \(a_{p}\in[0,1]\), indicating the likelihood of the pixel being operable. We supervise \(D_{a}\) with a ground truth of 1 or 0, representing whether the manipulation on this point is a successful interaction. The pixels and the corresponding manipulation success results are collected beforehand. The loss \(\mathcal{L}_{a}^{1}\) is a standard binary cross-entropy loss[30]. By projecting this 2D affordance with the initial depth map, we can estimate the region of manipulation interest in 3D space. Subsequently, after identifying the next view \(I_{2}\) based on the global visual prior, we integrate its feature with the global view feature to formulate a comprehensive visual representation for enhancing depth estimation and affordance prediction (Sec III-C). As for the process of determining the next view, we adaptively choose the next view to maximize the chances of predicting successful manipulation (Sec III-D).
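As a concrete (simplified) sketch of the two heads described above, the snippet below implements a per-pixel depth estimator \(D_{d}\) and actionability scorer \(D_{a}\) on top of a decoder feature map, trained with the L1 and binary cross-entropy losses \(\mathcal{L}_{d}^{1}\) and \(\mathcal{L}_{a}^{1}\). The channel width, the two-layer 1\(\times\)1-convolution design, and the toy supervision below are our assumptions; the paper only specifies that each head is an MLP applied to the U-Net decoder feature.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorHeads(nn.Module):
    def __init__(self, C=64):
        super().__init__()
        # 1x1 convolutions act as a per-pixel MLP over the C-dim decoder feature.
        self.depth = nn.Sequential(nn.Conv2d(C, C, 1), nn.ReLU(), nn.Conv2d(C, 1, 1))
        self.afford = nn.Sequential(nn.Conv2d(C, C, 1), nn.ReLU(), nn.Conv2d(C, 1, 1))

    def forward(self, feat):                        # feat: (B, C, H, W) decoder output
        depth = self.depth(feat)                    # initial depth map D_1
        afford = torch.sigmoid(self.afford(feat))   # initial affordance map A_1 in [0, 1]
        return depth, afford

heads = PriorHeads()
feat = torch.randn(2, 64, 96, 96)                   # toy decoder features
depth_gt = torch.rand(2, 1, 96, 96)                 # toy depth ground truth
d1, a1 = heads(feat)
loss_d = F.l1_loss(d1, depth_gt)                    # L_d^1
# Affordance is supervised only at the interacted pixel of each sample (toy labels).
px = a1[:, 0, 48, 48]                               # predicted actionability at that pixel
label = torch.tensor([1.0, 0.0])                    # 1 = successful interaction, 0 = failure
loss_a = F.binary_cross_entropy(px, label)          # L_a^1
print(loss_d.item(), loss_a.item())
```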
### _Geometric-consistency Image Fusion_
To enhance the model's ability to capture a broader visual context, we introduce geometric-consistency fusion, which combines the features of the global view and the subsequent view. However, this fusion process faces challenges due to differences in viewing angles between the two views, leading to issues such as foreground occlusion. It is therefore vital to ensure that both views share common information that can be fused. To address this, we first identify an initial contact point, denoted as \(p_{1}\), which has the highest score on the initial affordance map \(A_{1}\). We use this point as the camera's viewing center and place the camera into 3D space with the help of the initial depth map to capture the next view \(I_{2}\). This guarantees that there is at least some overlap between the regions near point \(p_{1}\) that can be fused. This region is also of high investigation interest since it has a high manipulation success probability as predicted by the initial affordance map. Although inaccuracies in the depth prediction may affect where the camera is placed, they do not influence its orientation, so we can still ensure the camera is centered on \(p_{1}\). By doing so, we ensure the movement of the robot end-effector is directed towards a region that is very likely to be manipulated, thus avoiding unnecessary movement.
Next, to find the pixel-wise correspondence between the two views, we map \(I_{1}\) into 3D space using the initial depth map, \(D_{1}\), and project it onto the camera coordinate of \(I_{2}\). However, this process may contain misalignment due to inaccuracies in the depth prediction (\(D_{1}\)), causing error accumulation in the pixel-wise fusion process. To avoid this, we propose to perform fusion at a token-wise level after the encoding stage. This token-wise fusion occurs at a lower resolution, where one token aggregates the features of n\(\times\)n pixels, with n the spatial downsampling factor of the high-dimensional feature. Even if there are issues with pixel correspondence, as long as the corresponding pixels fall within the token composed of n\(\times\)n pixels around the correct corresponding point, no additional errors are introduced. This makes the fusion more tolerant of inaccuracies in depth estimation.
Specifically, we determine token-level correspondence based on pixel-level correspondence, by collecting all pixel correspondences within a token and then selecting the most relevant token. In Fig. 2(b), we first extract the high-dimensional feature \(F_{2}\) of the next view with the same encoder, and then fuse the two high-dimensional features based on the token-wise correspondence. The fused tokens share the same semantic representation. To transfer the next view to the global view, if a token from \(F_{1}\) has a corresponding token in \(F_{2}\), we merge the two; if a token from \(F_{1}\) does not have a corresponding token in \(F_{2}\), we retain the token from \(F_{1}\). After obtaining the fused feature, the same actionability scoring module \(D_{a}\) and depth estimator \(D_{d}\) are adopted to predict the refined depth \(D_{2}\) and affordance map \(A_{2}\) under losses \(\mathcal{L}_{a}^{2}\) and \(\mathcal{L}_{d}^{2}\). We then project the affordance map into 3D space with the depth map and select the point with the highest affordance score as the contact point to interact with the object. We visualize pixel-wise and token-wise correspondences on the project webpage. We also provide experiments in Sec. IV-C to show the effectiveness of token-wise fusion in tolerating inaccurate depth prediction.
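A minimal sketch of the geometric correspondence used for fusion is given below: each pixel of the global view is back-projected with the predicted depth, re-projected into the next view, and the resulting pixel matches are voted up to an \((H/n)\times(W/n)\) token grid. The camera intrinsics `K`, the view-1-to-view-2 transform `T_12`, and the token size `n` are assumptions introduced for illustration; the actual encoder and fusion operator are not reproduced here.

```python
import torch

def token_correspondence(depth1, K, T_12, H, W, n=16):
    """Vote pixel-level matches (via depth-based reprojection) up to a token grid."""
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()     # (H, W, 3)
    pts1 = (pix @ torch.inverse(K).T) * depth1.unsqueeze(-1)          # back-project view 1
    pts2 = pts1 @ T_12[:3, :3].T + T_12[:3, 3]                        # into view-2 frame
    uv2 = pts2 @ K.T
    uv2 = uv2[..., :2] / uv2[..., 2:].clamp(min=1e-6)                 # project into view 2

    tok1 = (v // n) * (W // n) + (u // n)                             # token index in view 1
    u2 = uv2[..., 0].long().clamp(0, W - 1)
    v2 = uv2[..., 1].long().clamp(0, H - 1)
    tok2 = (v2 // n) * (W // n) + (u2 // n)                           # token index in view 2

    # For each view-1 token, keep the most frequently matched view-2 token.
    votes = {}
    for t1, t2 in zip(tok1.flatten().tolist(), tok2.flatten().tolist()):
        votes.setdefault(t1, {}).setdefault(t2, 0)
        votes[t1][t2] += 1
    return {t1: max(c, key=c.get) for t1, c in votes.items()}

H = W = 64
K = torch.tensor([[60.0, 0.0, 32.0], [0.0, 60.0, 32.0], [0.0, 0.0, 1.0]])
corr = token_correspondence(torch.full((H, W), 2.0), K, torch.eye(4), H, W)
print(len(corr))     # 16 tokens for a 64x64 image with n = 16
```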
### _Next View Selection_
This section is focused on selecting the viewpoint to increase the chances of predicting a successful manipulation. By analyzing the movement of the eye-on-hand camera, we notice that most views at different time stamps are similar and do not offer important additional information, resulting in unnecessary redundancy if we leverage all of them. Therefore, this selection mechanism aims not only to reduce the number of incorporated views by selecting only useful views but also to ensure efficient camera movement. We first illustrate the process of selecting the optimal next view and then demonstrate how to supervise the selection module.
**Best Next View Selection Module.** During inference, the robot cannot explore randomly to assess various viewpoints. Hence, in our training process, we train a next view proposal network to adaptively choose the next image, denoted as \(I_{2}\), based on the prior global view.
Specifically, during training, after capturing \(I_{1}\), we employ a random data collection method inspired by Where2Act[14]. We randomize the contact point \(p\) for object interaction and divide the surrounding 3D space into several candidate areas. The camera is then randomly placed in one of these candidate areas with \(p\) as the camera's center to capture the next view, \(I_{2}\). The best view selection module, denoted as \(D_{v}\), learns to evaluate the camera position to capture the next view, which is conditioned on the global feature. With the trained view selection module, during inference, and based on the predicted initial affordance map, we can project it into 3D space with the assistance of the initial depth map and roughly identify the region of manipulation interests. We apply the same 3D space division strategy as used in training to obtain candidate viewpoints. The next view selection module is then employed to select from all candidates and determine the optimal camera
pose for capturing \(I_{2}\). This approach empowers the robot to make informed decisions about the next viewpoint. Also, by pointing the camera towards a region worth investigating, we ensure efficient movement.
**Supervision for the Next View Selection.** Based on the global view features and camera pose information, the view selection module outputs the probability that the next view captured at the given camera pose is a good view. During training, we introduce supervision based on the confidence score improvement when predicting affordance (\(A_{2}\)) compared to the initial affordance estimation (\(A_{1}\)). This approach helps us choose the next view that can provide a more accurate affordance prediction, thereby enhancing the overall visual feature representation and manipulation prediction. Specifically, the actionability scoring module is a binary classification module that assigns a confidence score to its prediction on whether interacting with a given pixel will result in a successful manipulation. When collecting ground-truth, we assign a value of 1 to a pixel if \(c_{2}\) is greater than \(c_{1}\); otherwise, we assign -1. \(c_{1}\) and \(c_{2}\) represent the confidence scores for affordance predictions for a given pixel. We calculate the average score for all pixels in the images. The next view is considered valuable if the average score is greater than 0, indicating that the confidence score for affordance prediction has increased for over 50% of the pixels. We use a binary cross-entropy loss[30] (\(\mathcal{L}_{v}\)) for supervision.
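A minimal sketch of this labelling rule is given below: per-pixel scores of +1/-1 are assigned according to whether the affordance confidence improved after fusion, and the view is marked valuable when the average score is positive. The toy confidence maps and the scalar \(D_{v}\) output are assumptions used only to show the shape of the supervision.

```python
import torch
import torch.nn.functional as F

def view_value_label(conf1, conf2):
    """+1 where the affordance confidence improved (c2 > c1), -1 otherwise;
    the view is labelled valuable (1.0) if confidence improved for >50% of pixels."""
    score = torch.where(conf2 > conf1, torch.tensor(1.0), torch.tensor(-1.0))
    return (score.mean() > 0).float()

c1 = torch.rand(96, 96)                                  # confidence of A_1 (toy)
c2 = (c1 + 0.05 * torch.randn_like(c1)).clamp(0, 1)      # confidence of A_2 after fusion (toy)
label = view_value_label(c1, c2)
pred = torch.sigmoid(torch.tensor(0.3))                  # D_v score for this candidate pose (toy)
loss_v = F.binary_cross_entropy(pred, label)             # L_v
print(label.item(), loss_v.item())
```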
### _Manipulation Planning_
**Manipulation policy.** Inspired by Where2Act[14], to identify the manipulation orientation, in Fig. 2(c), we adopt an **action proposal module**\(D_{r}\) to propose action candidates given the fused feature of the contact point \(p\), which is selected from the refined affordance map \(A_{2}\). It contains an MLP to predict the end-effector's 3-DoF orientation \(R_{z|p}\in SO(3)\), where \(z\) is drawn from a Gaussian distribution. We train \(D_{r}\) to be able to propose one candidate that matches the ground-truth interaction orientation, penalized by the distance function between two 6D-rotation representations, \(\mathcal{L}_{r}\). Meanwhile, to select one proposal from all candidates, we adopt the **action scoring module**\(D_{s}\) to evaluate the manipulation success rate \(s_{R|p}\in[0,1]\) of implementing a predicted action proposal \(R\), given the fused feature of the contact point \(p\). We use \(D_{a}\) to supervise it because they are positively correlated. Specifically, we calculate the sum of the 100 likelihoods of \(D_{s}\) on 100 candidate proposals given a pixel feature \(f_{p}^{2}\). It is then supervised by the prediction of \(D_{a}\) under an MSE loss \(\mathcal{L}_{s}\). After adjusting the relative loss scales to the same level, we obtain the total objective function: \(\mathcal{L}\) = (\(\mathcal{L}_{a}^{1}+\mathcal{L}_{a}^{2}\)) + \(\mathcal{L}_{r}\) + \(100*\mathcal{L}_{s}\) + (\(\mathcal{L}_{d}^{1}+\mathcal{L}_{d}^{2}\)) + \(\mathcal{L}_{v}\). All modules are trained simultaneously under this total objective \(\mathcal{L}\).
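The 6D-rotation distance used for \(\mathcal{L}_{r}\) is not spelled out in detail; a common realization (and the one we assume here) maps each 6D vector to a rotation matrix by Gram-Schmidt orthogonalization and penalizes the discrepancy between the two matrices, as sketched below.

```python
import torch

def rot6d_to_matrix(x6):
    """Gram-Schmidt map from a 6D rotation representation to SO(3); we assume this
    continuous representation is the one behind the paper's 6D-rotation loss."""
    a1, a2 = x6[..., :3], x6[..., 3:]
    b1 = torch.nn.functional.normalize(a1, dim=-1)
    b2 = torch.nn.functional.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-2)

def rotation_loss(pred6, gt6):
    """Simple distance between two 6D representations via their rotation matrices."""
    Rp, Rg = rot6d_to_matrix(pred6), rot6d_to_matrix(gt6)
    return ((Rp - Rg) ** 2).sum(dim=(-2, -1)).mean()

pred = torch.randn(4, 6)   # toy predicted proposals
gt = torch.randn(4, 6)     # toy ground-truth orientations
print(rotation_loss(pred, gt).item())
```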
**Force-feedback Closed Loop Adjustment.** So far, we have obtained the contact point from the refined affordance map and the orientation of the gripper (p,R) to realize the first interaction with the object. In our eye-on-hand setting, we face limitations with vision-based closed-loop physical manipulation. When attempting to manipulate objects after the first interaction step, the gripper gets too close, leaving insufficient visual information for generating subsequent predicted (p,R) pairs. To overcome this challenge, we adopt a different approach that relies on acquiring information from the force applied to the robotic arm, without moving the camera back and forth [31]. The control policy leverages the target pose, current pose, and previous pose to compute an intermediate pose, which is then fed into an impedance controller[32]. This controller generates a sequence of pairs (p,R) and effectively steers the robotic arm towards the intermediate pose, accomplishing the long-term manipulation step by step. This method allows us to develop a reliable manipulation policy that is resistant to disturbances and capable of handling both revolute and prismatic articulated objects.
## IV Experiment
### _Experiment Setting_
**Data collection** We adopt SAPIEN [13] and the PartNet-Mobility dataset to set up an interactive environment for our task, with the high-efficiency rasterization-based VulkanRenderer. In order to tackle the significant disparity between visual and collision shapes, we leverage the V-HACD [33] (Voxelized Hierarchical Approximate Convex Decomposition) algorithm. This method voxelizes the 3D model, then applies hierarchical approximation to iteratively reduce the voxel count and merge voxels into larger convex ones, and finally applies convex decomposition to transform the merged convex voxels into simpler convex shapes. We use a Franka Panda flying gripper with two fingers as the robot actuator. We consider two types of action primitives, pushing and pulling, with pre-programmed interaction trajectories. Following Where2Act[14], we sample the training data in an offline fashion with approximately 1000 positive and 1000 negative trajectories per category. We randomly sample an interaction orientation \(R\in SO(3)\) from the hemisphere above the tangent plane around \(p\), oriented consistently to the positive normal direction, and collect the outcome of the interaction given (\(p\), \(R\)) in the simulator. As for the next view collection, the 3D space to place the camera for capturing the next view is limited to a distance between 2.5 and 4.5 units from \(p\), an azimuth angle ranging from \(-20^{\circ}\) to \(20^{\circ}\), and an altitude angle ranging from \(-20^{\circ}\) to \(20^{\circ}\) around the normal of that point. We split this space into nine candidates and randomly place the camera in one candidate region to capture the next view.
**Evaluation metric.** We adopt the manipulation success rate to reflect the outcome of the manipulation which is the ratio of the number of successfully manipulated samples divided by the total number of all test samples. As for the definition of success sample, we adopt the binary success definition which is measured by thresholding the move distance of the object part at \(\delta\): success = \(1(\delta_{dis}>\delta)\). Following Where2Act[14] and Flowbot3D[15], we set \(\delta=0.01\) or \(\delta=0.1\) for short-term 'atomic' or long-term interaction respectively, meaning that the gap between the start and end part 1-DoF pose is greater than 0.01 or 0.1 unit-length. As for the metric to measure the performance of the depth estimation module, we adopt the commonly used metric Absrel (Absolute relative
difference)[34]. The absolute relative difference is given by \(\frac{1}{H\times W}\sum_{i\in H\times W}|y_{i}-y_{i}^{*}|/y_{i}^{*}\), where \(y\) and \(y^{*}\) are the predicted and ground-truth depth, respectively.
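Both evaluation metrics are simple to compute; the sketch below implements the binary success criterion and the AbsRel depth error on toy data (the array shapes and the example part motion of 0.04 unit-length are arbitrary assumptions).

```python
import numpy as np

def absrel(pred, gt):
    """Mean absolute relative depth error over all pixels."""
    return np.mean(np.abs(pred - gt) / gt)

def manipulation_success(part_motion, long_term=False):
    """Binary success: the part's 1-DoF pose must move by more than
    0.01 (short-term 'atomic') or 0.1 (long-term) unit-length."""
    return float(part_motion > (0.1 if long_term else 0.01))

gt = np.random.uniform(0.5, 3.0, size=(96, 96))
pred = gt * (1 + 0.1 * np.random.randn(96, 96))
print(absrel(pred, gt), manipulation_success(0.04), manipulation_success(0.04, long_term=True))
```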
### _Baseline Comparisons_
The results of pushing and pulling are shown in Tab. I and II, and the long-term experiments are denoted as Ours (long). The long-term experiment _Ours (long)_ demonstrates the effectiveness of the force-feedback controller in our manipulation system. Following [14], we conduct comparisons with other approaches only on short-term interactions, since these reflect how well the model performs given the initial state of the object, which is the prerequisite for realizing the whole long-term manipulation procedure. The average Absrel for the refined depth map on all seen categories is 0.2, an improvement over the 0.5 Absrel of the initial depth map. Moreover, the average Absrel for the refined depth map on all unseen categories is 0.35, showing its domain generalization ability.
For a fair comparison, we compare with baseline methods under the same training dataset and end-effector. Though point-cloud based methods show promising performance, they rely on less cost-effective depth sensors. Meanwhile, most failure cases of these approaches occur when the robot attempts to interact with the very edge of the object but fails to make contact because the edge is blurred in the point-cloud modality. The RGB-based methods all adopt an RL-based learning strategy. We notice that these methods suffer a large performance drop on unseen shapes within the seen categories as well as on unseen categories, potentially due to the weak generalizability of pure RL methods.
### _Ablation and Analysis_
We provide ablation studies in the bottom part of Tab. I to verify the effectiveness of each proposed module.
_With pixel fusion:_ To demonstrate the effectiveness of the token-wise fusion module, we provide the straightforward pixel-level fusion alternative that fuses the high-dimensional feature of the view with their pixel correspondence. Compared to the fusion utilizing pixel-wise dense correspondences (with an average manipulation accuracy of 0.66), token-wise correspondence yields a substantial performance improvement of 0.73 due to its capacity to tolerate depth inaccuracies. Furthermore, when contrasting our method against using depth ground truth to establish token-wise correspondence (achieving 0.74), we observe only marginal improvement. This shows the ability of our fusion module to accommodate depth inaccuracies. Meanwhile, we observe that feature fusion will not directly impact the upper bound of manipulation accuracy.
_Without next view:_ We verify the effectiveness of introducing the next view by conducting training and testing only with the global visual prior, _i.e._, initial depth map, and initial affordance map. We observe that performance drops dramatically because the global view may only capture a partial observation. The missed crucial details and incomplete object representation can not ensure a reasonable robotic manipulation prediction. We further show the visualization comparisons of initial and refined affordance map on the project webpage.
_With random next view:_ To verify the effectiveness of our best view selection module, we introduce a random next view selection mechanism. Given the initial affordance and depth maps, it randomly selects one of the nine candidates to place the camera. In this case, the manipulation success rate drops slightly since the randomly collected view may not provide valuable information to complement the global view. Meanwhile, more observations still help in formulating a comprehensive object representation for a thorough manipulation decision. More visualization of the refined map given all candidates and the upper bound analysis of the view selection module are presented on the website.
_With more next views:_ In the main experiments, we adopt only two views: one global view and one best next view. Here, we conduct experiments with three views, with one global view and two consecutively selected next best views, to explore the effect of the number of views. As shown in Tab. I, adding viewpoints boosts performance by a small margin, since more views bring complementary details that facilitate a reliable manipulation inference. However, introducing more views also brings additional computational cost. As a trade-off between accuracy and efficiency, we adopt two views as the default option. More analysis regarding the space scope is presented on the webpage.
_With depth ground-truth:_ Ours with depth ground-truth (GT) replaces the depth prediction with depth GT. By doing so, we aim to reflect on whether the inaccuracy of depth prediction will directly impact robotic manipulation. We find that with depth ground-truth, the manipulation success rate is only improved by a small amount because depth ground-truth can help to ensure a relatively accurate contact with the object. To mitigate the effects of inaccurate depth estimation, we adopt a self-correction mechanism. Following the manipulation policy of Flowbot3D [15], which detects contact between the gripper and the object, we also assess whether there is contact between the gripper and the object based on the depth prediction. If no contact is detected, we gradually adjust the gripper's position along the predicted orientation until contact is established. This approach reduces the impact of inaccurate depth estimation on the manipulation process. Consequently, we can conclude that as long as the accuracy of depth estimation falls within a reasonable range, it will not heavily impact the final manipulation outcomes. Furthermore, by predicting depth, our model implicitly extracts the geometric structure of objects, making informed decisions regarding 3D object manipulation. In contrast, the alternative approach of directly using depth ground truth only captures 2D features without a comprehensive understanding of object geometry. This explains why, in some categories, depth prediction can even outperform depth ground truth in terms of manipulation performance.
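For concreteness, the contact self-correction described above can be sketched as a simple loop; the sketch below is illustrative only, and the function and parameter names (`depth_gap_at`, `step`, `tol`) are placeholders rather than part of the released system.

```python
import numpy as np

def correct_contact(gripper_pos, approach_dir, depth_gap_at, step=0.005, max_steps=40, tol=0.01):
    """Move the gripper along the predicted interaction orientation until the predicted
    gap to the object surface indicates contact.  `depth_gap_at` is any callable that
    returns the predicted gripper-to-surface gap at a 3D position."""
    pos = np.asarray(gripper_pos, dtype=float)
    direction = np.asarray(approach_dir, dtype=float)
    direction = direction / np.linalg.norm(direction)
    for _ in range(max_steps):
        if depth_gap_at(pos) <= tol:        # contact (or near-contact) detected
            return pos, True
        pos = pos + step * direction        # otherwise advance along the predicted orientation
    return pos, False                       # no contact established within the budget

# Example with a toy gap function: a flat surface at z = 0.
pos, ok = correct_contact([0.0, 0.0, 0.15], [0.0, 0.0, -1.0], lambda p: max(p[2], 0.0))
print(pos, ok)
```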
### _Real-world Experiment_
We conduct experiments that involve interacting with various real-world household objects. We employ a Franka Emika
robotic arm with a RealSense 415 camera (the only camera in our setup), but use only its RGB images as inputs. To achieve sim-to-real transfer, we devised two strategies: 1) Pre-training strategy: for the visual encoder, we incorporate ImageNet pre-trained weights into the ResNet architecture, equipping the model with strong real-world perception capabilities. 2) Simulator strategy: in the simulator, we employ domain randomization to amplify scenario diversity, varying elements such as lighting, materials and light position, to ease sim-to-real transfer. We visualize the domain randomization of object materials along with videos of the manipulation process on the project website. Since manipulation performance relies predominantly on the quality of the affordance and depth predictions, we demonstrate these under real-world settings in Fig. 3, showing the effectiveness of our model in the real world.
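A minimal sketch of the kind of domain randomization described in the simulator strategy is given below; every field and range here is a hypothetical example for illustration, not the exact settings used in the simulator.

```python
import random

def sample_scene_randomization(rng=random):
    """Draw one randomized scene configuration (all names and ranges are illustrative)."""
    return {
        "light_intensity": rng.uniform(0.4, 1.6),
        "light_position": [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0), rng.uniform(1.0, 2.5)],
        "ambient_color": [rng.uniform(0.2, 1.0) for _ in range(3)],
        "object_roughness": rng.uniform(0.1, 0.9),
        "object_metallic": rng.uniform(0.0, 0.6),
        "camera_jitter_deg": rng.uniform(-3.0, 3.0),
    }

# A fresh configuration is drawn before rendering the RGB observations of each training episode.
print(sample_scene_randomization())
```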
## V Conclusion and Limitation
In this paper, we investigate image-based robotic manipulation learning with the acquisition of depth and affordance. A global view is utilized to provide a global visual prior for subsequent viewpoint selection. Furthermore, we introduce the best next view selection and geometric-consistency fusion to extract a refined visual prior for accurate manipulation prediction. We believe generalizable image-based robotic manipulation is an exciting direction because RGB data can be captured at a low cost, which will be beneficial to low-cost robot systems in the future. As for limitations, we restrict our experiments to two types of action primitives with individual initial motion trajectories. In the future, we plan to generalize the framework to more free-form interactions.
Fig. 3: The upper pairs represent refined affordance map, representing the probability of successful manipulation at that point. The bottom pairs stand for refined depth map, representing normalized depth distances. |
2301.07705 | Scalable fabrication of hemispherical solid immersion lenses in silicon
carbide through grayscale hard-mask lithography | Grayscale lithography allows the creation of micrometer-scale features with
spatially-controlled height in a process that is fully compatible with standard
lithography. Here, solid immersion lenses are demonstrated in silicon carbide
using a novel fabrication protocol combining grayscale lithography and
hard-mask techniques to allow nearly hemispherical lenses of 5 $\mu$m radius to
be etched into the substrate. The technique is highly scalable and compatible
with CMOS technology, and device aspect ratios can be tuned after resist
patterning by controlling the chemistry of the subsequent dry etch. These
results provide a low-cost, high-throughput and industrially-relevant
alternative to focused ion beam milling for the creation of high-aspect-ratio,
rounded microstructures. | Christiaan Bekker, Muhammad Junaid Arshad, Pasquale Cilibrizzi, Charalampos Nikolatos, Peter Lomax, Graham S. Wood, Rebecca Cheung, Wolfgang Knolle, Neil Ross, Brian Gerardot, Cristian Bonato | 2023-01-18T18:56:04Z | http://arxiv.org/abs/2301.07705v1 | Scalable fabrication of hemispherical solid immersion lenses in silicon carbide through grayscale hard-mask lithography
###### Abstract
Grayscale lithography allows the creation of micrometer-scale features with spatially-controlled height in a process that is fully compatible with standard lithography. Here, solid immersion lenses are demonstrated in silicon carbide using a novel fabrication protocol combining grayscale lithography and hard-mask techniques to allow nearly hemispherical lenses of 5 \(\mu\)m radius to be etched into the substrate. The technique is highly scalable and compatible with CMOS technology, and device aspect ratios can be tuned after resist patterning by controlling the chemistry of the subsequent dry etch. These results provide a low-cost, high-throughput and industrially-relevant alternative to focused ion beam milling for the creation of high-aspect-ratio, rounded microstructures.
pacs: 42.82.Cr, 42.79.Bh
## I Introduction
The reliable creation of large-scale and high-aspect-ratio microlens arrays [1; 2; 3] can impact several areas of photonics and quantum technology. Microlenses are used, for example, to collimate the outputs of vertical-cavity surface-emitting laser (VCSEL) arrays [4; 5] and quantum emitters [6; 7; 8; 9], to increase the sensitivity of image sensors by boosting coupling to the device active area [10; 11; 12], and to increase the efficiency of interconnects for optical transceiver chips [13; 14; 15].
In quantum technology, micrometer-scale solid immersion lenses (SILs) have played a significant role in efficiently extracting single photons from single solid-state quantum emitters [16; 17; 18]. In solid-state matrices, photon collection is usually limited by total internal reflection, which traps most of the emission in the high-index medium. By removing refraction at large angles, SILs can boost collection efficiency by a factor 10-20, as shown for example for the spin/photon interface associated with single nitrogen-vacancy (NV) centers in diamond [19]. Embedding NV centers in micro-fabricated SILs has enabled spectacular breakthroughs such as single-shot projective readout of its electronic spin [18], the first loophole-free Bell test [20] and the realization of a multi-node quantum network of remote solid-state quantum devices [21; 22]. More recently, this technology has also been extended to similar quantum emitters in other materials with better technological maturity, such as silicon carbide [23; 24; 25].
Conventionally, individual SILs have been fabricated by focused ion beam (FIB) milling [16; 17; 18; 25; 26; 27]. FIB gives exquisite shape control but is very time-consuming and expensive, with several hours required to mill a single microlens. One alternative to FIB milling, typically utilized for larger-scale SILs, is resist reflow, a lithography-based technique where photoresist pillars are heated past their glass transition temperature, such that the surface tension rounds the resist surface into lens-shaped structures. This method is fast and scalable, and has been used to create lenses in several materials including polymers [28; 29; 30; 31; 32; 33], glasses [3; 34; 35], sapphire [36; 37], diamond [38; 39] and more recently silicon carbide [24]. However, the lens profile is constrained by the surface tension of the resist, with shape control only achievable through changing the degree of reflow (through temperature and time of heating), the wettability of the surface [40; 41] or the dimensions of the precursor resist pillars [32]. This imposes serious limitations on the tunability and uniformity of lens structures fabricated through resist reflow [1]. Further, only very shallow (low sagitta compared to radius) SILs have been demonstrated thus far using this technique, possibly due to the collapse of high-aspect-ratio features when the resist is reflowed [42]. A second alternative for fabricating densely-packed lens arrays is the indentation of photoresist layers during spin-coating using self-assembled microsphere monolayers [43], with the resultant concave structures taking the shape of the microspheres used.
Here, we present an alternative to these fabrication processes through utilising a novel process based on grayscale lithography [42; 44; 45], where the ability to vary exposure dose spatially is harnessed through use of a low contrast photoresist to create micrometer-scale structures with precisely-controllable heights. We introduce a grayscale hard-mask technique into this process which decouples the etch chemistry of the resist mask and substrate, consequently enabling increased aspect ratios and process robustness over conventional grayscale lithography. These advances allow us to achieve a large degree of shape control over fabricated structures, while maintaining the scalability and speed inherent in lithography-based approaches to microlens fabrication.
## II Experimental Details
### Device Fabrication Process
The devices presented here have been fabricated on commercial SiC material (Xiamen PowerWay(r), 500 \(\mu\)m substrate with 15 \(\mu\)m thick epilayer) diced into 5x5 mm chips. Prior to photolithography, a 5 \(\mu\)m thick silica (SiO\({}_{2}\)) layer was deposited on the chip via plasma-enhanced chemical vapor deposition (STS Multiplex CVD).
Grayscale photoresist (micro resist technology GmbH ma-P 1275G) was spin-coated at 3000 rpm to reach a thickness of 8 \(\mu\)m and soft-baked at a sequence of temperatures up to 100\({}^{\circ}\)C over 50 minutes.
The resist was exposed using a Heidelberg Instruments DWL 66+ direct-writing laser lithography system. The exposure beam is stepped over the chip with variable dose, allowing the resist to be partially exposed for spatially-variable height after development. The resist was developed using a TMAH-based solvent (micro resist technology GmbH mr-D 526/S) for 5 minutes, and baked to minimize retained solvent at 60\({}^{\circ}\)C for 10 minutes.
The chip was subsequently loaded into a reactive ion etching system (RIE; Oxford Instruments PlasmaLab 100), where the SiO\({}_{2}\) layer was etched with the grayscale photoresist acting as a mask using a 25/7/3 CHF\({}_{3}\)/Ar/O\({}_{2}\) gas chemistry. In this etching step, the CHF\({}_{3}\) etches the SiO\({}_{2}\) with high selectivity to the photoresist and the O\({}_{2}\) etches the photoresist with high selectivity to the SiO\({}_{2}\), allowing for a tuneable total selectivity depending on the desired height of the SiO\({}_{2}\) features. The Ar gas acts as a thermal link to improve chip cooling during the etching process, while providing a small amount of physical etching. The etch proceeds until the SiO\({}_{2}\) layer has been cleared, ideally coinciding with etching through the photoresist mask layer.
The second etch step focuses on transferring the SiO\({}_{2}\) patterns into the SiC substrate. For this RIE step, a SF\({}_{6}\)/Ar/O\({}_{2}\) chemistry is used, where the Ar again acts as a thermal link and the O\({}_{2}\) acts both to densify the plasma [46] and to improve the SiC etch rate through chemical etching of carbon [47; 48; 49; 50]. As SF\({}_{6}\) preferentially etches SiC over SiO\({}_{2}\) with a selectivity of 5-10 [51; 52], this gain allows for a high-selectivity etch step, though this can be tuned through the introduction of CHF\({}_{3}\) gas into the etch.
Utilising a two-step etch process as outlined above brings two benefits to our fabrication protocol: firstly, etching of the photoresist and SiC substrate, which both require oxygen and so cannot be separated in a single etch, is decoupled across two different steps; and secondly, the selectivity of each etch step becomes tuneable through controlling the flow rate of CHF\({}_{3}\) gas, which only etches the SiO\({}_{2}\). For further discussion see Supplementary Information V.1.
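As a rough illustration of how etch selectivity constrains the mask, the helper below estimates the mask thickness needed for a given feature depth; the selectivity value and safety margin are assumptions for illustration, not measured process parameters.

```python
def required_mask_thickness(feature_depth_um, selectivity, margin=1.2):
    """Minimum mask thickness for an etch that removes `selectivity` units of substrate
    per unit of mask, with a safety margin on top."""
    return margin * feature_depth_um / selectivity

# A 5 um deep SiC lens with an assumed SiC:SiO2 selectivity of 7 (within the quoted 5-10 range)
print(required_mask_thickness(5.0, 7.0))   # ~0.86 um of SiO2
```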
The total process time to complete all of the above steps is approximately 12 hours for a pattern including 20,000 lenses. Of this time, 8 hours is spent in etching, limited by the fact that only a conventional RIE system was available for this process; in principle, an ICP-RIE system can increase the etch rate by up to an order of magnitude [53; 49; 50]. Furthermore, the only step in the process which has a dependence on the number of lenses is the laser writing step, though writing time can also be reduced by careful consideration of the pattern design. Therefore, the process is immensely scalable and processing at wafer scales with reasonable process times is conceivable.
### Grayscale Lithography and Pattern Generation
As mentioned above, grayscale exposure of the photoresist is performed using a Heidelberg Instruments DWL 66+ direct-write laser lithography system. During this step, the laser writer is given an array of grayscale pixels corresponding to an area of the chip, with the brightness of each pixel denoting the dose to be applied
Figure 1: (a-d) Fabrication protocol for hemispherical solid immersion lenses in silicon carbide by grayscale hard-mask lithography, consisting of: (a) spin coating, (b) grayscale exposure and resist development, (c) dry etch of silica layer with photoresist as mask, and (d) dry etch of silicon carbide crystal with silica as a grayscale hard-mask. This process is highly scalable, as demonstrated by SEM micrographs depicting \(\sim\)1000 (d, scale bar = 100 \(\mu\)m) and 64 (e, scale bar = 10 \(\mu\)m) lenses. The resulting lens shape (g, scale bar = 2 \(\mu\)m) is reproducible across the chip and of the correct dimensions to create 5 \(\mu\)m-radius hemispherical structures.
at that location of the pattern. Since the ma-p 1275G photoresist used is specifically developed for grayscale lithography, this dose variation results in a resist height variation after exposure. However, the relationship between dose and final resist height is not necessarily linear, and depends on factors such as the intrinsic resist exposure curve, development parameters and baking process prior to exposure.
To calibrate the developed resist height to targeted values, slowly-varying linear slope patterns were exposed in the resist, initially with a linear exposure dose. The slope profiles were then measured using a stylus profilometer (KLA Tencor(tm) P-7), both after resist development and after the fabrication process was completed. By comparing the targeted height of the profile for each dose value to the actual height achieved, the dose curve can be calibrated. As this procedure corrects for the universal sensitivity curve of the resist itself, the same calibration was applied to all 3D shapes fabricated, and additional shape modifications arising from the design itself, for example through the local spread of the exposure beam, are addressed through proximity correction (see Section II.3).
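A minimal sketch of this calibration step is given below: a measured height-versus-dose response (with made-up numbers standing in for profilometry data) is inverted by interpolation to obtain the dose needed for each target height.

```python
import numpy as np

# Hypothetical profilometry of a linear test slope: applied dose (a.u.) vs. developed resist height (um).
dose_meas = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
height_meas = np.array([8.0, 7.1, 5.4, 3.2, 1.1, 0.0])   # monotonically decreasing with dose

def dose_for_height(target_height_um):
    """Invert the measured response; np.interp needs increasing x, so use the reversed arrays."""
    return np.interp(target_height_um, height_meas[::-1], dose_meas[::-1])

target_profile = np.linspace(0.0, 8.0, 9)   # desired heights across a lens profile
print(dose_for_height(target_profile))
```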
Figure 2 demonstrates the shape control possible through grayscale hardmask lithography. As the shape of grayscale features, in this work rounded solid immersion lenses and trenches, can be arbitrarily modified within fabrication limitations, it is straightforward to create arrays of differently-shaped devices (Fig. 2(a)), and sweep design parameters such as lens width (Fig. 2(b)) and trench curvature (Fig. 2(c)). Therefore the desired shape could be achieved through iterative modification of the design profile, or alternatively through choosing the most desirable shape from a parameter sweep.
### Proximity Correction
In electron-beam lithography (EBL), the random trajectories of individual electrons in the beam can lead to the exposure of sections of the resist tens of micrometers away from the beam entry point, affecting the shape of resulting resist patterns. Proximity effect correction (PEC) is therefore routinely applied to EBL patterns to account for this and recover the desired shape. The characteristic distribution of the dose experienced from a point exposure of the resist is captured by a point-spread function (PSF), which is experimentally obtained for every combination of resist and substrate.
Direct laser writing lithography suffers similarly from proximity effects due to the beam profile and stray photons, resulting in a double-gaussian PSF [55], though these tend to be of much shorter range than experienced in EBL. However, the effects are still significant at micrometer scales, as for the lenses in this work, and so a PEC process was developed to correct for distortions in the lens shape. We use the two-gaussian profile of Du, et al. [55] as a framework for our PSF, viz:
\[PSF(r)=\frac{E_{0}}{\pi(1+\delta)}\left[\frac{1}{\alpha^{2}}e^{-\frac{r^{2}}{ \alpha^{2}}}-\frac{\delta}{\beta^{2}}e^{-\frac{r^{2}}{\beta^{2}}}\right]\;, \tag{1}\]
where \(r\) denotes the radial distance from the center of the Gaussian beam, \(E_{0}\) is the intensity at the center of the beam, \(\alpha\) is related to the width of the beam and \(\beta\) and \(\delta\) are experimentally-determined parameters linked to the spread and strength of scattering in the substrate, respectively. Physically, the first term in the bracket therefore describes the absorbed energy distribution due to direct illumination of the beam, and the second defines the loss caused by surface scattering and substrate absorption [55].
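For reference, Eq. (1) can be evaluated numerically as in the sketch below; the parameter values are illustrative and are not the fitted values for this resist/substrate combination.

```python
import numpy as np

def psf(r, E0=1.0, alpha=0.5, beta=2.0, delta=0.3):
    """Double-Gaussian PSF of Eq. (1); r, alpha and beta share the same length unit."""
    prefactor = E0 / (np.pi * (1.0 + delta))
    direct = np.exp(-(r ** 2) / alpha ** 2) / alpha ** 2             # direct illumination term
    scattered = delta * np.exp(-(r ** 2) / beta ** 2) / beta ** 2    # surface-scattering / absorption term
    return prefactor * (direct - scattered)

print(psf(np.linspace(0.0, 5.0, 6)))
```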
PEC is carried out following the approach of Pavkovich [54], where the required dose at a position \(x\) (\(D(x)\)) given a desired pattern \(P(x)\) and global dose \(E_{0}\) is:
\[D(x)=\frac{(E_{0}/K)P(x)}{\frac{1}{1+\eta}+\frac{\eta}{1+\eta}(P(x)*PSF(x))}\;, \tag{2}\]
where \(K\) is a conversion factor between incident light intensity and energy deposition in the resist, \(\eta\) is an experimental parameter related to the strength of the
Figure 2: Shape control of features through grayscale lithography. (a) SEM of pattern with varying lens shapes, with profiles of individual lens shapes (b) displaying a progression from pointed conical lenses to wide, flat-topped shapes. The surrounding trenches can also be modified (c) to optimize sidewall angle and shape.
correction, physically identified as the fraction of backpropagating light and so the strength of the proximity effect, and \(P(x)*PSF(x)\) is the convolution of the desired pattern and PSF.
By substituting Eq. 1 and the desired lens profile (Fig. 3(a), dashed black line) into Eq. 2, the corrected dose profile given a set of experimental parameters \(\alpha,\beta,\delta\) and \(\eta\) can be obtained (Fig. 3(a), solid gray line). As can be observed in Figure 3(a), the overall effect of the proximity effect correction for this profile is to increase targeted resist height (reduce exposure dose) in the regions of partial exposure, exaggerating convex profile features. This is intuitively consistent with shapes obtained without correction, where hemispherical target profiles yielded conical lens shapes (see e.g. profiles in Fig 2(b)), so that correction could be expected to involve an increase in the curvature of the target profile.
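A minimal numerical sketch of this correction, using a one-dimensional profile and a normalized Gaussian as a stand-in for the PSF (all parameters illustrative), is as follows.

```python
import numpy as np

def corrected_dose(P, psf_kernel, dx, E0=1.0, K=1.0, eta=2.0):
    """Proximity-corrected dose D(x) from Eq. (2) for a profile P(x) sampled on a
    uniform 1D grid of spacing dx (a 1D stand-in for the full radial calculation)."""
    conv = np.convolve(P, psf_kernel, mode="same") * dx   # discrete approximation of P * PSF
    return (E0 / K) * P / (1.0 / (1.0 + eta) + (eta / (1.0 + eta)) * conv)

dx = 0.05
x = np.arange(-8.0, 8.0, dx)
radius = 5.0
P = np.sqrt(np.clip(radius**2 - x**2, 0.0, None)) / radius   # hemispherical target, normalized to [0, 1]

kernel = np.exp(-(x / 0.5) ** 2)          # Gaussian stand-in for the PSF of Eq. (1)
kernel /= kernel.sum() * dx               # unit area on the grid
D = corrected_dose(P, kernel, dx, eta=2.0)
```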
While optimising these values is a highly multivariate problem, the physical interpretation of each parameter provides an intuition for how changing its value will affect the final shape. Through a series of iterations and sweeps of the correction parameters, it was possible to achieve hemispherical lens shapes to a high level of fidelity. In Figure 3(b), a series of lenses with increasing correction strength (increasing \(\eta\)) from left to right is shown. While the lowest correction strength (\(\eta=0\), reducing Eq. 2 to \(D(x)=(E_{0}/K)P(x)\)) shows low shape fidelity to the desired 5 \(\mu\)m hemisphere, the strongest correction (\(\eta=6\)) produced a profile which only meaningfully deviates from a hemispherical shape at its corners, possibly due to the radius of curvature of the profilometer tip used. This clearly demonstrates the importance of proximity effect correction for laser writing of micrometer-scale profiles with large aspect ratios.
## III Results
We benchmark the performance of the SILs by studying the enhancement of optical collection efficiency from silicon vacancy (V\({}_{\mathrm{Si}}\)) centers, one of the leading systems for spin-based quantum technology in SiC [23; 56; 25]. In order to generate V\({}_{\mathrm{Si}}\) centers in the SiC material, the sample was irradiated with high-energy electrons after lens fabrication. A 10 MeV linear electron accelerator (MB10-30MP; Mevex Corp., Canada) was used for the irradiation process, with irradiation parameters: beam current = 125 mA, pulse length = 8 \(\mu\)s, pulse repetition rate = 460 Hz. A 10 mm thick aluminium plate in front of the sample was used to reduce the electron energy to about 4-5 MeV. The dose was determined by means of a 5 mm thick graphite calorimeter and the electron fluence then re-calculated taking a value of \(\sim\)2 MeV cm\({}^{2}\) g\({}^{-1}\) into account for the mass collision stopping power of electrons in the energy range of 4-10 MeV. Thus the applied dose of 3.6 kGy corresponds to an electron fluence of \(\sim 1\times 10^{13}\) electrons per cm\({}^{2}\).
Photoluminescence from these emitters was measured using a room-temperature optical confocal microscope setup, as discussed in detail in Appendix V.2. V\({}_{\mathrm{Si}}\) centers in SiC feature a dipole oriented along the \(c\)-axis, i.e. at roughly 4 degrees from the normal to the sample surface. The performance of the lenses for enhancing the collected light from single V\({}_{\mathrm{Si}}\) centers was quantified through comparison of single-emitter optical properties beneath lenses (SIL emitters, see Fig. 4(a)) and in regions without photonic structures (bulk emitters, see Fig. 4(b)).
In the photoluminescence maps shown in Figure 4(a) and (b), the enhanced brightness of SIL emitters can be clearly ascertained. These maps were taken at a laser power of 3 mW, where both SIL and bulk emitters have
Figure 3: Analytical proximity effect correction of lens shapes [54]. (a) Profile before (dashed) and after (solid) applying proximity correction with a strength of \(\eta=2\). Note that the correction changes the lens height, but does not affect the lens radius, so that a bias must also be applied. (b) SEM showing a progression of lenses (Scale bar = 20 \(\mu\)m) from no proximity correction (left,(c)) to a maximum strength of \(\eta=6\) (right, (d)). A hemisphere with maximum radius of 1.2 \(\mu\)m can be fit inside the uncorrected lens, shown by the dotted line in the profile of (c), while the strongest correction shows good agreement with a hemispherical profile of radius 5 \(\mu\)m, shown by the dotted line in the profile of (d). Scale bars for (c),(d) = 2 \(\mu\)m.
reached the saturation regime (see Fig. 4(d)), to ensure that the enhancement is primarily due to the enhancement of collected light, and not the focusing effect of the lens on the excitation light, which produces a smaller beam focal spot and so higher spot intensities for the same excitation power. In general, the photoluminescence intensity as a function of excitation power, \(PL(I)\), of an optical emitter (as in Fig. 4(d)) follows the equation:
\[PL(I)=PL_{\mathrm{sat}}\,\frac{1}{1+\frac{I_{\mathrm{sat}}}{I}}\,, \tag{3}\]
where \(PL_{\mathrm{sat}}\) is the photoluminescence measured at saturation and \(I_{\mathrm{sat}}\) is the excitation power at which the emitter emits at half its saturation intensity. The first effect of the SIL, i.e. enhancement of light collection, primarily affects the photoluminescence at saturation, through which the enhancement factor of the lens is determined. In the case of the emitters measured for Fig. 4(d), these values for the SIL and bulk emitter are \(PL_{\mathrm{sat,SIL}}=21.8\) kcps and \(PL_{\mathrm{sat,bulk}}=8.1\) kcps respectively, leading to an enhancement factor of 2.7 (note that as this bulk emitter is brighter than average, this value is underestimated compared to the statistics in Fig. 4(d)).
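A saturation fit of this kind can be sketched as below; the data points are hypothetical and only roughly mimic the quoted SIL-emitter values.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(I, PL_sat, I_sat):
    """PL(I) = PL_sat / (1 + I_sat / I), i.e. PL_sat * I / (I + I_sat)."""
    return PL_sat * I / (I + I_sat)

# Hypothetical background-subtracted data (power in mW, counts in kcps).
I = np.array([0.1, 0.2, 0.5, 1.0, 1.5, 2.0, 3.0])
PL = np.array([4.5, 7.5, 12.4, 15.8, 17.4, 18.3, 19.4])

popt, _ = curve_fit(saturation, I, PL, p0=[20.0, 0.5])
PL_sat_fit, I_sat_fit = popt
print(PL_sat_fit, I_sat_fit)   # compare with the ~21.8 kcps and ~0.38 mW quoted above
```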
The second effect of the SIL, the reduction of the beam spot and correspondingly increased intensity, is conversely observed as a reduction in the half-saturation laser power. For the SIL and bulk emitters measured in Fig. 4(c), the values of this parameter are \(I_{\mathrm{sat,SIL}}=0.38\) mW and \(I_{\mathrm{sat,bulk}}=1.44\) mW respectively, leading to an intensification factor of 3.8.
In the sample measured for this experiment, the V\({}_{\mathrm{Si}}\) centers are created at random locations, and variation in the enhancement factor due both to the location of SIL emitters relative to lenses and to natural distributions in brightness between different emitters can be expected. To quantify this variation, the photoluminescence at saturation of 10 bulk emitters and 13 SIL emitters (across 7 different SILs) was measured. The enhancement factor of each SIL emitter was quantified by comparing it to the average brightness of all of the bulk emitters, and the statistics of the resulting values were fit to a normal distribution, which yielded a value for the enhancement factor of \(4.1\pm 0.6\). A histogram of the data, along with the best fit, is plotted in Fig. 4(d).
It is worth noting here that lens performance can be anticipated to be similar for emitters in other materials, when changes in refractive index and dipole orientation are accounted for. Further, as the lens profile can be modified almost arbitrarily with the grayscale hardmask technique presented in this work, improved lens shapes can not only yield better performance than attained here but can be specifically optimized for other materials and emitters if desired.
## IV Conclusion and Outlook
We have demonstrated a scalable process to fabricate hemispherical SILs using grayscale lithography, achieving an enhancement in optical collection efficiency of single emitters of about 4. Our approach is much faster than FIB and more flexible, in terms of shape control, than other lithographic approaches such as reflow lithography [24; 38]. We analyzed the SILs' performance on randomly located emitters, not centered on the lens: we expect much improved performance from SILs registered to the emitter position [16; 19].
While we have focused here on hemispherical SILs, our grayscale hard-mask lithography protocol is very flexible
Figure 4: Experimental results of lens performance. Photoluminescence maps taken at a laser power of 3 mW of V\({}_{\mathrm{Si}}\) emitters beneath a lens (a) and within the substrate bulk without patterned structures (b). The laser background at this power is approximately 4000 counts per second. (c) Background-subtracted power saturation measurements of single emitters in a lens (purple circles) and in the bulk (green squares), showing corresponding enhancement of the collected photoluminescence. (d) Histogram of enhancement factors observed for 13 different single V\({}_{\mathrm{Si}}\) emitters. The enhancement factor was measured at a laser power of 3 mW and calculated by taking the ratio of the background-corrected emitter brightness with the mean background-corrected brightness of 11 emitters in the bulk.
and can enable the precise fabrication of more complex shapes. Future work will enable optimisation of the SIL shape (allowing e.g. elliptical [17, 57] or conical [24] profiles) and adjacent structures, such as trenches, to re-direct light into lower numerical aperture collection optics.
Our technique could further be extended to different materials, such as diamond for example, provided one can achieve a sufficient balance of etching selectivity using the SiO\({}_{2}\) mask.
An appealing feature of silicon carbide is the integration of photonic structures with microelectronics, enabling the possibility to control the charge state of the emitter [58], tune the emission wavelength and deplete charges in the environment to minimize the spectral diffusion of the spin-selective atomic-like transitions used for spin-photon interfacing [59]. As the grayscale hard-mask method is scalable and compatible with standard photolithography and CMOS processes, it is possible to integrate the protocol with electronics workflows, allowing the co-location of photonic and electronic elements on-chip and enabling greater flexibility for hybrid photonic-electronic integrated circuits [60, 61].
###### Acknowledgements.
We thank Fiammetta Sardi, Matthias Widmann and Florian Kaiser for helpful discussions. We are grateful to Daniel Andres Penares and Mauro Brotons i Gisbert for their assistance with building the experimental characterization setup. We thank Dominique Colle of Heidelberg Instruments for fruitful discussions on grayscale dose calibration. This work is funded by the Engineering and Physical Sciences Research Council (EP/S000550/1), the Leverhulme Trust (RPG-2019-388) and the European Commission (QuanTELCO, grant agreement No 862721). C. Bekker is supported by a Royal Academy of Engineering Research Fellowship (RF2122-21-129).
|
2303.06262 | Chromatic numbers of Cayley graphs of abelian groups: A matrix method | In this paper, we take a modest first step towards a systematic study of
chromatic numbers of Cayley graphs on abelian groups. We lose little when we
consider these graphs only when they are connected and of finite degree. As in
the work of Heuberger and others, in such cases the graph can be represented by
an $m\times r$ integer matrix, where we call $m$ the dimension and $r$ the
rank. Adding or subtracting rows produces a graph homomorphism to a graph with
a matrix of smaller dimension, thereby giving an upper bound on the chromatic
number of the original graph. In this article we develop the foundations of
this method. In a series of follow-up articles using this method, we completely
determine the chromatic number in cases with small dimension and rank; prove a
generalization of Zhu's theorem on the chromatic number of $6$-valent integer
distance graphs; and provide an alternate proof of Payan's theorem that a
cube-like graph cannot have chromatic number 3. | Jonathan Cervantes, Mike Krebs | 2023-03-11T00:54:53Z | http://arxiv.org/abs/2303.06262v2 | # Chromatic numbers of Cayley graphs of abelian groups: A matrix method
###### Abstract
In this paper, we take a modest first step towards a systematic study of chromatic numbers of Cayley graphs on abelian groups. We lose little when we consider these graphs only when they are connected and of finite degree. As in the work of Heuberger and others, in such cases the graph can be represented by an \(m\times r\) integer matrix, where we call \(m\) the dimension and \(r\) the rank. Adding or subtracting rows produces a graph homomorphism to a graph with a matrix of smaller dimension, thereby giving an upper bound on the chromatic number of the original graph. In this article we develop the foundations of this method. In a series of follow-up articles using this method, we completely determine the chromatic number in cases with small dimension and rank; prove a generalization of Zhu's theorem on the chromatic number of 6-valent integer distance graphs; and provide an alternate proof of Payan's theorem that a cube-like graph cannot have chromatic number 3.
_Keywords--_ graph, chromatic number, abelian group, Cayley graph, integer distance graph, cube-like graph, circulant graph, Payan's theorem, Zhu's theorem
## 1 Introduction
Chromatic numbers of Cayley graphs on abelian groups have been studied in many particular cases, including integer distance graphs, circulant graphs, unit-distance graphs, cube-like graphs, Paley graphs, and so on. The intention of the present paper is to begin to lay a foundation for a systematic unified approach to problems of this sort.
We recall the Cayley graph construction. Given a group \(G\) and a subset \(S\) of \(G\), we say \(S\) is _symmetric_ if we have that \(s^{-1}\in S\) whenever \(s\in S\). Given a group \(G\) and a symmetric subset \(S\) of \(G\), we define the _Cayley graph_ of \(G\) with respect to \(S\), denoted \(\operatorname{Cay}(G,S)\), to be the graph with vertex set \(G\), where two vertices \(x\) and \(y\) are adjacent if and only if \(x=ys\) for some \(s\in S\). (Some authors require \(S\) to be a generating set for \(G\) in order to define the Cayley graph at all; for now we do not impose this restriction, but very shortly we will reverse course and assume after all that \(S\) generates \(G\).) The set \(S\) being symmetric makes \(\operatorname{Cay}(G,S)\) an undirected graph. An _abelian Cayley graph_ is a Cayley graph on an abelian group. We also recall that the _chromatic number_ of a graph \(X\), denoted \(\chi(X)\), is the smallest number of colors needed to assign every vertex of \(X\) a color so that if \(v\) and \(w\) are adjacent vertices, then \(v\) and \(w\) are assigned different colors. (While the letter \(G\) is often used to denote a graph, this can be misleading for Cayley graphs, as the "\(G\)" suggests a group. We tend to use \(G\) for groups and \(X\) for graphs. Some other authors prefer \(\Gamma\) for groups and \(G\) for graphs.) From now on we shall assume the reader is familiar with basic terminology in graph theory, as in [32], as well as basic concepts in group theory, as in [12].
So far as the authors have been able to ascertain, no set of methodical procedures for determining the chromatic number of an abelian Cayley graph currently exists. As one contributor notes in a post labeled "Chromatic numbers of infinite abelian Cayley graphs" on the online bulletin board MathOverflow [23], "...I know very little about [this], and to my surprise, I wasn't able to find much in the literature!" Tao expresses a similar sentiment in [30]: "There is a bit of literature on chromatic numbers of abelian Cayley graphs, though from a quick search I didn't find anything that would make it substantially easier to compute the chromatic number of such graphs as compared to general graphs. Still it might be worth keeping the Cayley graph literature in mind."
Given a group \(G\) and a symmetric subset \(S\) of \(G\), let \(G_{e}\) be the subgroup of \(G\) generated by \(S\). Then the subgraph \(X_{e}\) of \(X=\operatorname{Cay}(G,S)\) induced by \(G_{e}\) is precisely the connected component of \(X\) containing the identity element \(e\). It has been previously observed that \(\chi(X_{e})=\chi(X)\). To prove this claim, note that \(\chi(X_{e})\leq\chi(X)\) by virtue of \(X_{e}\) being a subgraph of \(X\). To prove the reverse inequality, we show that every proper \(k\)-coloring \(c\) of \(X_{e}\) induces a proper \(k\)-coloring of \(X\). Let \(T\) be a left transversal set for \(G_{e}\) in \(G\). We then extend \(c\) to \(G\) by defining \(c(tx)=c(x)\) whenever \(t\in T,x\in G_{e}\). Consequently, in our study of chromatic numbers of Cayley graphs, it suffices to consider connected graphs. (_Nota bene_: The existence of \(T\) requires the axiom of choice (AC). Sans AC, the statement \(\chi(X_{e})=\chi(X)\) may fail--see [28] for an example of this phenomenon involving an abelian Cayley graph.)
Moreover, when \(X=\operatorname{Cay}(G,S)\) has a finite chromatic number, it follows from the de Bruijn-Erdos theorem [10] that \(\chi(X)\) equals the maximum, over all finite symmetric subsets \(R\) of \(S\), of \(\chi(\operatorname{Cay}(G,R))\). We remark that AC again plays a role here, as the proof of the de Bruijn-Erdos theorem relies on it.
For the reasons noted in the previous two paragraphs, in this paper we confine our attention to finite-degree connected abelian Cayley graphs. That is, we assume that \(G\) is an abelian group; that \(S\) is a finite, symmetric subset of \(G\); and that \(S\) generates \(G\).
We require that our graphs be undirected, and we do not allow multiple edges. However, we do permit them to have loops. These will occur whenever the identity element is in \(S\). (Recall that a _loop_ is an edge from a vertex to itself.) The reason for allowing loops will become apparent as we go along--they arise naturally as a by-product of our methods. A graph with a loop cannot be properly colored.
### Previous work
The problem of finding the chromatic number of an abelian Cayley graph has been studied in detail in many particular instances. Below, we briefly touch on a few of these.
* **Integer distance graphs.** An _integer distance graph_ is a Cayley graph on the group \(\mathbb{Z}\) of integers. Chromatic numbers of such graphs have been widely investigated; see [22] for a survey of this subject. In Example 2.16 below, we see how our methods can be used to recover a simple but foundational result for integer distance graphs due to Eggleton, Erdos, and Skilton. In [33], Zhu determines the chromatic number of all integer distance graphs of the form \(\operatorname{Cay}(\mathbb{Z},\{\pm a,\pm b,\pm c\})\). In [7], we recover Zhu's results using our techniques, in the process obtaining improved upper bounds for the periods of optimal colorings of such graphs.
* **Circulant graphs.** A _circulant graph_ is a Cayley graph on the group \(\mathbb{Z}_{n}\) of integers modulo \(n\). Chromatic numbers of circulant graphs have been explored extensively, for example in [2, 16, 18, 24, 25].
* **Unit-distance graphs.** The long-standing "chromatic number of the plane" problem (sometimes referred to as the Hadwiger-Nelson problem) asks for the minimum number of colors needed to assign every point in \(\mathbb{R}^{2}\) a color so that points of distance \(1\) from each other always receive different colors. Equivalently, we can ask for the chromatic number \(\chi(\mathbb{R}^{2})\) of the graph \(G=\operatorname{Cay}(\mathbb{R}^{2},D)\), where \(D\) is the unit circle. Subgraphs of \(G\) are called _unit-distance graphs_. Currently, it is known that \(\chi(\mathbb{R}^{2})\) is at least \(5\) and at most \(7\). The book [29] details the history of this problem. The lower bound of \(5\) was proved by de Grey in [11], subsequent to the publication of [29]. Example 2.13 below illustrates how our methods can be used to compute chromatic numbers of unit-distance graphs in some cases.
* **Cube-like graphs.** Let \(A\) be a finite set, and let \(\mathcal{P}(A)\) be the power set of \(A\), that is, the set of all subsets of \(A\). Then \(\mathcal{P}(A)\) is an abelian group under the symmetric difference operation. A _cube-like graph_ is a Cayley graph on \(\mathcal{P}(A)\). In [26], Payan proves that a cube-like graph cannot have chromatic number \(3\). Other results on chromatic numbers of cube-like graphs can be found, for example, in [20] as well as [19, Section 9.7]. In [5], we provide an alternate proof of Payan's theorem using our methods.
* **Paley graphs.** Let \(F\) be a finite field such that \(q=|F|\) is congruent to \(1\) modulo \(4\). Then, regarding \(F\) as a group under addition, the set \(S\) of quadratic residues in \(F\) is symmetric. The _Paley graph_\(G(q)\) is defined by \(G(q)=\operatorname{Cay}(F,S)\). The Mathematical Reviews summary for [3] states that "The authors prove in this paper that the clique number and the chromatic number of the Paley graph \(G(p^{2r})\) are both \(p^{r}\), where \(p\) is an odd prime and \(r\) is a natural number."
Some results of a general nature are known, and we touch on several of these below. None of them, however, yield exact values of chromatic numbers.
* **Probabilistic results.** In [1], Alon studies chromatic numbers of random Cayley graphs, with some theorems pertaining to the particular case of random Cayley graphs on abelian groups.
* **Spectral bounds.** Classically, Hoffman [17] has a lower bound on the chromatic number of a finite graph in terms of the eigenvalues of its adjacency matrix. For a finite Cayley graph on an abelian group \(G\), these eigenvalues can in turn be expressed in terms of the irreducible characters of \(G\), as discussed many places, including [21]. This method is employed, for instance, by Vinh in [31] to obtain lower bounds on chromatic numbers of a certain class of finite abelian Cayley graphs.
* **Computational complexity.** In [15] and [9], it is shown that finding the chromatic number is NP-hard for cube-like graphs and circulant graphs, respectively.
* **No-hole 2-distant colorings.** A _no-hole 2-distant coloring_ of a graph \(X\) is a mapping \(c\) from the vertex set \(V\) of \(X\) to the non-negative integers such that (i) if \(u\) and \(v\) are adjacent vertices, then \(|c(u)-c(v)|\geq 2\), and (ii) the image of \(V\) under \(c\) consists of consecutive integers. In [8], Chang, Lu, and Zhou prove that for a finite abelian group \(G\), the Cayley graph \(\operatorname{Cay}(G,S)\) has a no-hole 2-distant coloring if and only if \(G\setminus S\) generates \(G\).
### Summary of this paper and its follow-on articles
This paper can be regarded as the first in a suite of four articles, the other three being [5], [6], and [7]. This set of papers is organized as follows.
\(\bullet\) In Section 2 of the present paper, we lay the groundwork. We begin by showing that any finite-degree connected abelian Cayley graph can be represented by an associated integer matrix we call the "Heuberger matrix." We show how basic chromatic properties of the graph, such as bipartiteness, can be found immediately from the matrix. We then catalog several graph isomorphisms and homomorphisms induced by various row and column operations, and we give examples to show how these can be used to compute chromatic numbers in some cases.
\(\bullet\) In [5], we use this method of Heuberger matrices to provide an alternate proof of Payan's theorem on cube-like graphs.
\(\bullet\) In [6], we state and prove two main results which give precise and easily checked numerical conditions that completely determine the chromatic number when the associated Heuberger matrix is \(2\times 2\) or \(3\times 2\).
\(\bullet\) In [33], Zhu finds the chromatic number for an arbitrary integer distance graph of the form \(\operatorname{Cay}(\mathbb{Z},\{\pm a,\pm b,\pm c\})\), where \(a,b,\) and \(c\) are distinct positive integers. Such graphs, we show, have associated \(3\times 2\) matrices. Hence, in [7], we demonstrate how the results of [6] yield Zhu's theorem as a corollary.
Preliminaries
In this section we establish the basic definitions and lemmas that will be used throughout [5], [6], and [7].
### Standardized abelian Cayley graphs
For reasons noted in the introduction, we restrict ourselves to finite-degree connected abelian Cayley graphs. Let \(G\) be an abelian group written using additive notation, and let \(S=\{\pm g_{1},\ldots,\pm g_{m}\}\) be a symmetric subset of \(G\) that generates \(G\). Let \(\mathbb{Z}\) denote the group of integers under addition, and let \(\mathbb{Z}^{m}\) denote the \(m\)-fold product of \(\mathbb{Z}\) with itself. Let \(e_{k}\) be the element of \(\mathbb{Z}^{m}\) which equals \(0\) everywhere except in the \(k\)th component, where it equals \(1\). Define a group homomorphism \(\varphi\colon\mathbb{Z}^{m}\to G\) by \(e_{k}\mapsto g_{k}\). Let \(H\) be the kernel of \(\varphi\). It is straightforward to see that \(\varphi\) induces a graph isomorphism between \(X=\operatorname{Cay}(\mathbb{Z}^{m}/H,\{H\pm e_{1},\ldots,H\pm e_{m}\})\) and \(\operatorname{Cay}(G,S)\). In this way we standardize our graphs so that the generating set always (essentially) consists of the canonical basis vectors.
We can standardize further still. Elements of \(\mathbb{Z}^{m}\) are \(m\)-tuples of integers; we write them as column vectors. Let \(y_{1},\ldots,y_{r}\in\mathbb{Z}^{m}\), and let \(H=\langle y_{1},\ldots,y_{r}\rangle\) be the subgroup of \(\mathbb{Z}^{m}\) generated by \(y_{1},\ldots,y_{r}\). We adopt the following convention. To represent the graph \(X\), we write the \(m\times r\) matrix \(M_{X}\) whose \(j\)th column is \(y_{j}\), with the superscript SACG and subscript \(X\). Hence, given a finite-degree connected abelian Cayley graph--even if it is of infinite order--all relevant information about it is contained in this finite matrix with integer entries. We refer to a graph \(X\) in this form as a _standardized abelian Cayley graph_ (hence the "SACG"), and we call \(M_{X}\) a _Heuberger matrix_ of \(X\). The authors got the idea for representing an abelian Cayley graph this way from [16], ergo the eponym. We call \(m\) the _dimension_ of \(M_{X}\), and we call \(r\) the _rank_ of \(M_{X}\). When the graph does not need to be named, we sometimes omit the subscript. Note that \(M_{X}\) is not unique; a standardized abelian Cayley graph can have many different Heuberger matrices associated to it.
Often we will wish to reverse the process, that is, to define an abelian Cayley graph from a given \(m\times r\) integer matrix \(M\). In this case, we take \(H\) to be the subgroup of \(\mathbb{Z}^{m}\) generated by the columns of \(M\); we let \(G=\mathbb{Z}^{m}/H\), and we take \(S\) to be \(\{H\pm e_{1},\ldots,H\pm e_{m}\}\). Then \(M\) is a Heuberger matrix of \(\operatorname{Cay}(G,S)\). Hence in the sequel when we write \(M_{X}^{\text{SACG}}\) without reference to a given abelian Cayley graph, we mean that \(X\) equals \(\operatorname{Cay}(G,S)\) in this manner.
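When the Heuberger matrix is square and nonsingular, the resulting graph is finite and can be enumerated directly. The sketch below (our own helper, not code from the papers cited here) canonicalizes coset representatives in the fundamental parallelepiped of the lattice \(H\) and builds the adjacency structure; it assumes \(|\det M|\) is small enough to enumerate.

```python
from sympy import Matrix, floor

def sacg_from_square_matrix(M):
    """Cay(Z^m/H, {±e_1, ..., ±e_m}) for a full-rank square integer matrix M whose columns
    generate H.  Vertices are the canonical coset representatives lying in the fundamental
    parallelepiped of the lattice."""
    M = Matrix(M)
    m = M.rows
    Minv = M.inv()                                    # exact rational inverse

    def canon(v):
        v = Matrix(list(v))
        shift = Matrix([floor(x) for x in Minv * v])  # integer part of the lattice coordinates
        return tuple(int(x) for x in (v - M * shift))

    gens = [Matrix([int(i == k) * s for i in range(m)]) for k in range(m) for s in (1, -1)]
    start = canon([0] * m)
    vertices, frontier, adjacency = {start}, [start], {}
    while frontier:                                   # exhaustive traversal of the cosets
        v = frontier.pop()
        nbrs = {canon(Matrix(list(v)) + g) for g in gens}
        adjacency[v] = nbrs
        for u in nbrs:
            if u not in vertices:
                vertices.add(u)
                frontier.append(u)
    return vertices, adjacency

V, A = sacg_from_square_matrix([[5, 0], [4, 7]])
print(len(V))          # 35 = |det M|, the order of Z^2/H
print(len(A[(0, 0)]))  # 4: the graph is 4-regular; a loop would appear if some ±e_k lay in H
```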
_Example 2.1_.: Let \(n\in\mathbb{Z}\). We have that \(\operatorname{Cay}(\mathbb{Z}_{n},\{\pm 1\})\cong(n)_{X}^{\text{SACG}}\). (We use the symbol \(\cong\) to mean "is isomorphic to.") Hence
\[(n)_{X}^{\text{SACG}}\text{ is }\left\{\begin{array}{ll}\text{a doubly infinite path graph}&\text{if }n=0\\ \text{a single vertex with a loop}&\text{if }|n|=1\\ \text{a path of length 1}&\text{if }|n|=2\\ \text{an }|n|\text{-cycle}&\text{if }|n|\geq 3.\end{array}\right.\]
_Example 2.2_.: We have that the circulant graph \(\operatorname{Cay}(\mathbb{Z}_{35},\{\pm 6,\pm 10\})\) is isomorphic to
\[\begin{pmatrix}5&0\\ 4&7\end{pmatrix}_{X}^{\operatorname{SACG}},\]
as shown in [16, Example 12]. Later, we will refer to this graph and others like it as "Heuberger circulants."
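This identification can be checked quickly: the map \(e_{1}\mapsto 6\), \(e_{2}\mapsto 10\) descends to \(\mathbb{Z}^{2}/H\) precisely because both columns of the matrix are sent to \(0\) in \(\mathbb{Z}_{35}\), and a counting argument does the rest. The sketch below is our own verification, assuming the generating set \(\{\pm 6,\pm 10\}\).

```python
import numpy as np

H_cols = np.array([[5, 0], [4, 7]])   # columns of the Heuberger matrix
images = np.array([6, 10])            # proposed images of e_1, e_2 in Z_35
n = 35

# phi(e_1) = 6, phi(e_2) = 10 descends to a homomorphism Z^2/H -> Z_35 exactly when
# every column of the matrix maps to 0 modulo 35:
print((images @ H_cols) % n)                      # [0 0]

# The image subgroup is all of Z_35 (gcd(6, 10, 35) = 1), and both groups have order
# |det M| = 35, so the induced map is a group isomorphism carrying {±e_1, ±e_2} to
# {±6, ±10} -- hence a graph isomorphism.
print(int(round(np.linalg.det(H_cols))))          # 35
```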
_Example 2.3_.: Consider \(W=\operatorname{Cay}(\mathbb{Z},\{\pm 6,\pm 10,\pm 25\})\). The graph \(W\) is an example of an integer distance graph; later, we will refer to \(W\) as a "Zhu \(\{6,10,25\}\) graph." Taking \(\varphi\colon e_{1}\mapsto-6,e_{2}\mapsto 10,e_{3}\mapsto 25\) (a valid choice of generators, since \(S=\{\pm(-6),\pm 10,\pm 25\}\)) and all other notation as above, after some computation we find that \(H\) is generated by \((5,-12,6)^{t}\) and \((0,5,-2)^{t}\). (Here we use the notation \(y^{t}\) for the transpose of a vector or matrix \(y\).) Hence we write the graph \(W\) as \(\begin{pmatrix}5&0\\ -12&5\\ 6&-2\end{pmatrix}_{W}^{\operatorname{SACG}},\) with Heuberger matrix
\[\begin{pmatrix}5&0\\ -12&5\\ 6&-2\end{pmatrix}.\]
_Example 2.4_.: We now discuss matrices associated to integer distance graphs in general. Let \(a_{1},\ldots,a_{r+1}\) be positive integers with \(\gcd(a_{1},\ldots,a_{r+1})=1\). Let \(g_{k}=\gcd(a_{1},\ldots,a_{k})\) for \(k=2,\ldots,r\). Let \(u_{ij}\) be integers such that \(a_{1}u_{1k}+\cdots+a_{k}u_{kk}=a_{k+1}g_{k}\) for \(k=2,\ldots,r\). Using elementary number theory as in [27] to find all solutions to the linear homogeneous Diophantine equation \(a_{1}x_{1}+\cdots+a_{r+1}x_{r+1}=0\), we find that \(\operatorname{Cay}(\mathbb{Z},\{\pm a_{1},\ldots,\pm a_{r+1}\})\) is isomorphic to
\[\begin{pmatrix}\frac{a_{2}}{g_{2}}&-\frac{u_{12}g_{2}}{g_{3}}&-\frac{u_{13}g_{3}}{g_{4}}&\cdots&-\frac{u_{1,r-1}g_{r-1}}{g_{r}}&-u_{1r}\\ -\frac{a_{1}}{g_{2}}&-\frac{u_{22}g_{2}}{g_{3}}&-\frac{u_{23}g_{3}}{g_{4}}&\cdots&-\frac{u_{2,r-1}g_{r-1}}{g_{r}}&-u_{2r}\\ 0&\frac{g_{2}}{g_{3}}&-\frac{u_{33}g_{3}}{g_{4}}&\cdots&-\frac{u_{3,r-1}g_{r-1}}{g_{r}}&-u_{3r}\\ 0&0&\frac{g_{3}}{g_{4}}&\cdots&-\frac{u_{4,r-1}g_{r-1}}{g_{r}}&-u_{4r}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&\frac{g_{r-1}}{g_{r}}&-u_{rr}\\ 0&0&0&\cdots&0&g_{r}\end{pmatrix}_{X}^{\operatorname{SACG}}.\]
Conversely, suppose \(M_{X}\) is an \((r+1)\times r\) matrix associated to a standardized abelian Cayley graph \(X\). Let \(v=(v_{1},\ldots,v_{r+1})^{t}\) be the column vector whose \(j\)th component equals \((-1)^{j}\) times the determinant of the matrix obtained by deleting the \(j\)th row from \(M_{X}\). In other words, \(v\) is the generalized cross product (essentially, the exterior product) of the columns of \(M_{X}\). We claim that if \(v\neq 0\) and \(\gcd(v_{1},\ldots,v_{r+1})=1\), then \(X\) is isomorphic to \(\operatorname{Cay}(\mathbb{Z},\{\pm v_{1},\ldots,\pm v_{r+1}\})\).
To prove this, take the homomorphism \(\varphi\colon\mathbb{Z}^{r+1}\to\mathbb{Z}\) with \(\varphi\colon e_{j}\mapsto v_{j}\). Let \(y_{1},\ldots,y_{r}\) be the columns of \(M_{X}\), and let
\(H\) be the subgroup of \(\mathbb{Z}^{r+1}\) generated by \(\{y_{1},\ldots,y_{r}\}\). We must show that \(\ker(\varphi)=H\). The fact that \(H\subset\ker(\varphi)\) follows from \(M_{X}^{t}v=0\), which in turn follows from cofactor expansion. Conversely, suppose that \((c_{1},\ldots,c_{r+1})=c\in\ker(\varphi)\), or equivalently, that \(c\cdot v^{t}=0\), where \(\cdot\) denotes the usual dot (a.k.a. inner) product. We will show that \(c\in H\). It suffices to prove this when \(\gcd(c_{1},\ldots,c_{r+1})=1\). Let \(H_{\mathbb{Q}}\) be the span of \(\{y_{1},\ldots,y_{r}\}\) over the rational numbers \(\mathbb{Q}\). Since \(v\neq 0\), it follows that \(H_{\mathbb{Q}}\) has dimension \(r\) over \(\mathbb{Q}\). Thus the orthogonal complement \(H_{\mathbb{Q}}^{\perp}\) of \(H_{\mathbb{Q}}\) in \(\mathbb{Q}^{r+1}\) equals \(\operatorname{span}_{\mathbb{Q}}v\). So \(H_{\mathbb{Q}}=v^{\perp}\). Hence \(c\in H_{\mathbb{Q}}\). In other words, \(c=M_{X}q\) for some \(q\in\mathbb{Q}^{r}\). From \(\gcd(v_{1},\ldots,v_{r+1})=1\), we have from the theory of the Smith normal form [12] that there exist unimodular matrices \(U\), \(V\), such that
\[UM_{X}V=\begin{pmatrix}I_{r}\\ 0\end{pmatrix},\]
where \(I_{r}\) denotes the \(r\times r\) identity matrix and \(0\) denotes a zero row.
Take
\[q=\left(\frac{a_{1}}{b_{1}},\ldots,\frac{a_{r}}{b_{r}}\right)\]
for some integers \(a_{i},b_{i}\) with \(\gcd(a_{i},b_{i})=1\) for all \(i\). Let \(b\) be the product of the \(b_{i}\), that is, \(b=b_{1}\cdots b_{r}\). Observe that \((UM_{X}V)(V^{-1}bq)=Ubc\), so the nonzero entries of \(Ubc\) are the same as those of \(V^{-1}bq\). For a tuple \(w\) of integers, let \(\gcd(w)\) denote the greatest common divisor of its entries. Note that if \(w\) is a \(k\)-tuple of integers and \(A\) is unimodular, then \(\gcd(w)=\gcd(Aw)\). (This follows by writing \(A\) as a product of elementary matrices, none of which have any effect on \(\gcd\).) Putting the pieces together, we have
\[\gcd(bq)=\gcd(V^{-1}bq)=\gcd(Ubc)=\gcd(bc)=b.\]
Therefore \(b|\frac{b}{b_{j}}\cdot a_{j}\) for all \(j=1,\ldots,r\). Hence for all \(j\) we have that \(b_{j}|a_{j}\), which implies that \(b_{j}=\pm 1\), because \(\gcd(b_{j},a_{j})=1\). Therefore \(q\in\mathbb{Z}^{r}\), showing that \(c\in H\).
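The criterion just proved is easy to apply in practice. The sketch below (our own helper, not code from the papers) computes the generalized cross product \(v\) of the columns and its gcd, and recovers Example 2.3 from its Heuberger matrix.

```python
from math import gcd
from sympy import Matrix

def identify_integer_distance_graph(M):
    """For an (r+1) x r integer Heuberger matrix, return the vector v with
    v_j = (-1)^j det(M with row j deleted) (rows 1-indexed as in the text) and gcd(|v_j|).
    If v != 0 and the gcd is 1, the SACG is isomorphic to Cay(Z, {±v_1, ..., ±v_{r+1}})."""
    M = Matrix(M)
    v = []
    for j in range(M.rows):
        minor = M.copy()
        minor.row_del(j)
        v.append((-1) ** (j + 1) * minor.det())
    return v, gcd(*(abs(int(x)) for x in v))

v, g = identify_integer_distance_graph([[5, 0], [-12, 5], [6, -2]])
print(v, g)   # [6, -10, -25] 1: the graph of Example 2.3 is Cay(Z, {±6, ±10, ±25})
```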
_Example 2.5_.: In a similar vein, we can construct Heuberger matrices associated to arbitrary circulant graphs. Take all notation as in Example 2.4, and let \(n=a_{r+1}\). Then the circulant graph \(C_{n}(a_{1},\ldots,a_{r}):=\operatorname{Cay}(\mathbb{Z}_{n},\{\pm a_{1}, \ldots,\pm a_{r}\})\) is isomorphic to
\[\begin{pmatrix}\frac{a_{2}}{g_{2}}&-\frac{u_{12}g_{2}}{g_{3}}&-\frac{u_{13}g_{3}}{g_{4}}&\cdots&-\frac{u_{1,r-1}g_{r-1}}{g_{r}}&-u_{1r}\\ -\frac{a_{1}}{g_{2}}&-\frac{u_{22}g_{2}}{g_{3}}&-\frac{u_{23}g_{3}}{g_{4}}&\cdots&-\frac{u_{2,r-1}g_{r-1}}{g_{r}}&-u_{2r}\\ 0&\frac{g_{2}}{g_{3}}&-\frac{u_{33}g_{3}}{g_{4}}&\cdots&-\frac{u_{3,r-1}g_{r-1}}{g_{r}}&-u_{3r}\\ 0&0&\frac{g_{3}}{g_{4}}&\cdots&-\frac{u_{4,r-1}g_{r-1}}{g_{r}}&-u_{4r}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&\frac{g_{r-1}}{g_{r}}&-u_{rr}\end{pmatrix}.\]
This is the same matrix as in Example 2.4, but with the last row deleted.
Conversely, suppose \(M_{X}\) is an \(r\times r\) matrix. Let \(M_{X}^{\prime}\) be the matrix obtained by deleting the first column from \(M_{X}\)
Let \(v=(v_{1},\ldots,v_{r})^{t}\) be the column vector whose \(j\)th component equals \((-1)^{j}\) times the determinant of the matrix obtained by deleting the \(j\)th row from \(M_{X}^{\prime}\). Let \(n=\det M_{X}\). We claim that if \(n\neq 0\) and \(v\neq 0\) and \(\gcd(v_{1},\ldots,v_{r})=1\), then \(X\) is isomorphic to \(\operatorname{Cay}(\mathbb{Z}_{n},\{\pm v_{1},\ldots,\pm v_{r}\})\). To prove this, take the homomorphism \(\varphi\colon\mathbb{Z}^{r}\to\mathbb{Z}_{n}\) with \(\varphi\colon e_{j}\mapsto v_{j}\). Let \(y_{1},\ldots,y_{r}\) be the columns of \(M_{X}\), and let \(H\) be the subgroup of \(\mathbb{Z}^{r}\) generated by \(\{y_{1},\ldots,y_{r}\}\). We must show that \(\ker(\varphi)=H\). The fact that \(H\subset\ker(\varphi)\) follows from cofactor expansion. Conversely, suppose that \((c_{1},\ldots,c_{r})=c\in\ker(\varphi)\), or equivalently, that \(c\cdot v\equiv 0\pmod{n}\), where \(\cdot\) denotes the dot product. We will show that \(c\in H\). From \(c\cdot v\equiv 0\pmod{n}\) we have that \(c\cdot v^{t}=kn\) for some integer \(k\). Because \(\det M_{X}=n\), we have that \(y_{1}\cdot v^{t}=n\). Hence \((c-ky_{1})\cdot v^{t}=0\). From the results of Example 2.4, we have that \(c-ky_{1}\) equals a linear combination of columns of \(M_{X}^{\prime}\) with integer coefficients. Thus \(c\in H\).
We note that if \(\gcd(n,\gcd(v))=1\), then \(\gcd(v)=1\). This follows from the fact that a matrix times its adjoint equals its determinant times the identity matrix, and \(v\) is a row of the adjoint.
Of course, one could just as well delete the last column instead of the first column. We remark that an \(n\times n\) square matrix \(M\) with \(n\geq 2\) has the property that there exists a unimodular matrix \(U\) such that the gcd of the determinants of the \((n-1)\times(n-1)\) minors of the submatrix formed by deleting the last column of \(MU\) is \(1\) if and only if the first \(n-1\) diagonal entries in the Smith normal form of \(M\) are all \(1\). In [14], Ekedahl proves that for square integer matrices, the asymptotic probability of all but one diagonal entry in the Smith normal form being \(1\) is approximately \(0.846936\). The precise figure involves values of the Riemann zeta function. In a rough sense, then, we can say that at least \(5/6\) of all SACGs with square Heuberger matrices of a fixed size are isomorphic to circulant graphs. In [6], we give an example of an SACG with a \(2\times 2\) Heuberger matrix that is not isomorphic to a circulant graph. \(\Box\)
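The analogous computation for square matrices, deleting the last column as in the remark above, can be sketched as follows (again our own helper); applied to Example 2.2 it identifies the graph as \(\operatorname{Cay}(\mathbb{Z}_{35},\{\pm 4,\pm 5\})\), which multiplication by the unit \(19\) carries to \(\operatorname{Cay}(\mathbb{Z}_{35},\{\pm 6,\pm 10\})\).

```python
from math import gcd
from sympy import Matrix

def identify_circulant(M, drop_col):
    """For an r x r integer Heuberger matrix M: n = det(M), and v is built from the minors
    of M with column `drop_col` deleted (signs as in the text).  If n != 0, v != 0 and
    gcd(|v_j|) = 1, the SACG is isomorphic to Cay(Z_n, {±v_1, ..., ±v_r})."""
    M = Matrix(M)
    n = M.det()
    Mp = M.copy()
    Mp.col_del(drop_col)
    v = []
    for j in range(M.rows):
        minor = Mp.copy()
        minor.row_del(j)
        v.append((-1) ** (j + 1) * minor.det())
    return n, v, gcd(*(abs(int(x)) for x in v))

print(identify_circulant([[5, 0], [4, 7]], drop_col=1))   # (35, [-4, 5], 1)
# So the graph of Example 2.2 is Cay(Z_35, {±4, ±5}); since 19*4 = 76 ≡ 6 and
# 19*5 = 95 ≡ -10 (mod 35), multiplication by the unit 19 carries {±4, ±5} to {±6, ±10}.
```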
For a standardized abelian Cayley graph \(X\), various row and column operations can be performed to an associated Heuberger matrix \(M_{X}\) to produce an isomorphic (indeed, sometimes identical) graph \(X^{\prime}.\) In the following lemma, we catalog some of these. Recall that the \(\mathbb{Z}\)_-span_ of a set \(\{y_{1},\ldots,y_{r}\}\) of elements of an abelian group is the set
\[\{a_{1}y_{1}+\cdots+a_{r}y_{r}\mid a_{1},\ldots,a_{r}\in\mathbb{Z}\},\]
that is, the set of linear combinations of \(y_{1},\ldots,y_{r}\) with integer coefficients.
**Lemma 2.6**.: _Let \(X\) and \(X^{\prime}\) be standardized abelian Cayley graphs with Heuberger matrices \(M_{X}\) and \(M_{X^{\prime}}\), respectively._
1. _If_ \(M_{X^{\prime}}\) _is obtained by permuting the columns of_ \(M_{X}\)_, then_ \(X=X^{\prime}\)_._
2. _If_ \(M_{X^{\prime}}\) _is obtained by multiplying a column of_ \(M_{X}\) _by_ \(-1\)_, then_ \(X=X^{\prime}\)_._
3. _Suppose_ \(y_{j}\) _and_ \(y_{i}\) _are the_ \(j\)_th and_ \(i\)_th columns of_ \(M_{X}\)_, respectively, with_ \(j\neq i\)_. If_ \(M_{X^{\prime}}\) _is obtained by replacing the_ \(j\)_th column of_ \(M_{X}\) _with_ \(y_{j}+ay_{i}\) _for some integer_ \(a\)_, then_ \(X=X^{\prime}\)_._
4. _If_ \(M_{X^{\prime}}\) _is obtained by deleting any column from_ \(M_{X}\) _which is in the_ \(\mathbb{Z}\)_-span of the other columns, then_ \(X=X^{\prime}\)_. (In particular, deleting a zero column does not change the graph.)_
5. _If_ \(M_{X^{\prime}}\) _is obtained by permuting the rows of_ \(M_{X}\)_, then_ \(X\) _is isomorphic to_ \(X^{\prime}\)_._
6. _If_ \(M_{X^{\prime}}\) _is obtained by multiplying a row of_ \(M_{X}\) _by_ \(-1\)_, then_ \(X\) _is isomorphic to_ \(X^{\prime}\)_._
These statements can all be proved by standard arguments. For example, the first four items listed (the column operations) have no effect on the subgroup \(H\). The fifth operation essentially corresponds to permuting the basis vectors \(e_{j}\); the sixth corresponds to reflecting a coordinate, i.e., mapping \(e_{j}\mapsto-e_{j}\).
We remark that any finite composition of operations 1-3 is equivalent to multiplication on the right by a unimodular matrix, and any finite composition of operations 5-6 is equivalent to multiplication on the left by a signed permutation matrix.
### Basic results on chromatic numbers of abelian Cayley graphs
Recall that the _Cartesian product_ (a.k.a. _box product_) of two graphs \(X\) and \(Y\) with vertex sets \(V(X)\) and \(V(Y)\), respectively, is defined to be the graph \(X\,\Box\,Y\) with vertex set \(V(X)\times V(Y)\), where \((x_{1},y_{1})\) and \((x_{2},y_{2})\) are adjacent if and only if either (i) \(x_{1}=x_{2}\) and \(y_{1}\) is adjacent to \(y_{2}\) in \(Y\), or else (ii) \(x_{1}\) is adjacent to \(x_{2}\) in \(X\) and \(y_{1}=y_{2}\). Also recall that
\[\chi(X\,\Box\,Y)=\max(\chi(X),\chi(Y)). \tag{1}\]
When the matrix has a block structure, we can find the chromatic number by taking the maximum of the chromatic numbers of the graphs associated to the blocks.
**Lemma 2.7**.: _Suppose the standardized abelian Cayley graphs \(X\) and \(Y\) have Heuberger matrices \(M_{X}\) and \(M_{Y}\), respectively. Define the standardized abelian Cayley graph \(Z\) by \(\begin{pmatrix}M_{X}&0\\ 0&M_{Y}\end{pmatrix}_{Z}^{\mathrm{SACG}}.\) In other words, \(M_{Z}\) is the matrix direct sum \(M_{X}\oplus M_{Y}\). Then \(\chi(Z)=\text{max}(\chi(X),\chi(Y))\)._
Proof.: It is straightforward to show that \(Z\) is isomorphic to \(X\,\Box\,Y\). The result follows from (1).
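For small graphs, equation (1) and Lemma 2.7 can be checked directly by brute force; the following sketch (with graphs of our own choosing) verifies that \(\chi(C_{5}\,\Box\,K_{2})=\max(3,2)=3\).

```python
from itertools import product

def chromatic_number(vertices, edges):
    """Smallest k admitting a proper k-coloring (exponential search; tiny graphs only)."""
    for k in range(1, len(vertices) + 1):
        for coloring in product(range(k), repeat=len(vertices)):
            col = dict(zip(vertices, coloring))
            if all(col[u] != col[v] for u, v in edges):
                return k

def box_product(V1, E1, V2, E2):
    """Vertex and edge sets of the Cartesian (box) product of two graphs."""
    V = [(x, y) for x in V1 for y in V2]
    E = [((x, y1), (x, y2)) for x in V1 for (y1, y2) in E2] + \
        [((x1, y), (x2, y)) for (x1, x2) in E1 for y in V2]
    return V, E

C5 = (list(range(5)), [(i, (i + 1) % 5) for i in range(5)])   # 5-cycle
K2 = ([0, 1], [(0, 1)])                                       # single edge
V, E = box_product(*C5, *K2)
print(chromatic_number(*C5), chromatic_number(*K2), chromatic_number(V, E))  # 3 2 3
```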
We next show that deleting a row of zeroes from a matrix has no effect on the chromatic numbers of the associated standardized abelian Cayley graphs.
**Lemma 2.8**.: _Let \(y_{1},\ldots,y_{r}\in\mathbb{Z}^{m}\), where \(m\geq 2\). For all \(j=1,\ldots,r\), let \(y_{j}=(y_{1j},\ldots,y_{mj})^{t}\). Suppose we have the standardized abelian Cayley graph \((y_{1}\ \cdots\ y_{r})_{X}^{\mathrm{SACG}}\). Suppose that for some \(k\) with \(1\leq k\leq m\), we have that \(y_{k1}=\cdots=y_{kr}=0\). (That is, the \(k\)th row of \(X\)'s matrix has all zeroes.) Define the standardized abelian Cayley graph \(X^{\prime}\) by_
\[\left(\begin{array}{cccc}y_{11}&...&y_{1r}\\ \vdots&\ddots&\vdots\\ y_{k-1,1}&...&y_{k-1,r}\\ y_{k+1,1}&...&y_{k+1,r}\\ \vdots&\ddots&\vdots\\ y_{m1}&...&y_{mr}\end{array}\right)^{\text{SACG}}_{X^{\prime}}.\]
_That is, the matrix for \(X^{\prime}\) is the same as that for \(X\), but with the \(k\)th row deleted. Then \(\chi(X)=\chi(X^{\prime})\)._
Proof.: By Lemma 2.6, we have that \(X\) is isomorphic to
\[\left(\begin{array}{cccc}y_{11}&...&y_{1r}&0\\ \vdots&\ddots&\vdots&\vdots\\ y_{k-1,1}&...&y_{k-1,r}&0\\ y_{k+1,1}&...&y_{k+1,r}&0\\ \vdots&\ddots&\vdots&\vdots\\ y_{m1}&...&y_{mr}&0\\ 0&...&0&0\end{array}\right)^{\text{SACG}}.\]
Here we permute the rows of \(X\) so as to move the row of all zeroes to the bottom, then append a column of all zeroes on the right using Lemma 2.6(4). The result now follows from Lemma 2.7 and Example 2.1.
### Graph homomorphisms for standardized abelian Cayley graphs
Given two graphs \(X\) and \(Y\) with vertex sets \(V_{X}\) and \(V_{Y}\), respectively, we recall that a _graph homomorphism_ is a mapping \(\psi\colon V_{X}\to V_{Y}\) such that if \(v\) and \(w\) are adjacent vertices in \(X\), then \(\psi(v)\) and \(\psi(w)\) are adjacent vertices in \(Y\). As is well known, a proper coloring \(c\) of \(Y\) can be "pulled back" via \(\psi\) to give the proper coloring \(c\circ\psi\) of \(X\). Hence we obtain the following standard lemma.
**Lemma 2.9**.: _Suppose \(\psi\) is a graph homomorphism from \(X\) to \(Y\). Then \(\chi(X)\leq\chi(Y)\)._
With Cayley graphs, it is well known that certain group homomorphisms are graph homomorphisms. To be precise, let \(G_{1}\) and \(G_{2}\) be groups, and let \(S_{1}\) and \(S_{2}\) be symmetric subsets of \(G_{1}\) and \(G_{2}\), respectively. Let \(\psi\colon G_{1}\to G_{2}\) be a group homomorphism such that \(\psi(S_{1})\subset S_{2}\). It follows immediately from the various definitions that \(\psi\) is a graph homomorphism from \(\text{Cay}(G_{1},S_{1})\) to \(\text{Cay}(G_{2},S_{2})\).
Suppose \(X=\text{Cay}(\mathbb{Z}^{m}/H,S)\) and \(Y=\text{Cay}(\mathbb{Z}^{\ell}/J,T)\) are standardized abelian Cayley graphs, where \(S=\{H\pm e_{1},\ldots,H\pm e_{m}\}\) and \(T=\{J\pm e_{1},\ldots,J\pm e_{\ell}\}\). (Note that we somewhat ambiguously use the notation \(e_{j}\) here both for a standard basis vector in \(\mathbb{Z}^{m}\) as well as for one in \(\mathbb{Z}^{\ell}\), and this will be our usual practice going forward. We hope that in each
case the context will make clear which set \(e_{j}\) is an element of.) Let \(M_{X}\) and \(M_{Y}\) be Heuberger matrices associated to \(X\) and \(Y\), respectively. To indicate that \(\tau\colon\mathbb{Z}^{m}\to\mathbb{Z}^{\ell}\) is a group homomorphism with \(\tau(S)\subset T\), we introduce the following notation:
\[(M_{X})_{X}^{\text{SACG}}\xrightarrow[\tau]{\oplus}(M_{Y})_{Y}^{\text{SACG}}.\]
It is often the case that the mapping \(\tau\) can be inferred from the context, and for that reason we frequently omit it.
From the Heuberger matrices associated to standardized abelian Cayley graphs, we can easily construct an assortment of graph homomorphisms. Any group homomorphism \(\tau\) from \(\mathbb{Z}^{m}/H\) to \(\mathbb{Z}^{\ell}/J\) is uniquely determined by the images of \(e_{1},\dots,e_{m}\). (By an abuse of notation, we write \(e_{j}\) here when we more properly should write the coset \(H+e_{j}\). We shall adopt this convention from now on and hope that no confusion will result.) Let \(y_{j}\) be the \(j\)th column of \(M_{X}\). The mapping is well-defined if and only if
\[y_{1j}\tau(e_{1})+\dots+y_{mj}\tau(e_{m})\in J\text{ for all }j=1,\dots,r. \tag{2}\]
Moreover, the requirement that \(\tau\) be a graph homomorphism implies that
\[\text{for all }i=1,\dots,m,\text{ we have that }\tau(e_{i})=\pm e_{k}\text{ for some }k=1,\dots,\ell. \tag{3}\]
Conversely, one can verify that any function \(\tau\) on \(\{e_{1},\dots,e_{m}\}\) satisfying (2) and (3) can be extended to a well-defined graph homomorphism. The following lemma catalogues several standard graph homomorphisms we obtain in this manner.
**Lemma 2.10**.:
1. _Every isomorphism from Lemma_ 2.6 _defines a graph homomorphism._
2. _We obtain a graph homomorphism by reducing a column by a common factor. To be precise, for any integer_ \(a\)_, any_ \(y_{1},\dots,y_{r}\in\mathbb{Z}^{m}\)_, and any_ \(j=1,\dots,r\)_, we have_ \[\begin{pmatrix}y_{11}&\dots&ay_{1j}&\dots&y_{1r}\\ \vdots&\ddots&\vdots&\ddots&\vdots\\ y_{m1}&\dots&ay_{mj}&\dots&y_{mr}\end{pmatrix}^{SACG}\xrightarrow{\oplus}\begin{pmatrix}y_{11}&\dots&y_{1j}&\dots&y_{1r}\\ \vdots&\ddots&\vdots&\ddots&\vdots\\ y_{m1}&\dots&y_{mj}&\dots&y_{mr}\end{pmatrix}^{SACG}.\] _Here the mapping is given by_ \(e_{j}\mapsto e_{j}\)_._
3. _We obtain a graph homomorphism when we "collapse" the top two rows by adding them. We can write this in
terms of the associated matrices as follows:_ \[\begin{pmatrix}y_{11}&\cdots&y_{1r}\\ y_{21}&\cdots&y_{2r}\\ \vdots&\ddots&\vdots\\ y_{m1}&\cdots&y_{mr}\end{pmatrix}_{X}^{SACG}\xrightarrow{\oplus}\begin{pmatrix}y_{11}+y_{21}&\cdots&y_{1r}+y_{2r}\\ \vdots&\ddots&\vdots\\ y_{m1}&\cdots&y_{mr}\end{pmatrix}_{Y}^{SACG}.\] _That is, the top row of_ \(M_{Y}\) _equals the sum of the top two rows of_ \(M_{X}\)_. Here the mapping is given by_ \(e_{1},e_{2}\mapsto e_{1}\) _and_ \(e_{j}\mapsto e_{j-1}\) _for all_ \(j\geq 3\)_._
4. _We obtain a graph homomorphism by appending an arbitrary column, then mapping_ \(e_{j}\mapsto e_{j}\) _for all_ \(j\)_. In other words, for any_ \(y_{1},\ldots,y_{r+1}\in\mathbb{Z}^{m}\)_, we have_ \[\begin{pmatrix}y_{11}&\cdots&y_{1r}\\ \vdots&\ddots&\vdots\\ y_{m1}&\cdots&y_{mr}\end{pmatrix}^{SACG}\xrightarrow{\oplus}\begin{pmatrix}y_{11}&\cdots&y_{1r}&y_{1,r+1}\\ \vdots&\ddots&\vdots&\vdots\\ y_{m1}&\cdots&y_{mr}&y_{m,r+1}\end{pmatrix}^{SACG}.\]
5. _We obtain a graph homomorphism by appending a row of all zeroes, then mapping_ \(e_{j}\mapsto e_{j}\) _for_ \(j=1,\ldots,m\)_. In other words, for any_ \(y_{1},\ldots,y_{r}\in\mathbb{Z}^{m}\)_, we have_ \[\begin{pmatrix}y_{11}&\cdots&y_{1r}\\ \vdots&\ddots&\vdots\\ y_{m1}&\cdots&y_{mr}\end{pmatrix}^{SACG}\xrightarrow{\oplus}\begin{pmatrix}y_{11}&\cdots&y_{1r}\\ \vdots&\ddots&\vdots\\ y_{m1}&\cdots&y_{mr}\\ 0&\cdots&0\end{pmatrix}^{SACG}.\]
6. _Any composition of the above graph homomorphisms is again a graph homomorphism._
We frequently compose "permute rows" and "multiply a row by \(-1\)" isomorphisms with a "sum the top two rows" homomorphism. Each row of the "target matrix" is then a sum of rows, or their negatives, from the "source matrix."
### Graph homomorphisms and chromatic numbers of standardized abelian Cayley graphs
The homomorphisms of the previous subsection, together with Lemma 2.9, will help us find bounds on chromatic numbers in many cases. As a first example of this technique, we observe that we can immediately determine from an associated Heuberger matrix whether a standardized abelian Cayley graph is bipartite.
**Lemma 2.11**.: _Let \(y_{1},\ldots,y_{r}\in\mathbb{Z}^{m}\). For all \(j=1,\ldots,r\), let \(y_{j}=(y_{1j},\ldots,y_{mj})^{t}\). Suppose we have \((y_{1}\cdots\,y_{r})_{X}^{SACG}\). Then \(\chi(X)=2\) if and only if \(s_{j}=y_{1j}+\cdots+y_{mj}\) is even for all \(j\). In other words, \(X\) is bipartite if and only if all column sums are even._
Proof.: First, suppose all column sums are even. If \(s_{j}\neq 0\) for at least one value of \(j\), then we have
\[(y_{1}\cdots y_{r})_{X}^{\text{SACG}}\xrightarrow{\oplus}(s_{1}\cdots s_{r})^{ \text{SACG}}\xrightarrow{\oplus}(2\cdots 2\ 0\cdots 0)^{\text{SACG}}=(2)_{Y}^{\text{SACG}}.\]
Here the first homomorphism comes from summing all of the rows, and the second comes from permuting columns to move all zeroes to the right, then reducing each nonzero column by a factor. All columns of \((2\cdots 2\ 0\cdots 0)\) are in the \(\mathbb{Z}\)-span of the first column, whence we achieve the final equality by deleting all columns except the first column. The graph \(Y\) is a path of length \(1\), hence \(2\)-colorable. If \(s_{1}=\cdots=s_{r}=0\), then in a similar way we have a homomorphism to \((0)_{Y}^{\text{SACG}}\), which is a doubly infinite path graph, hence also \(2\)-colorable. By Lemma 2.9, we have that \(\chi(X)\leq 2\). Since \(X\) contains at least one edge, it follows that \(\chi(X)=2\).
Conversely, suppose that \(s_{j}\) is odd for some \(j\). Starting at any vertex, we obtain an odd cycle by taking \(y_{1j}\) steps along \(H+e_{1}\), then \(y_{2j}\) steps along \(H+e_{2}\), and so on, finally taking \(y_{mj}\) steps along \(H+e_{m}\). (By "taking a step along \(q\)," we mean moving from \(v\) to \(v+q\). If \(y_{kj}\) is negative, then by "taking \(y_{kj}\) steps along \(H+e_{k}\)," we mean taking \(-y_{kj}\) steps along \(H-e_{k}\).)
A nearly identical proof shows that whenever the column sums are not relatively prime, the graph is \(3\)-colorable.
**Lemma 2.12**.: _Let \(y_{1},\ldots,y_{r}\in\mathbb{Z}^{m}\). For all \(j=1,\ldots,r\), let \(y_{j}=(y_{1j},\ldots,y_{mj})^{t}\), and let \(s_{j}=y_{1j}+\cdots+y_{mj}\). Suppose we have \((y_{1}\cdots y_{r})_{X}^{\text{SACG}}\). Suppose \(s_{j}\neq 0\) for some \(j\). If \(e=\gcd(s_{1},\ldots,s_{r})>1\), then \(\chi(X)\leq 3\)._
Proof.: We have
\[(y_{1}\cdots y_{r})_{X}^{\text{SACG}}\xrightarrow{\oplus}(s_{1}\cdots s_{r}) ^{\text{SACG}}\xrightarrow{\oplus}(e\cdots e\ 0\cdots 0)^{\text{SACG}}=(e)_{Y}^{\text{SACG}}.\]
The graph \(Y\) is a cycle of length \(e\) when \(e\geq 3\), and a single edge when \(e=2\); in either case it is \(3\)-colorable. By Lemma 2.9, we have that \(\chi(X)\leq 3\).
We require in the preceding lemma that at least one of the column sums is not zero so as to guarantee that \(e\) is defined. If all column sums are zero, then \(\chi(X)=2\) by Lemma 2.11.
Let \(\omega\in\mathbb{C}\). Recall that the _minimal polynomial of \(\omega\) over the integers_, denoted \(\min_{\mathbb{Z}}\omega\), is defined as \(\min_{\mathbb{Z}}\omega=k\min_{\mathbb{Q}}\omega\), where \(\min_{\mathbb{Q}}\omega\) is the minimal polynomial of \(\omega\) over the rational numbers, and \(k\) is the smallest positive integer such that \(k\min_{\mathbb{Q}}\omega\) has integer coefficients.
_Example 2.13_.: In this example, we compute the chromatic number of a certain infinite unit-distance graph in the plane. Let \(\omega=\left(\frac{5}{8},\frac{\sqrt{39}}{8}\right)\in\mathbb{R}^{2}.\) Observe that \(\omega\) is a unit vector in \(\mathbb{R}^{2}.\) Equivalently, identifying \(\mathbb{R}^{2}\) with the complex plane \(\mathbb{C}\) so that \(\omega=\frac{5}{8}+\frac{i\sqrt{39}}{8}\), we have that \(|\omega|=\left|\frac{5}{8}+\frac{i\sqrt{39}}{8}\right|=1\). Hence \(\omega^{2}\) and \(\omega^{3}\) are also unit vectors. Regarding \(\mathbb{R}^{2}\) as a group under addition, let \(G\) be the subgroup generated by \(\{1,\omega,\omega^{2},\omega^{3}\}\), and let \(W=\text{Cay}(G,\{\pm 1,\pm\omega,\pm\omega^{2},\pm\omega^{3}\})\). So \(W\) is a unit-distance graph. One can compute that \(\min_{\mathbb{Z}}\omega=4x^{2}-5x+4\). Using this fact, with some calculation
it can be shown that \(W\) is isomorphic to the standardized abelian Cayley graph
\[\begin{pmatrix}4&0\\ -5&4\\ 4&-5\\ 0&4\end{pmatrix}_{X}^{\text{SACG}}.\]
Both columns of \(M_{X}\) sum to \(3\). Therefore, by Lemmas 2.11 and 2.12, we have that \(\chi(W)=\chi(X)=3\).
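The computations cited in this example are easy to reproduce; the following SymPy sketch (illustrative only, with our own variable names) checks that \(\omega\) has unit modulus, recovers its minimal polynomial, and confirms that both column sums equal \(3\).

```python
from sympy import I, Rational, Symbol, minimal_polynomial, simplify, sqrt

x = Symbol('x')
omega = Rational(5, 8) + I * sqrt(39) / 8
print(simplify(abs(omega)))              # 1, so omega, omega^2, omega^3 are unit vectors
print(minimal_polynomial(omega, x))      # expected: 4*x**2 - 5*x + 4
cols = [(4, -5, 4, 0), (0, 4, -5, 4)]    # the two columns of M_X
print([sum(c) for c in cols])            # [3, 3]: odd, and gcd(3, 3) = 3 > 1
```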
_Example 2.14_.: Generalizing the previous example reveals a surprising connection between chromatic numbers and minimal polynomials. Let \(\omega\in\mathbb{C}\) be an algebraic number, not necessarily of unit modulus. Let
\[p(x)=\min_{\mathbb{Z}}\omega=c_{d}x^{d}+\cdots+c_{1}x+c_{0},\]
where \(d\) is the degree of \(p\). Define a graph \(W\) whose vertex set is \(\mathbb{C}\), where two vertices are adjacent if and only if their difference equals \(\omega^{n}\) for some nonnegative integer \(n\). We claim that if \(\chi(W)\geq 4\), then \(|p(1)|=|p(-1)|=1\).
To prove this claim, first observe that \(W\) is precisely \(\text{Cay}(\mathbb{C},S)\), where \(S=\{\omega^{n}\mid n\in\mathbb{Z}\text{ and }n\geq 0\}\). As discussed in the introduction, it follows from the de Bruijn-Erdos theorem that if \(W\) is finitely colorable, then for sufficiently large \(m\) we have that \(\chi(W)=\chi(X)\), where \(S_{m}=\{\omega^{n}\mid n\in\mathbb{Z}\text{ and }0\leq n\leq m-1\}\), and \(G\) is the subgroup of \(\mathbb{C}\) generated by \(S_{m}\), and \(X=\text{Cay}(G,S_{m})\). A straightforward induction proof shows that \(X\) is isomorphic to the standardized abelian Cayley graph \(Y\) with a Heuberger matrix
\[M_{Y}=\begin{pmatrix}c_{0}&c_{1}&\cdots&c_{d}&0&0&\cdots&0&\cdots&0\\ 0&c_{0}&\cdots&c_{d-1}&c_{d}&0&\cdots&0&\cdots&0\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\ 0&0&\cdots&0&0&0&\cdots&c_{0}&\cdots&c_{d}\end{pmatrix}^{t}.\]
Each column sum of \(M_{Y}\) equals \(p(1)\). Hence we must have \(|p(1)|=1\), else \(\chi(Y)\leq 3\) by Lemma 2.12. Similarly, by Lemma 2.6, we can multiply every other row of \(M_{Y}\) by \(-1\), giving us a matrix for an isomorphic graph. Each column sum of this new graph equals \(p(-1)\), and the result follows as before.
An identical argument shows that if \(|p(1)|\neq 1\) or \(|p(-1)|\neq 1\), then \(\chi(X)\leq 3\) for all \(m\).
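A quick way to see the column-sum claim concretely is to build the band matrix above for a specific polynomial; the sketch below (our own illustration, with an assumed helper name) uses \(p(x)=4x^{2}-5x+4\) from Example 2.13 with \(m=4\).

```python
def band_rows(coeffs, m):
    """Rows of the band matrix whose transpose is M_Y: row k is (0,...,0,c_0,...,c_d,0,...,0)."""
    d = len(coeffs) - 1
    return [[coeffs[j - k] if 0 <= j - k <= d else 0 for j in range(m)]
            for k in range(m - d)]

coeffs = [4, -5, 4]                        # c_0, c_1, c_2 for p(x) = 4x^2 - 5x + 4
rows = band_rows(coeffs, 4)
print(rows)                                # [[4, -5, 4, 0], [0, 4, -5, 4]]
print([sum(r) for r in rows])              # column sums of M_Y: each equals p(1) = 3
# Multiplying every other row of M_Y by -1 turns each column sum into +-p(-1):
print([sum((-1) ** j * t for j, t in enumerate(r)) for r in rows])   # [13, -13]
```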
We do not know whether this result is vacuous. Hence we pose the following question: Does there exist an algebraic number \(\omega\in\mathbb{C}\) such that the Cayley graph of \(\mathbb{C}\) generated by the non-negative powers of \(\omega\) is not \(3\)-colorable?
The number of columns in a Heuberger matrix for one of our graphs can be taken to be the rank \(r\) of the subgroup \(H\), that is, the cardinality of a minimal generating set for \(H\). The difficulty of determining the chromatic number seems to increase as \(r\) increases. In this paper we consider cases where \(r\) is small. The cases \(r=0\) and \(r=1\) are fairly
straightforward; the case \(r=2\) is dealt with in [6] for \(m=2\) and \(m=3\); and the cases \(r=2,m\geq 4\) as well as \(r\geq 3\) are left for future investigation. (Here \(m\) is the dimension, i.e., the number of rows in the matrix.)
When \(r=0\), we have that \(H\) is the trivial subgroup, so \(M_{X}\) is a zero matrix. In this case we have \(\chi(X)=2\), by Lemma 2.11.
The case of \(r=1\), i.e., when \(M_{X}\) has just one column, is handled by the so-called "Tomato Cage Theorem." As discussed in [4], the reason for the name of this theorem is as follows. When \(r=1\) and \(m=2\), we can visualize the corresponding graph by taking the infinite grid graph with vertex set \(\mathbb{Z}^{2}\) and "wrapping it around itself" to form a cylindrical mesh, which reminded the authors of a tomato cage.
**Theorem 2.15** (Tomato Cage Theorem).: _Let \(y_{1}=(y_{11},\ldots,y_{m1})^{t}\in\mathbb{Z}^{m}\), and define the standardized abelian Cayley graph \(X\) by \((y_{1})_{X}^{\text{SACG}}\). If \(y_{1}=\pm e_{j}\) for some \(j\), then \(X\) has loops and hence cannot be properly colored. Otherwise,_
\[\chi(X)=\begin{cases}2&\text{ if }y_{1}\text{ has an even number of odd entries}\\ 3&\text{ if }y_{1}\text{ has an odd number of odd entries.}\end{cases}\]
Proof.: It follows immediately from the definitions that if \(y_{1}=\pm e_{j}\) for some \(j\), then \(X\) has loops. Suppose then that \(y_{1}\neq\pm e_{j}\) for all \(j\). Let \(s=|y_{11}|+\cdots+|y_{m1}|\). Observe that \(s\) is even if and only if \(s_{1}=y_{11}+\cdots+y_{m1}\) is even if and only if \(y_{1}\) has an even number of odd entries. So by Lemma 2.11, we have that \(\chi(X)=2\) if and only if \(y_{1}\) has an even number of odd entries. If not, then \(s\) is odd. Also, because \(y_{1}\neq\pm e_{j}\) for all \(j\), we have that \(s>1\). We have
\[\begin{pmatrix}y_{11}\\ \vdots\\ y_{m1}\end{pmatrix}_{X}^{\text{SACG}}\xrightarrow{\oplus}\begin{pmatrix}|y_{1 1}|\\ \vdots\\ |y_{m1}|\end{pmatrix}^{\text{SACG}}\xrightarrow{\oplus}(s)_{Y}^{\text{SACG}}.\]
The first homomorphism comes from multiplying rows by \(-1\) as needed; the second comes from summing all of the rows. Note that \(Y\) is a cycle of length \(s\), hence \(3\)-colorable. The result follows from Lemma 2.9.
_Example 2.16_.: The following essentially appears as Theorem 10 in [13]. Let \(a,b\) be coprime positive integers and let \(X\) be the integer distance graph on \(\mathbb{Z}\) with respect to \(D=\{a,b\}\). That is, \(X=\text{Cay}(\mathbb{Z},\{\pm a,\pm b\})\). Let \(x=(-b,a)^{t}\). Then \(\text{Cay}(\mathbb{Z}^{2}/\langle x\rangle,\{\langle x\rangle\pm e_{1},\langle x\rangle\pm e_{2}\})\cong X\), as can be seen by \(e_{1}\mapsto a,e_{2}\mapsto b\). Applying the Tomato Cage Theorem, we see that \(\chi(X)=2\) if and only if \(a\) and \(b\) have the same parity (that is, both are odd, since they are coprime), and that \(\chi(X)=3\) otherwise.
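The Tomato Cage Theorem gives an immediate decision procedure, which we sketch below and apply to the single-column matrices arising in Example 2.16; the specific values of \(a\) and \(b\) are our own illustrative choices.

```python
def tomato_cage_chi(y):
    """Chromatic number of the SACG with single-column Heuberger matrix y (Theorem 2.15)."""
    if sum(1 for t in y if t != 0) == 1 and sum(abs(t) for t in y) == 1:
        return None                       # y = +-e_j: the graph has loops, no proper coloring
    return 2 if sum(y) % 2 == 0 else 3

# Example 2.16 with x = (-b, a)^t:
print(tomato_cage_chi([-3, 5]))   # a = 5, b = 3: both odd, chi = 2
print(tomato_cage_chi([-4, 5]))   # a = 5, b = 4: opposite parity, chi = 3
```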
### Acknowledgments
The authors wish to thank Jaan Parts for many helpful suggestions--in particular, for coining the term "Heuberger matrix." |
2303.11078 | Model Barrier: A Compact Un-Transferable Isolation Domain for Model
Intellectual Property Protection | As scientific and technological advancements result from human intellectual
labor and computational costs, protecting model intellectual property (IP) has
become increasingly important to encourage model creators and owners. Model IP
protection involves preventing the use of well-trained models on unauthorized
domains. To address this issue, we propose a novel approach called Compact
Un-Transferable Isolation Domain (CUTI-domain), which acts as a barrier to
block illegal transfers from authorized to unauthorized domains. Specifically,
CUTI-domain blocks cross-domain transfers by highlighting the private style
features of the authorized domain, leading to recognition failure on
unauthorized domains with irrelevant private style features. Moreover, we
provide two solutions for using CUTI-domain depending on whether the
unauthorized domain is known or not: target-specified CUTI-domain and
target-free CUTI-domain. Our comprehensive experimental results on four digit
datasets, CIFAR10 & STL10, and VisDA-2017 dataset demonstrate that CUTI-domain
can be easily implemented as a plug-and-play module with different backbones,
providing an efficient solution for model IP protection. | Lianyu Wang, Meng Wang, Daoqiang Zhang, Huazhu Fu | 2023-03-20T13:07:11Z | http://arxiv.org/abs/2303.11078v1 | # Model Barrier: A Compact Un-Transferable Isolation Domain
###### Abstract
As scientific and technological advancements result from human intellectual labor and computational costs, protecting model intellectual property (IP) has become increasingly important to encourage model creators and owners. Model IP protection involves preventing the use of well-trained models on unauthorized domains. To address this issue, we propose a novel approach called Compact Un-Transferable Isolation Domain (CUTI-domain), which acts as a barrier to block illegal transfers from authorized to unauthorized domains. Specifically, CUTI-domain blocks cross-domain transfers by highlighting the private style features of the authorized domain, leading to recognition failure on unauthorized domains with irrelevant private style features. Moreover, we provide two solutions for using CUTI-domain depending on whether the unauthorized domain is known or not: target-specified CUTI-domain and target-free CUTI-domain. Our comprehensive experimental results on four digit datasets, CIFAR10 & STL10, and VisDA-2017 dataset demonstrate that CUTI-domain can be easily implemented as a plug-and-play module with different backbones, providing an efficient solution for model IP protection.
## 1 Introduction
The recent success of deep learning models heavily relies on massive amounts of high-quality data, specialized training resources, and elaborate manual fine-tuning [55, 56, 5, 5]. Obtaining a well-trained deep model is both time-consuming and labor-intensive [8]. Therefore, it should be protected as a kind of scientific and technological achievement intellectual property (IP) [9, 54], thereby stimulating innovation enthusiasm in the community and further promoting the development of deep learning. As shown in Fig. 1, in supervised learning (SL), the model owner uses the overall features of the authorized domain (pink square) for training, obtains a high-performance model, and grants the right to use it to a specific user. Authorized users can use the model on the authorized domain to obtain correct predictions. However, since the model is trained with overall features, its potential feature generalization region is large and may cover some unauthorized domains. Therefore, there is a natural pathway between the authorized domain and the unauthorized domain, and the released high-performance model obtained by SL can be illegally transferred to the unauthorized domain (blue squares) through methods such as domain adaptation [15, 16], and domain generalization [17, 18], to obtain correct prediction results. This presents a challenge in protecting well-trained models. One of the most concerning threats raised is "Will releasing
Figure 1: Model IP protection with our proposed CUTI-domain. **Left**: In standard supervised learning (SL), the model owner trains a high-performance model on the authorized domain (pink square) and then authorizes a specific user. Authorized users have the right to use the model on the authorized domain to get the correct prediction. However, a stealer can easily access the model on the unauthorized domain (blue square), which violates the legitimate rights and interests of the model owner. **Right**: Our method constructs a CUTI-domain between the authorized and unauthorized domains, which could block the illegal transferring and lead to a wrong prediction for unauthorized domains.
the model make it easy for the main competitor to copy this new feature and hurt owner differentiation in the market?" Thus, the model IP protection has been proposed to defend against model stealing or unauthorized use.
A comprehensive intellectual property (IP) protection strategy for deep learning models should consider both ownership verification and applicability authorization [10, 57]. Ownership verification involves verifying who has permission to use the deep model by embedding watermarks [11, 12], model fingerprint [13], and predefined triggers [14]. The model owner can grant usage permission to a specific user, and any other users will be infringing on the owner's IP rights. However, an authorized user can easily transfer the model to an unauthorized user, so the model owner must add special marks during training to identify and verify ownership. Moreover, these methods are vulnerable to fine-tuning, classifier retraining, elastic weight consolidation algorithms, and watermark overwriting, which can weaken the model's protection. On the other hand, applicability authorization involves verifying the model's usage scenarios. Users with permission can apply the deep model for the tasks specified by the model owner, and it is an infringement to use it for unauthorized tasks [10]. However, users can easily transfer high-performance models to other similar tasks to save costs, which is a common and hidden infringement. Therefore, if the performance of the model can be limited to the tasks specified by the owner and reduced on other similar tasks, unauthorized users will lose confidence in stealing and re-authoring the model. To achieve this, a non-transferable learning (NTL) method is proposed [10], which uses an estimator with a characteristic kernel from Reproducing Kernel Hilbert Spaces to approximate and increase the maximum mean difference between two distributions on finite samples. However, the authors only considered using limited samples to increase the mean distribution difference of features between domains and ignored outliers. The convergence region of NTL is not tight enough. Moreover, the calculation of the maximum mean difference is class-independent, which reduces the model's feature recognition ability in the authorized domain to a certain extent.
To address the challenges outlined above, we **first** propose a novel approach called the Compact Un-Transferable Isolation (CUTI) domain to prevent illegal transferring of deep models from authorized to unauthorized domains. Our approach considers the overall feature of each domain, consisting of two components: shared features and private features. Shared features refer to semantic features, while private features include stylistic cues such as perspective, texture, saturation, brightness, background environment, and so on. We emphasize the private features of the authorized domain and construct a CUTI-domain as a model barrier with similar private style features. This approach prevents illegal transfers to unauthorized domains with new private style features, thereby leading to wrong predictions. **Furthermore**, we also provide two CUTI-domain solutions for different scenarios. When the unauthorized domain is known, we propose the target-specified CUTI-domain, where the model is trained with a combination of authorized, CUTI, and unauthorized domains. When the unauthorized domain is unknown, we use the target-free CUTI-domain, which employs a generator to synthesize unauthorized samples that replace the unauthorized domain in model training. **At last**, our comprehensive experimental results on four digit datasets, CIFAR10 & STL10, and VisDA-2017 demonstrate that our proposed CUTI-domain effectively reduces the recognition ability on unauthorized domains while maintaining strong recognition on authorized domains. Moreover, as a plug-and-play module, our CUTI-domain can be easily implemented within different backbones and provide efficient solutions.*
Footnote *: [https://github.com/LyWang12/CUTI-Domain](https://github.com/LyWang12/CUTI-Domain).
## 2 Related Work
### Model IP Protection
There are currently two main categories of methods for IP protection, including ownership verification and applicability authorization. For ownership verification, the most classic method is watermarking embedding [19]. Kuribayashi [20] proposed a quantifiable watermark embedding method, which reduces the variation caused by embedding watermarks. Adi [21] proposed a tracking mechanism in a black-box way. However, such watermark embedding approaches have been proved to be susceptible to some watermark removal and watermark overwriting methods. In our experiments, a simple watermark is embedded into the model for ownership verification by triggering mis-classification. Comprehensive experimental results demonstrate that the proposed CUTI-domain is resistant to the common watermark removal methods.
Applicability authorization is derived from usage authorization. Usage authorization usually uses a preset private key to encrypt the whole network or part of it. Only authorized users can obtain the private key and then use the model. There are many advanced methods for usage authorization. For example, Alam [24] proposed an explicit locking mechanism for a lightweight deep neural network, utilizing S-Boxes with good cryptographic properties to lock each training parameter of a DNN model. Without knowledge of the legitimate private key, unauthorized users only obtain severely degraded model accuracy. Song [25] analyzed and calculated the critical weight of the deep neural network model, and significantly reduced the time cost by encrypting the critical weight to lock the deep neural network model against unauthorized use.
Wang _et al_. [10] proposed data-based applicability authorization named NTL, which preserves model performance on authorized data while degrading model performance in other data domains. Compared to the above, we construct a new class-dependent CUTI-domain with infinite samples whose features are more similar to the source domain. By decreasing the performance of the model on the CUTI-domain and the target domain, the generalization region of the model can be compressed, thereby constraining the model performance within the authorized source domain.
### Domain Transferring
In practice, domain gaps may arise due to different data collection scenarios in different domains. Domain adaptation (DA) and domain generalization (DG) are common solutions used to alleviate domain gaps [58, 59]. DA refers to transferring a model from a labeled source domain to an unlabeled but relevant target domain where the target domain's data is accessible during the training process [26]. DG differs from DA in that the target domain is inaccessible during model training [27, 28]. DANN [29] is a classic DA method that introduced a gradient reversal layer and a domain discriminator to confuse the feature distributions of the two domains. Subsequently, CDAN [30] further introduced categorical information entropy into the domain discriminator to alleviate the class mismatch problem. For DG, Tobin _et al_. [31] used domain randomization to generate more training data from simulated environments for generalization in real environments. Prakash _et al_. [32] further considered the structure of the scene when randomly placing objects for data generation, enabling the neural network to learn how to utilize context when detecting objects. Recently, some methods have been proposed and successfully applied to cross-domain applications by seeking an intermediate state between the source domain and the target domain, emphasizing the similarity between domains to improve the model's transferability [33, 34, 35, 36, 37]. In contrast, this paper aims to seek an intermediate state that highlights the difference between the authorized source domain and the unauthorized target domain, constraining the feature transferability of the model and protecting the scientific and technological achievement IP of the model owner.
## 3 Methodology
In this section, we first present our proposed CUTI-domain with the aim of developing a solution that limits the performance of the model to authorized source domains and reduces feature recognition capabilities on the unauthorized target domain. Then, depending on whether the unauthorized target domain is known or not, we provide two solutions, target-specified CUTI-domain and target-free CUTI-domain, for protecting the model IP.
### Compact Un-Transferable Isolation Domain
In the deep neural network model, the overall features extracted by the feature extractor include two abstract components, _i.e_., semantic features and style features. Semantic features reflect the structural information of samples and play a leading role in sample recognition, while style features refer to a series of weakly related clues, such as perspective, texture, saturation, brightness, and background environment. For different domains with the same task, semantic features are shared, while style features are private. Most of the previous DA and DG works have been devoted to improving feature transferability between domains, _i.e_., strengthening the focus of the model on shared features while suppressing seemingly disturbing private style features. However, to protect the intellectual property of the model, this paper aims to limit the feature recognition ability of the model by highlighting private style features of the source domain through style transfer, thus leading to the failure of recognition on target domains that contain irrelevant private styles.
Style transfer techniques suggest that styles are homogeneous and composed of repeated structural motifs. Two images can be considered similar in style if the features extracted by a trained classifier share the same statistics [38, 39]. First- and second-order statistics are often used as style features due to their computational efficiency. These statistics refer to the mean and variance of the extracted features. On the other hand, semantic features only contain pure semantic information and exclude any style information. Similar to [40], the semantic feature \(f_{s}\) of the extracted feature \(f\) can be obtained by removing the style features, as follows:
\[f_{s}=\frac{f-\mu(f)}{\sigma(f)}, \tag{1}\]
where \(\mu(f)\) and \(\sigma(f)\) denote the mean and variance of \(f\). Furthermore, style can be re-assigned by \(f_{s}\gamma+\beta\), where \(\gamma\) and \(\beta\) are learned parameters. Afterward, Huang _et al_. [38] further explored adapting \(f\) to an arbitrarily given style by using the style features of another extracted feature instead of learned parameters.
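The normalization in Eq. (1) and the AdaIN-style re-styling it enables can be written in a few lines of tensor code; the following is a minimal sketch, where the shapes, the epsilon term, and the function names are our own choices rather than the authors' implementation.

```python
import torch

def split_style(f: torch.Tensor, eps: float = 1e-6):
    """Split a feature map f of shape (B, C, H, W) into its semantic part and style statistics."""
    mu = f.mean(dim=(2, 3), keepdim=True)
    sigma = f.std(dim=(2, 3), keepdim=True) + eps
    return (f - mu) / sigma, (mu, sigma)        # Eq. (1): f_s = (f - mu(f)) / sigma(f)

def restyle(f_content: torch.Tensor, f_style: torch.Tensor) -> torch.Tensor:
    """Give f_content the first- and second-order statistics of f_style (AdaIN-style transfer)."""
    semantic, _ = split_style(f_content)
    _, (mu_s, sigma_s) = split_style(f_style)
    return semantic * sigma_s + mu_s
```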
Drawing inspiration from these ideas, we propose a novel CUTI-domain design that incorporates style features from the source domain. Specifically, we randomly extract style features from the source domain and fuse them with the overall features of the CUTI-domain, making the private style features of the CUTI-domain more similar to those of the source domain. By doing so, we reduce the model's ability to recognize features on both the CUTI-domain and the target domain, thereby implicitly blocking the pathway between the source and target domains and limiting the model's performance to the source domains.
Our CUTI-domain is generated by the CUTI-domain generator, which is a lightweight, plug-and-play module, as shown in Fig. 2. \(f_{s}^{l}\) and \(f_{i}^{l}\) represent the deep features of the \(l\)-th feature extractor block in the source domain and the CUTI-domain, respectively. First, \(f_{s}^{l}\) and \(f_{i}^{l}\) are sent into the CUTI-domain generator in parallel, and then the mean \(\mu^{l}\) and variance \(\sigma^{l}\) of \(f_{s}^{l}\) are calculated channel-wise as private style features, followed by a \(1\times 1\) convolution layer \(Conv\). Next, \(Conv(\sigma^{l})\) and \(Conv(\mu^{l})\) are respectively multiplied with and added to \(f_{i}^{l}\) channel-wise as:
\[f_{i}^{l}\leftarrow(f_{i}^{l}\bigotimes Conv(\sigma^{l}))\bigoplus Conv(\mu^{l }). \tag{2}\]
As can be seen in Fig. 2, the private style features of updated \(f_{i}^{l}\) are closer to those of \(f_{s}^{l}\), while retaining its original semantic features. Through continuous computation, CUTI-domain generators can construct a labeled CUTI-domain containing a similar private style to the source domain.
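A minimal PyTorch reading of the generator described above and of Eq. (2) is sketched below; the layer sizes, the module name, and the use of per-channel standard deviation are our assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class CUTIDomainGenerator(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions applied to the source-domain style statistics
        self.conv_mu = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv_sigma = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f_s: torch.Tensor, f_i: torch.Tensor) -> torch.Tensor:
        # channel-wise mean / std of the source feature map (the "private style")
        mu = f_s.mean(dim=(2, 3), keepdim=True)
        sigma = f_s.std(dim=(2, 3), keepdim=True)
        # Eq. (2): f_i <- f_i * Conv(sigma) + Conv(mu), broadcast over H x W
        return f_i * self.conv_sigma(sigma) + self.conv_mu(mu)

# usage: fuse source style into the CUTI-domain feature after a backbone block
gen = CUTIDomainGenerator(channels=64)
f_s = torch.randn(8, 64, 32, 32)   # source-domain features
f_i = torch.randn(8, 64, 32, 32)   # CUTI-domain features
print(gen(f_s, f_i).shape)         # torch.Size([8, 64, 32, 32])
```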
### Model IP Protection with CUTI-domain
#### 3.2.1 Target-Specified CUTI-Domain
We first introduce how to utilize our CUTI-domain with a given unauthorized target domain. Fig. 3 illustrates the whole framework trained with our proposed CUTI-domain, which consists of \(L\) feature extractor blocks, \(L\) CUTI-domain generators and a classifier. \(x_{s}\), \(x_{i}\) and \(x_{t}\) denote the data of the source domain, CUTI-domain and target domain, respectively. During training, when \(epoch=2e\), _i.e_., the epoch is an even number, \(x_{s}\) and \(x_{i}\) are fed into the feature extractor blocks in parallel, each followed by a CUTI-domain generator. The classifier at the end of the network is used to predict the category of the sample, and the prediction results are denoted by \(p_{s}\) and \(p_{i}\), respectively. When \(epoch=2e+1\), _i.e_., the epoch is an odd number, \(x_{s}\) and \(x_{t}\) are input into the network in parallel without CUTI-domain generators, and \(p_{s}\) and \(p_{t}\) denote their predicted results.
Based on the designed framework, an alternating loss function \(\mathcal{L}\) that switches according to the epoch is utilized:
\[\mathcal{L}=\begin{cases}KL(p_{s}||y_{s})-KL(p_{i}||y_{i}),&\text{if }epoch=2e,\\ KL(p_{s}||y_{s})-KL(p_{t}||y_{t}),&\text{if }epoch=2e+1,\end{cases} \tag{3}\]
where \(KL(\cdot)\) stands for the Kullback-Leibler divergence. By reducing the \(KL(\cdot)\) between prediction results and the labels on the source domain, and expanding the \(KL(\cdot)\) between prediction results and the labels on the CUTI-domain/target domain, the feature recognition ability of the model in the source domain can be gradually improved, and the ability in the CUTI-domain/target domain can be gradually reduced, thus effectively constraining the performance of the model within authorized source domains for IP protection. Finally, we summarize the strategy of our proposed target-specified CUTI-domain in Algorithm 1.
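A hedged sketch of the alternating objective in Eq. (3) is given below; we implement each \(KL(p\,\|\,y)\) term with the standard cross-entropy against one-hot labels, and the clamp on the negative term is our own stabilizer, not something specified in the paper.

```python
import torch
import torch.nn.functional as F

def cuti_objective(logits_src, y_src, logits_other, y_other, clamp_max: float = 1.0):
    """Source-domain term minus CUTI/target-domain term, as in Eq. (3)."""
    # with one-hot labels the KL term reduces (up to convention) to cross-entropy
    src_term = F.cross_entropy(logits_src, y_src)
    other_term = F.cross_entropy(logits_other, y_other)
    # clamp keeps the maximized term bounded so it cannot dominate training (our choice)
    return src_term - torch.clamp(other_term, max=clamp_max)

# even epochs: logits_other comes from the CUTI-domain batch;
# odd epochs:  logits_other comes from the known target-domain batch.
```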
During training, we initialize the CUTI-domain with the target domain training set, and feed data of source domain training set \(x_{s}\), CUTI-domain \(x_{i}\), and target domain training set \(x_{t}\) into the network in parallel to train the model as in Algorithm 1. At test time, model performance is evaluated on the source domain test set and the target domain test set. In an ideal situation, the model can maintain high sample recognition ability on the authorized source domain, but achieve poor performance on the unauthorized target domain.
#### 3.2.2 Target-Free CUTI-Domain
Sometimes, the unauthorized target domain is unknown. In this case, we cannot directly feed the target domain and CUTI-domain into the network for model training. To solve this, a synthesized target domain could be utilized. For example, Wang _et al_. [10] designed a GAN-based method by freezing parameters to generate synthesized samples in different directions in place of the target domain train set. Al
Figure 3: Illustration of the framework trained with our proposed CUTI-domain. The whole framework consists of \(L\) feature extractor blocks, \(L\) CUTI-domain generators, and a classifier. When the epoch is equal to an even/odd number, samples of the source domain \(x_{s}\) and CUTI-domain \(x_{i}\) / target domain \(x_{t}\) are fed into the feature extractor block in parallel. \(p_{s}\), \(p_{i}\) and \(p_{t}\) denote their prediction results.
Figure 2: Illustration of our proposed CUTI-domain generator. The output feature of the source domain \(f_{s}^{l}\) and the CUTI-domain \(f_{i}^{l}\) in the \(i\)-th feature extractor block are sent to the CUTI-domain generator, and then the mean \(\mu^{l}\) and variance \(\sigma^{l}\) of \(f_{s}^{l}\) is fused with \(f_{i}^{l}\), so that the private style features of the updated \(f_{i}^{l}\) are closer to the feature of source domain \(f_{s}^{l}\) while the original semantics features are maintained.
though their method is able to generate high-quality synthesized samples, the direction of the generator is specified and there are certain omissions. Huang [38] designed an adaptive instance normalization (AdaIN) method based on GAN, which can generate synthesized images with a specific style for a content image.
To take full advantage, we add Gaussian noise to AdaIN to obtain synthesized samples with random styles. Finally, we mix the synthesized samples generated by the above two methods (, GAN and AdaIN) to replace the target domain training set and initialize the CUTI-domain. The framework is consistent with the target-specified CUTI-domain and the training process is detailed in the Appendix. During testing, the model is evaluated on source domain test set and other unknown domain with the same task. Our goal is not to design GANs to generate high-quality synthesized images, but to evaluate the ability of CUTI-domain to block illegal feature transfer in the context of synthesized images. We still focus on reducing the feature recognition ability of the model on unauthorized target domains and maintaining it on the authorized source domains.
## 4 Experiment
### Implementation Details
We evaluate our method on seven popular DA/DG benchmarks,, MNIST (MT) [41], USPS (US) [42], SVHN (SN) [43] and MNIST-M (MM) [44] are commonly used digit datasets, containing ten digits from 0 to 9 extracted from vary scenes; CIFAR10 [45] and STL10 [46] are all ten-class classification datasets. We follow the procedure of French et al. [47] to process the dataset such that the correspondence between them holds. VisDA-2017 [48] is a Synthetic-to-Real dataset containing training (T) and validation (V) sets from 12 categories. Following the general setup, we adopt accuracy (\(\%\)) as the performance metric of each task.
For tasks with different complexity, different backbones are adopted to compare with the method proposed by Wang (NTL) [10]. We used VGG-11 [49] for digit datasets, VGG-13 [49] for CIFAR10 and STL10, and VGG-19 [49] for VisDA. Since VGG contains five feature extractor blocks, \(L\) is set to 5 in this paper, and CUTI-domain generator is deployed after each max-pooling layer in the block. Pre-trained models are used for all backbones for a fair comparison. The implementation of our comprehensive experiments is based on the public platform Pytorch and an NVIDIA GeForce RTX 3090 GPU with 24GB of memory. The batch size for each domain is set to 32.
### Result of Target-Specified CUTI-Domain
We selected one as the source domain and one as the target domain from the digital datasets of four different domains, and constructed 16 transfer tasks, as shown in Table 1. The left of \(\Rightarrow\) represents the accuracy of the model trained on the source domain dataset using supervised learning (SL), and the right of \(\Rightarrow\) is the accuracy of CUTI-domain. CUTI source/target drops represent the drop (relative drop) of the proposed CUTI-domain in the source and target domains, respectively. As can be seen, the average drop of CUTI-domain on the target and source domains is 55.94 (84.94%) and 0.13 (0.13%), respectively. The last two columns represent the average performance degradation of NTL with values of 46.48 (76.34%) and 1.30 (1.39%), respectively. Compared with NTL, the decline of CUTI-domain in the target domain is higher, and the negative impact on the source domain is smaller. It can be inferred that CUTI-domain can better reduce the sample recognition ability of the model for the target domain, and the decreases on the source domain are slight.
Fig. 4 shows the results on CIFAR10 \(\rightarrow\) STL10, STL10 \(\rightarrow\) CIFAR10 and T \(\rightarrow\) V. The bars with different colors in each subgraph represent the accuracy of the corresponding method in the source domain, the accuracy in the target domain, and the degradation (relative degradation) of the model performance, respectively. Where the results of NTL are reproduced by its source code to obtain comparable experimental data. SL has the largest generalization region, resulting in the highest classification accuracy on the target domain. By blocking the pathway with model locker, we observe successful target domain reduction in accuracy for NTL and CUTI-domain, with higher degradation than SL. Meanwhile, regardless of the task, the degradation of CUTI-domain is higher than that of NTL, indicating that CUTI-domain can better compress the generalization region of the model.
### Result of Ownership Verification
In this section, we conduct ownership verification of model by triggering classification errors. Specifically, we add a regular backdoor-based model watermark patch on the authorized source domain dataset followed by NTL [10], and treat the processed source domain as the new unauthorized target domain. The model classification accuracy of SL and CUTI-domain on the authorized source domain without watermark patch and the unauthorized target domain with watermark patch are shown in Table 2. As shown in the second and third columns in the table, for SL, there is little difference in the accuracy before and after embedding the watermark patch, so the model is not sensitive to the watermark patch. While for CUTI-domain, after watermark patch embedding, the accuracy on unauthorized target domain is greatly reduced, and this difference in performance can be used to verify the ownership of the model. In addition, we also test the robustness of CUTI-domain using FTAL [21], RTAL [21], EWC [22], AU [22] and watermark overwriting, which are state-of-the-art model watermark re
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Source & \multicolumn{3}{c|}{Training Methods} & \multicolumn{3}{c|}{Watermark Removal Approaches on CUTI} & \multicolumn{2}{c}{Avg Drop\(\uparrow\)} \\ \cline{2-9} without & SL & CUTI & FTAL [21] & RTAL [21] & EWC [22] & AU [22] & Overwriting & \multirow{2}{*}{CUTI} & \multirow{2}{*}{NTL} \\ Patch & [Test w/o Watermark (\(\%\))] & & & [Test w/o Watermark (\(\%\))] & & & & \\ \hline MT & 99.0 / 99.3 & 11.3 / 99.1 & 9.0 / 100.0 & 9.4 / 100.0 & 9.7 / 100.0 & 9.0 / 100.0 & 9.4 / 96.2 & **89.9** & 88.4 \\ US & 99.8 / 99.8 & 7.7 / 99.8 & 8.0 / 100.0 & 8.7 / 100.0 & 9.4 / 100.0 & 9.4 / 100.0 & 8.7 / 98.6 & **90.9** & 85.7 \\ SN & 91.3 / 92.3 & 9.9 / 92.1 & 9.4 / 98.3 & 13.9 / 97.6 & 10.8 / 100.0 & 8.7 / 100.0 & 10.4 / 95.8 & **87.7** & 79.0 \\ MM & 96.6 / 96.0 & 16.8 / 96.0 & 14.3 / 95.4 & 24.0 / 98.6 & 14.6 / 100.0 & 14.6 / 100.0 & 14.9 / 95.8 & **81.5** & 77.3 \\ CIFAR & 83.3 / 75.1 & 10.7 / 86.8 & 14.9 / 97.9 & 14.9 / 93.8 & 14.9 / 100.0 & 9.4 / 97.2 & 16.7 / 90.3 & **81.7** & 74.6 \\ STL & 87.9 / 93.2 & 22.0 / 88.2 & 20.0 / 96.9 & 26.4 / 93.8 & 13.9 / 100.0 & 22.9 / 94.1 & 21.2 / 89.6 & **74.0** & **74.0** \\ VisDA & 93.6 / 92.2 & 13.1 / 94.1 & 15.0 / 95.5 & 20.5 / 95.1 & 15.3 / 100.0 & 21.9 / 95.1 & 19.4 / 96.2 & **78.0** & 76.8 \\ \hline Mean & / & / & / & / & / & / & / & / & **83.4** & 79.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The accuracy (\(\%\)) of ownership verification by SL and CUTI-domain. FTAL [21], RTAL [21], EWC [22], AU [22] and Overwriting are state-of-the-art watermark removal methods. Avg drop represents the average degradation of the test dataset with watermark patch relative to test dataset without a watermark patch. The bold numbers indicate the best performance.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Source/Target & MT & US & SN & MM & \begin{tabular}{c} CUTI \\ Source \\ Drop\(\downarrow\) \\ \end{tabular} & \begin{tabular}{c} CUTI \\ Type\(\downarrow\) \\ \end{tabular} & \begin{tabular}{c} CUTI \\ Type\(\downarrow\) \\ \end{tabular} &
\begin{tabular}{c} NTL \\ Type\(\downarrow\) \\ \end{tabular} \\ \hline MT & 99.2 \(\rightarrow\) 99.1 & 98.0 \(\rightarrow\) 6.7 & 38.2 \(\rightarrow\) 5.6 & 67.8 \(\rightarrow\) 8.7 & **0.10 (0.10\%)** & **61.00 (88.56\%)** & 1.00 (1.01\%) & 46.57 (75.60\%) \\ US & 92.6 \(\rightarrow\) 10.0 & 99.7 \(\rightarrow\) 99.6 & 25.5 \(\rightarrow\) 6.8 & 41.2 \(\rightarrow\) 8.4 & **0.10 (0.10\%)** & **44.70 (80.72\%)** & 1.00(1.00\%) & 38.67 (75.55\%) \\ SN & 66.7 \(\rightarrow\) 9.2 & 70.5 \(\rightarrow\) 6.7 & 91.2 \(\rightarrow\) 90.9 & 34.6 \(\rightarrow\) 10.9 & **0.30 (0.33\%)** & **48.33 (81.73\%)** & 1.10(1.23\%) & 40.60 (77.25\%) \\ MM & 98.4 \(\rightarrow\) 9.5 & 88.4 \(\rightarrow\) 6.8 & 46.3 \(\rightarrow\) 7.6 & 95.4 \(\rightarrow\) 95.4 & **0.00 (0.00\%)** & **69.73 (88.75\%)** & 2.10(2.30\%) & 60.10 (76.95\%) \\ \hline Mean & / & / & / & / & / & **0.13 (0.13\%)** & **55.94 (84.94\%)** & 1.30 (1.39\%) & 46.48 (76.34\%) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The accuracy (\(\%\)) of target-specified CUTI-domain on digit datasets. The left of ‘\(\Rightarrow\)’ represents the accuracy of the model trained on the source domain dataset with SL, and the right of ‘\(\Rightarrow\)’ is the accuracy of CUTI-domain. CUTI source/target drop represent the average degradation (relative degradation) of the proposed CUTI-domain relative to SL on the source/target domains. NTL source/target drop is calculated from the original paper. The bold numbers indicate the best performance.
Figure 4: The accuracy (\(\%\)) of SL, target-specified NTL and target-specified CUTI-domain on CIFAR10, STL10 and VisDA-2017. The left of ‘\(\rightarrow\)’ represents the source domain and the right of ‘\(\rightarrow\)’ is the target domain. The bars with different colors in each subgraph represent the accuracy of the corresponding method in the source domain, the target domain, and the degradation (relative degradation) of the model performance, respectively. The data of NTL is obtained by reproducing its source code.
moval methods. For a fair comparison, the settings of these watermark removal methods are all consistent with NTL. The last two columns of Table 2 are the drop in accuracy for watermarked versus un-watermarked data. It can be observed that both CUTI-domain and NTL can effectively resist the attack of watermark removal method, and the performance of CUTI-domain is about 4% higher.
### Result of Target-free CUTI-Domain
As detailed in Section 3.2.2, when the target domain is unknown, we use synthesized samples to replace the target domain training set, and use other unknown domain datasets as the target domain test set, as shown in Table 3 and Fig. 5. For a fair comparison, the data of NTL is obtained by reproducing its source code. It can be observed that the average drop of CUTI-domain on the target domain is higher than that of NTL, and the drop on the source domain is lower, indicating the better IP protection ability of our proposed CUTI-domain on unauthorized domains.
Fig. 5 shows the results for CIFAR10 \(\rightarrow\) STL10, STL10 \(\rightarrow\) CIFAR10 and T \(\rightarrow\) V. The results of NTL are reproduced from its source code to obtain comparable experimental data. Consistent with the previous results, the degradation of CUTI-domain and NTL is higher than that of SL, with CUTI-domain the highest, which implies that CUTI-domain can better compress the generalization region of the model. Meanwhile, considering the complexity of the VisDA-2017 dataset, it is more difficult to extract representative features to build the CUTI-domain, so the drop of CUTI-domain on T \(\rightarrow\) V is only slightly higher than that of NTL.
### Result of Applicability Authorization
In this section, we validate the applicability of model by restricting its generalization ability to the authorized source domain. Specifically, similar to section 4.3, we add an authorized watermark patch on the source domain as a new authorized source domain training set. Then the original source domain, synthesized samples, and synthesized samples with authorized watermark patches are mixed as the unauthorized target domain training set. During testing, other unknown domains are used as the test set. The experimental results of CUTI-domain are shown in Table 4, where the results of NTL are reproduced by its source code. It can be observed that the model performs better on the source domain with authorized watermark patch, but performs poorly on other unknown domains with or without watermark patch. This is consistent with our expectation that the generalization ability of the model is restricted to the source domain with the authorized watermark patch. Meanwhile, the average drop rate of our proposed CUTI-domain is 83.75 (84.27%), which is higher than NTL with 81.03 (81.81%). This is mainly because NTL directly utilizes the limited features of the source and target domains for distance maximization, while we construct a CUTI
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Source/Target} & \multirow{2}{*}{MT} & \multirow{2}{*}{US} & \multirow{2}{*}{SN} & \multirow{2}{*}{MM} & CUTI & CUTI & NTL & NTL \\ & & & & & & & & & \\ & & & & & & & & \\ \hline MT & 99.2 \(\rightarrow\) 98.8 & 98.0 \(\rightarrow\) 6.7 & 38.2 \(\rightarrow\) 6.7 & 67.8 \(\rightarrow\) 13.1 & **0.40 (0.40\%)** & **59.17 (85.43\%)** & 0.70 (0.71\%) & 57.30 (84.06\%) \\ US & 92.6 \(\rightarrow\) 9.1 & 99.7 \(\rightarrow\) 99.7 & 99.1 \(\rightarrow\) 25.5 & 6.8 \(\rightarrow\) 8.5 & **0.60 (0.60\%)** & **44.97 (80.96\%)** & **0.60 (0.60\%)** & 42.90 (74.71\%) \\ SN & 66.7 \(\rightarrow\) 11.9 & 70.5 \(\rightarrow\) 14.3 & 91.2 \(\rightarrow\) 88.7 & 34.6 \(\rightarrow\) 13.6 & **2.50 (2.74\%)** & **44.00 (74.19\%)** & 3.20 (3.51\%) & 41.53 (64.09\%) \\ MM & 98.4 \(\rightarrow\) 19.6 & 88.4 \(\rightarrow\) 6.8 & 46.3 \(\rightarrow\) 9.5 & 95.4 \(\rightarrow\) 95.1 & **0.30 (0.31\%)** & **65.73 (83.96\%)** & 2.00 (2.10\%) & 63.03 (77.62\%) \\ \hline Mean & / & / & / & / & **0.95 (1.02\%)** & **53.47 (81.13\%)** & 1.63 (1.73\%) & 51.19 (75.12\%) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The accuracy (\(\%\)) of target-free CUTI-domain on digit datasets. The left of ‘\(\Rightarrow\)’ represents the accuracy of the model trained on the source domain dataset with SL, and the right of ‘\(\Rightarrow\)’ is the accuracy of CUTI-domain. CUTI source/target drop represent the average degradation (relative degradation) of the proposed CUTI-domain relative to SL on the source/target domains. The data of NTL is obtained by reproducing its source code. The bold numbers indicate the best performance.
Figure 5: The accuracy (\(\%\)) of SL, target-free NTL and target-free CUTI-domain on CIFAR10, STL10, and VisDA-2017. The left of ‘\(\rightarrow\)’ represents the source domain and the right of ‘\(\rightarrow\)’ is the target domain. The bars with different colors in each subgraph represent the accuracy of the corresponding method in the source domain, the target domain, and the degradation (relative degradation) of the model performance, respectively. The data of NTL is obtained by reproducing its source code.
domain with infinite samples similar to the source domain, whose generalization boundary is more compact, and thus the model IP protection ability is stronger.
### Ablation Study
**Backbone:** In this section, we verify the IP protection ability of CUTI-domain combined with other backbones on the VisDA-2017 dataset. As shown in the left of Fig. 6, compared with SL, CUTI-domain can further reduce the recognition ability on the target domain when implemented with VGG-19 [49], ResNet-34 [50], Inception3 [51], Xception [52] and SWIM [53]. Meanwhile, the accuracy of the CUTI-domain on the target domain is lower when combined with Xception [52] and SWIM [53], since they have stronger feature extraction ability on complex datasets than VGG-19 [49], ResNet-34 [50] and Inception3 [51], leading to a CUTI-domain that is more similar to the source domain, thus better compacting the model performance within the source domain and implying stronger model IP protection ability.
**Loss Function:** We further explore the contribution of various parts of our proposed alternative loss function \(\mathcal{L}\) in Eq. (3). The variants of \(\mathcal{L}\) are designed as:
\[\mathcal{L}_{1}=KL(p_{s}||y_{s})-KL(p_{t}||y_{t}), \tag{4}\]
\[\mathcal{L}_{2}=KL(p_{s}||y_{s})-KL(p_{i}||y_{i}), \tag{5}\]
\[\mathcal{L}_{3}=KL(p_{s}||y_{s})-KL(p_{t}||y_{t})-KL(p_{i}||y_{i}). \tag{6}\]
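A minimal PyTorch-style sketch of these variants is given below; the helper names, the use of softmax outputs for \(p\), and the slight smoothing of the one-hot labels (so that the KL terms stay finite) are our own assumptions and are not taken from the released code.

```python
import torch
import torch.nn.functional as F

def kl(p, q, eps=1e-8):
    # KL(p || q) for batches of categorical distributions,
    # summed over classes and averaged over the batch.
    return (p * ((p + eps).log() - (q + eps).log())).sum(dim=1).mean()

def loss_variants(logits_s, y_s, logits_t, y_t, logits_i, y_i, num_classes):
    # p_*: softmax predictions on source (s), target (t) and CUTI (i) batches;
    # y_*: class-index labels, converted to lightly smoothed one-hot vectors.
    def onehot(y):
        q = F.one_hot(y, num_classes).float().clamp(min=1e-3)
        return q / q.sum(dim=1, keepdim=True)
    p_s, p_t, p_i = (F.softmax(z, dim=1) for z in (logits_s, logits_t, logits_i))
    q_s, q_t, q_i = onehot(y_s), onehot(y_t), onehot(y_i)
    L1 = kl(p_s, q_s) - kl(p_t, q_t)                  # Eq. (4)
    L2 = kl(p_s, q_s) - kl(p_i, q_i)                  # Eq. (5)
    L3 = kl(p_s, q_s) - kl(p_t, q_t) - kl(p_i, q_i)   # Eq. (6)
    return L1, L2, L3
```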
We validate the performance of the different loss function variants on three random tasks, as shown in the right of Fig. 6. Because the complexity of the datasets differs, the difficulty of feature extraction and CUTI-domain construction varies as well. On datasets with simple features (_i.e_., MT \(\rightarrow\) SN), \(\mathcal{L}\) performs clearly better, while on slightly more complex datasets \(\mathcal{L}\) only slightly outperforms the other loss functions. The accuracy scores of the different loss function variants on the target domain are relatively close, but \(\mathcal{L}\) yields the lowest target-domain accuracy on each dataset, supporting the validity of our proposed alternative loss function \(\mathcal{L}\).
## 5 Conclusion
In the field of artificial intelligence, protecting well-trained models as a form of IP poses challenges. To address this issue, we propose a novel CUTI-domain that acts as a barrier to constrain model performance to authorized domains. Our approach involves creating an isolation domain with features similar to those in the authorized domain, effectively blocking the model's pathway between authorized and unauthorized domains and leading to recognition failure on unauthorized domains with new private style features. We also offer two versions of the CUTI-domain, _i.e_., target-specified and target-free, depending on whether the unauthorized domain is known. Our experimental results on seven popular cross-domain datasets demonstrate the efficacy of our lightweight, plug-and-play CUTI-domain module. We hope that our work will promote research on model IP protection and security, which should be taken seriously in real-world applications.
**Acknowledgements:** This work was supported by the National Natural Science Foundation of China (Nos. 62136004, 62276130), the Key Research and Development Plan of Jiangsu Province (No. BE2022842), and Huazhu Fu's A*STAR Central Research Fund and Career Development Fund (C222812010).
Digit datasets:

| Source | MT (w/ patch) | US (w/ patch) | SN (w/ patch) | MM (w/ patch) | MT (w/o patch) | US (w/o patch) | SN (w/o patch) | MM (w/o patch) | CUTI authorized↑ | CUTI other↓ | CUTI drop↑ | NTL authorized↑ | NTL other↓ | NTL drop↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MT | 100.0 | 14.3 | 17.6 | 12.9 | 10.3 | 8.6 | 18.3 | 14.1 | **100.0** | **13.7** | **86.27 (86.27%)** | 99.8 | 14.5 | 85.31 (85.49%) |
| US | 9.6 | 99.2 | 14.9 | 10.7 | 10.2 | 6.7 | 8.6 | 10.6 | **99.2** | **10.2** | **89.91 (89.73%)** | 98.5 | 13.3 | 85.20 (86.50%) |
| SN | 10.8 | 13.5 | 99.1 | 23.0 | 9.6 | 9.2 | 17.7 | 8.9 | 99.1 | **13.2** | **85.86 (86.64%)** | **99.3** | 15.8 | 83.51 (84.10%) |
| MM | 8.9 | 9.1 | 15.0 | 99.5 | 10.0 | 9.1 | 11.9 | 25.9 | **99.5** | **12.8** | **86.66 (87.09%)** | **99.5** | 14.0 | 85.49 (85.92%) |

CIFAR10/STL10:

| Source | CIFAR (w/ patch) | STL (w/ patch) | CIFAR (w/o patch) | STL (w/o patch) | CUTI authorized↑ | CUTI other↓ | CUTI drop↑ | NTL authorized↑ | NTL other↓ | NTL drop↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR | 97.9 | 42.5 | 13.1 | 11.6 | **97.9** | **22.4** | **75.50 (77.12%)** | 97.5 | 24.6 | 72.90 (74.77%) |
| STL | 29.0 | 99.9 | 13.3 | 15.2 | **99.9** | **19.2** | **80.73 (80.81%)** | 98.6 | 20.5 | 78.10 (79.21%) |

VisDA-2017:

| Source | T (w/ patch) | V (w/ patch) | T (w/o patch) | V (w/o patch) | CUTI authorized↑ | CUTI other↓ | CUTI drop↑ | NTL authorized↑ | NTL other↓ | NTL drop↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| T | 100.0 | 22.9 | 20.3 | 10.1 | **100.0** | **17.8** | **82.23 (82.23%)** | **100.0** | 23.3 | 76.70 (76.70%) |

Mean over all tasks:

| | CUTI authorized↑ | CUTI other↓ | CUTI drop↑ | NTL authorized↑ | NTL other↓ | NTL drop↑ |
| --- | --- | --- | --- | --- | --- | --- |
| Mean | **99.4** | **15.5** | **83.75 (84.27%)** | 99.0 | 18.0 | 81.03 (81.81%) |
Table 4: The accuracy (\(\%\)) of applicability authorization on digit datasets, CIFAR10, STL10 and VisDA-2017. CUTI authorized/other domain represent the average accuracy of the CUTI-domain on the authorized/other domains, and CUTI drop denotes the degradation (relative degradation). The results of NTL are reproduced by its source code. The bold numbers indicate the best performance.
Figure 6: **Left:** The accuracy (\(\%\)) of SL and target-specified CUTI-domain combined with different backbones on VisDA-2017. **Right:** The accuracy (\(\%\)) of SL and target-specified CUTI-domain with different loss functions on three random tasks. |
2305.15190 | Wafer-Scale MgB2 Superconducting Devices | Progress in superconducting device and detector technologies over the past
decade have realized practical applications in quantum computers, detectors for
far-infrared telescopes, and optical communications. Superconducting thin film
materials, however, have remained largely unchanged, with aluminum still being
the material of choice for superconducting qubits, and niobium compounds for
high frequency/high kinetic inductance devices. Magnesium diboride
($\mathrm{MgB}_2$), known for its highest transition temperature
($\mathrm{T}_c$ = 39 K) among metallic superconductors, is a viable material
for elevated temperature and higher frequency superconducting devices moving
towards THz frequencies. However, difficulty in synthesizing wafer-scale thin
films have prevented implementation of $\mathrm{MgB}_2$ devices into the
application base of superconducting electronics. Here, we report ultra-smooth
(< 0.5 nm root-mean-square roughness) and uniform $\mathrm{MgB}_2$ thin (< 100
nm) films over 100 mm in diameter for the first time and present prototype
devices fabricated with these films demonstrating key superconducting
properties including internal quality factor over $\mathrm{10}^4$ at 4.5 K and
high tunable kinetic inductance in the order of tens of pH/sq in a 40 nm film.
This groundbreaking advancement will enable development of elevated
temperature, high frequency superconducting quantum circuits and devices. | Changsub Kim, Christina Bell, Jake Evans, Jonathan Greenfield, Emma Batson, Karl Berggren, Nathan Lewis, Daniel Cunnane | 2023-05-24T14:28:58Z | http://arxiv.org/abs/2305.15190v2 | # Wafer-scale magnesium diboride thin films
###### Abstract
Progress in superconducting device and detector technologies over the past decade has realized practical applications in quantum computers[1], detectors for far-IR telescopes[2, 3, 4], and optical communications[5]. Superconducting thin film materials, however, have remained largely unchanged, with aluminum still being the material of choice for superconducting qubits[1], and Nb compounds for higher frequency devices[6, 7]. MgB\({}_{2}\), known for its highest T\({}_{\text{c}}\) (39 K) among metallic superconductors, is a viable material for higher frequency superconducting devices moving towards THz frequencies. However, difficulty in synthesizing thin films has prevented implementation of MgB\({}_{2}\) devices into the application base of superconducting electronics, despite promising preliminary results for a number of applications. We have developed smooth and uniform MgB\({}_{2}\) films on 4-inch Si wafers by depositing a uniform Mg-B co-sputtered film, capping the film _in situ_ to create a closed environment, followed by an optimized post-annealing step. We further report mature device fabrication processes and demonstrate test structures to measure properties of the films. These include resonators with internal Q factor over 10\({}^{4}\) at 4.5 K and tunable high kinetic inductance (5-50 pH/\(\square\) readily achieved in a 40 nm film), opening up the path for development of high frequency and high temperature MgB\({}_{2}\) microdevices.
The quantum and nonlinear nature of superconductors has been of scientific interest since the discovery of superconductivity. Many applications of superconducting phenomena using thin film microdevices have shown unparalleled sensitivity for both power[2, 4, 9, 10] and coherent detectors[3, 11], quantum-limited amplification[6, 12], and computation (quantum supremacy)[1]. Current state-of-the-art superconducting devices are based on tried and tested aluminum or niobium thin films due to the ease of deposition (single element chemistry) and fabrication. More recently, research has taken advantage of more novel compounds and doped materials like TiN[13], NbTiN[14], Mn doped Al[15], and granular aluminum (gr-Al)[16] for high kinetic inductance and to tune the critical temperature for pair-breaking applications. However, because of their low transition temperature (T\({}_{\text{c}}\), 1.20 K for Al and 9.26 K for Nb), the devices operate not only at low temperatures but also at low frequencies (< 90 GHz for Al and < 700 GHz for Nb), owing to their small superconducting gaps (superconducting energy gap \(\Delta\) = 1.764 \(k_{\text{B}}\)T\({}_{\text{c}}\) according to the BCS theory). Using higher T\({}_{\text{c}}\) films can allow higher temperature operation, higher frequency operation, or a combination of the two to better suit operational needs. MgB\({}_{2}\) has the highest T\({}_{\text{c}}\) of 39 K among metallic superconductors, and two superconducting gaps, with the interaction parameters between these gaps dependent on film quality and orientation[17]. The superconducting properties (density of states, penetration depth, etc.) will fall somewhere between the BCS model predictions for the two independent gaps (\(\Delta_{\pi}\approx 2.2\) meV, \(\Delta_{\sigma}\approx 7\) meV)[18], enabling device operations above 1 THz[19].
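As a quick consistency check of the frequency limits quoted above, the pair-breaking frequency \(f=2\Delta/h\) can be evaluated from the weak-coupling BCS gap \(\Delta=1.764\,k_{\rm B}T_{\rm c}\) for Al and Nb, and from the two MgB\({}_{2}\) gap values; the short script below is purely illustrative.

```python
from scipy.constants import e, h, k

def bcs_gap(Tc):
    """Weak-coupling BCS gap Delta = 1.764 * kB * Tc, in eV."""
    return 1.764 * k * Tc / e

def pair_breaking_frequency(delta_eV):
    """Pair-breaking frequency f = 2 * Delta / h, in Hz."""
    return 2 * delta_eV * e / h

# Single-gap BCS estimates for the frequency limits quoted above
for name, Tc in [("Al", 1.20), ("Nb", 9.26)]:
    print(name, f"{pair_breaking_frequency(bcs_gap(Tc)) / 1e9:.0f} GHz")  # ~88 GHz, ~681 GHz

# MgB2 two-gap values quoted in the text
for name, delta in [("pi gap", 2.2e-3), ("sigma gap", 7e-3)]:
    print(name, f"{pair_breaking_frequency(delta) / 1e12:.2f} THz")        # ~1.06 THz, ~3.39 THz
```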
MgB\({}_{2}\) thin film thermodynamics[20, 21], deposition[22, 23, 24, 25], and fabrication[26] have been studied extensively since the discovery of superconductivity in the compound, and some promising prototypes have been demonstrated[27, 28, 29, 30, 31, 32], but practical applications have not caught on due to poor scalability and reproducibility of the films, and limited fabrication maturity of the material. The macroscopic film properties that would enable wider adoption of the material include large-scale uniformity and roughness below 1 nm rms, while maintaining good superconducting properties (high T\({}_{\text{c}}\) and J\({}_{\text{c}}\)). Even further, utilization of a commonly available PVD technique like sputtering and deposition on silicon substrates would enable integration of the material into more processes and technologies. While many groups have demonstrated the capability to sputter MgB\({}_{2}\), none have matured into successful technologies, likely due to difficulty in fabricating devices from these films or even difficulty in achieving reproducible films. Here, we report MgB\({}_{2}\) thin films on Si substrates with 4-inch diameter with a 1-\(\sigma\) wafer uniformity of 94.2 % (Fig. 2d), T\({}_{\text{c},0}\) over 32 K (Fig. 1a), and roughness below 0.5 nm rms (Fig. 2a). T\({}_{\text{c},0}\) can be as high as 37 K (Fig. 1b), approaching bulk values, if we relax the expectations on the film roughness. We further developed standardized processes for MgB\({}_{2}\) microdevice fabrication and demonstrate superconducting resonators with Q over 10\({}^{4}\) at 4.5 K (Fig. 3f), J\({}_{\text{c}}\) of 10 MA/cm\({}^{2}\) at 4.2 K (Fig. 1c) and kinetic inductance which can
be tuned from moderate to high levels, meeting a strict criterion to realistically achieve mature fabrication capabilities.
Magnetron sputtering is widely available, easily scalable, and produces uniform films, but _in-situ_ sputtered MgB\({}_{2}\) films resulted in low T\({}_{c}\)[24, 33] from oxidation, small grain size, and/or off-stoichiometry due to the high vapor pressure and low sticking coefficient of magnesium at elevated temperatures (> 200 \({}^{\circ}\)C), as well as contamination. Post-annealing of these films shows improvements in T\({}_{c}\) but at the cost of roughness (> 10 nm rms). Room temperature deposition results in a uniform distribution of magnesium but requires a post-annealing process. Annealing the magnesium-boron composite film in vacuum resulted in evaporation of magnesium, a rough surface, and a transition temperature around 6 K. Magnesium evaporation is prevented by capping the composite film with a thin (tens of nm) layer of a high-melting-temperature material such as tantalum or boron. Tantalum does not react with magnesium or boron at typical annealing temperatures below 800 \({}^{\circ}\)C, but cracks above 700 \({}^{\circ}\)C and needs to be removed for easily measuring superconducting properties. Boron, a dielectric material with a high melting point of 2076 \({}^{\circ}\)C, serves as a better capping layer. Surface boron oxide has a low melting point of 450 \({}^{\circ}\)C and provides a crack-free, viscous capping layer [34]. There are two potential byproducts between MgB\({}_{2}\) and boron: MgB\({}_{4}\) and MgB\({}_{7}\). Both have slightly higher formation energies compared to MgB\({}_{2}\)[35], so their formation would be minimal, and would not affect the measurement of superconducting properties even if a thin layer of the byproducts is formed, because they are dielectric materials unlike metallic MgB\({}_{2}\). We have developed and demonstrated fabrication maturity in removing these capping layers for device development and optimization. In our work we tried many deposition conditions and, mostly through optimizing the roughness of the film, we chose a co-deposited Mg-B precursor film at room temperature. Sputtering of boron is very challenging, and the high melting point leads to a very low deposition rate.
Sapphire substrates have often been used for growing MgB\({}_{2}\) films because both have a hexagonal crystal structure and the lattice mismatch is less than 0.1 % with a 30\({}^{\circ}\) rotation[41]. However, oxygen readily diffuses from sapphire to MgB\({}_{2}\) because magnesium has
Fig. 1: **DC superconducting transitions and properties of MgB\({}_{2}\) thin films.** **a**, Resistivity versus temperature plot of an MgB\({}_{2}\) thin film showing a superconducting transition with T\({}_{c,0}\) = 32 K. Inset: R vs T plot from 4.2 K to 300 K. **b**, Resistivity versus temperature plot of an MgB\({}_{2}\) film with T\({}_{c,0}\) = 37.2 K, the highest ever reported for a sputtered MgB\({}_{2}\) film. **c**, Critical current density (J\({}_{c}\)) of two MgB\({}_{2}\) thin films at different temperatures (J\({}_{c}\) > 10\({}^{7}\) A/cm\({}^{2}\) at 4.2 K), showing reproducibility of the films.
a lower oxidation enthalpy compared to aluminum[42] and forms an interdiffusion layer[19]. This is especially detrimental for thin films under 100 nm. An alternative substrate is hexagonal SiC, especially for in-situ growth of MgB\({}_{2}\), because of the close lattice match. In fact, the highest T\({}_{\text{c}}\) ever measured for MgB\({}_{2}\) is from a highly textured MgB\({}_{2}\) thin film slightly strained by SiC[25]. However, SiC is more expensive than Si, and polycrystalline MgB\({}_{2}\) films from a post-annealing process would not be able to take advantage of the lattice match. Because magnesium reacts with silicon to form Mg\({}_{2}\)Si, a thin inert buffer layer was used to prevent direct contact between MgB\({}_{2}\) and Si. As was demonstrated previously[43], silicon nitride proved to be unreactive against magnesium, and LPCVD nitride as thin as 30 nm has been successfully tested to be sufficiently thick to serve as a good buffer. In theory, the thickness is only limited by the threshold at which no pinholes are left exposing Si, which is mostly dominated by the initial substrate roughness. PECVD nitride deposition on Si wafers is a highly commercialized process[44], not requiring the development of an independent recipe specific to MgB\({}_{2}\) thin films. Early in our work, we found that identical deposition and annealing recipes resulted in a higher critical temperature on a silicon nitride buffer than on sapphire. Later it was confirmed through XPS that magnesium from the MgB\({}_{2}\) layer migrated to the sapphire substrate and resulted in a large deviation in the stoichiometry of the film near the film/substrate interface (Fig. 2e).
MgB\({}_{2}\) film uniformity is confirmed by mapping the sheet resistance with eddy-current measurements on an insulating wafer, such as sapphire or high-resistivity silicon. The as-deposited Mg-B composite films and post rapid-annealed MgB\({}_{2}\) thin films on 4" wafers shown in Fig. 2c and d demonstrate 1-\(\sigma\) wafer uniformities of 98.02% and 94.2% before and after annealing, respectively.
Fig. 2: **Smooth MgB\({}_{2}\) thin film characterization.** **a,** Atomic force microscopy of the MgB\({}_{2}\) thin film with T\({}_{\text{c},0}\) of 32 K (Fig. 1a) and a surface roughness of 0.476 nm rms. **b,** Post-annealed resistivity of MgB\({}_{2}\) films (y-axis) with roughness below 0.5 nm rms, controlled by the initial magnesium content in the as-deposited Mg-B composite films (resistivity along the x-axis). Reducing the magnesium content increases the as-deposited resistivity as well as the post-annealed resistivity by forming boron-rich MgB\({}_{2}\) films. Eddy current maps of **c,** the as-deposited Mg-B composite film and **d,** the post-annealed MgB\({}_{2}\) film after annealing at 600 \({}^{\circ}\)C for 10 minutes. The wafer uniformities (% of wafer within 1-\(\sigma\) of the average sheet resistance) are 98.02% and 94.2% before and after annealing, respectively. **e,** Deviation from the median magnesium to boron ratio in 40 nm thick boron-rich MgB\({}_{2}\) film samples on a silicon nitride buffer layer (red) and sapphire (blue), analyzed by depth-profile x-ray photoelectron spectroscopy. The magnesium to boron ratio stays within 10% of the median for samples on silicon nitride. Significant migration of magnesium from the MgB\({}_{2}\) layer to sapphire results in a deviation of \(\sim\)30% from the median ratio near the interface. The range of deviation from the median Mg-B ratio for the sample on silicon nitride is shaded in grey.
For device design and fabrication, controlling resistivity and related materials properties such as kinetic inductance brings a huge advantage. Typically, such control is accomplished by deliberately adding impurities [45] or changing stoichiometry [46]. However, in materials deposited through reactive sputtering, gas nonuniformity and a hysteretic parameter curve make tuning unrealistic. The high T\({}_{c}\) MgB\({}_{2}\) thin films therefore have a tremendous advantage when it comes to the range of resistivity/kinetic inductance control [22]. Resistivity control in MgB\({}_{2}\) can be done by controlling the magnesium to boron ratio. Adding more boron, or reducing the amount of magnesium by reducing the RF power on the magnesium target, gradually increases the resistivity of the as-deposited/pre-annealed Mg-B composite films, as shown in Fig. 2b. Annealing the films causes magnesium particles to react with boron and form MgB\({}_{2}\), resulting in a linear correlation between the resistivity of percolating magnesium (as-deposited) and percolating MgB\({}_{2}\) (post-annealed).
Here we present and discuss results describing the internal quality factor and kinetic inductance of planar quarter-wave resonators patterned into our MgB\({}_{2}\) thin films using a coplanar waveguide (CPW) architecture, as shown in Fig. 3. This demonstrates fabrication maturity on par with more commonly used materials (e.g. NbTiN [47]) and shows figures of merit for microwave applications at higher temperatures. Also, given that the BCS surface resistance is both frequency and temperature dependent, good performance at lower frequencies and higher temperatures is a strong case for good performance at higher frequencies and lower temperatures. Work is ongoing to reduce dielectric losses in the film stack and to characterise high-frequency losses in the material, and will be published following those results. The losses in a superconducting resonator can be measured through the quality factor of the resonator. By fitting the resonator to a well-known model [48], we can obtain the internal microwave losses in the resonator. These losses include superconducting losses (from the BCS surface resistance), dielectric losses (both in the substrate/passivation layers and at the interfaces), and radiative losses. While our initial goal of this experiment was to measure the BCS surface resistance in the films, through troubleshooting and optimizing the losses we have concluded that we are still dominated by dielectric losses in our thin film stack. By adopting standard fabrication techniques like a buffered oxide etch, we continue to see decreases in these losses, with Q\({}_{i}\sim 10^{4}\) at 4.5 K (Fig. 3f). We assume that the losses are limited by dielectrics or interfaces for two reasons. First, as we cool to lower temperature, the value of Q\({}_{i}\) has no temperature dependence from 1.5 K down to 1 K, with a value of about 4\(\times\)10\({}^{4}\). Work is ongoing to measure these films below 4 K, which will help separate dielectric losses (specifically TLS) from quasi-particle losses. Another strong indicator that the losses in our devices are indeed limited by dielectrics (or interfaces) is that the quality factor is highly dependent on the width of the devices. Smaller devices have higher dielectric fill factors and so the dielectric loss contribution goes up. The inset of Fig. 3f shows the quality factor of 5 resonators (widths of 3, 4, 5, 6, and 10 um) of the same length (5 mm) scaling from \(\sim\)7,500 to 10,000. If these losses were in the superconductor, there would be a frequency dependence, and given that the widest traces yield the highest frequency resonators, they would have the lowest quality factors.
These resonators also serve to demonstrate the high kinetic inductance (L\({}_{k}\)) prevalent in these films. Since the geometric inductance and capacitance will not change much with different geometries of the same impedance, we can easily see how the kinetic inductance fraction \(\alpha\) changes as a function of the width of the line. The design frequency of the resonator was simulated to be close to 7 GHz, and the actual frequency of the resonances falls between 1.5 GHz and 2.8 GHz for conductor widths from 3 um to 10 um. Comparing the design frequency with the actual frequency gives the value of alpha [49], \(\alpha=1-\left(f_{\mathrm{resonator}}/f_{\mathrm{design}}\right)^{2}\). Because MgB\({}_{2}\) does not have a typical temperature dependence, using the Mattis-Bardeen theory to estimate \(\alpha\) is not feasible, but the method used is very accurate for the case of \(\alpha>0.5\). In the same device used to describe the losses, alpha goes from 0.84 at 10 um linewidth to 0.95 at 3 um linewidth, with a consistent L\({}_{k}\) = 22-24 pH/\(\square\). This is shown in Fig. 3c, where two chips from different wafers are shown. In the second device, the same film recipe is used, but the films are slightly thinner and so L\({}_{k}\) = 28 pH/\(\square\). The latter shows that even higher kinetic inductance can be readily achieved. By tuning the Mg content in the precursor film we can achieve a range of 5-50 pH/\(\square\) for a similar film thickness (around 40 nm). Using thinner films, we should be able to achieve very large values approaching or exceeding nH/\(\square\). A systematic study of RF properties as a function of Mg content is ongoing. In these films, tuning the stoichiometry is much easier than in reactive gas sputtering, since we do not need to control gas partial pressures in a vacuum environment, which is a challenge in depositing Nb and Ti nitride films. We are also able to thin down the films using an argon ion mill, without degradation of the superconducting properties (within some limits). This has been reported for producing even ultra-thin films of the same material [50]. Annealing a thicker film also avoids surface-tension effects that require more energy to achieve the same grain size, and hence the same critical temperature can be reached at lower annealing temperatures.
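The \(\alpha\) values quoted above follow directly from the design and measured resonance frequencies. The snippet below is a minimal sketch of that estimate, assuming the design frequency was simulated with the geometric inductance only.

```python
def kinetic_inductance_fraction(f_resonator, f_design):
    """alpha = 1 - (f_resonator / f_design)**2, valid when the design frequency
    is simulated with geometric inductance only."""
    return 1.0 - (f_resonator / f_design) ** 2

# Numbers quoted in the text: ~7 GHz design frequency, measured resonances 1.5-2.8 GHz
for f_meas in (1.5e9, 2.8e9):
    print(f"{kinetic_inductance_fraction(f_meas, 7e9):.2f}")  # ~0.95 (3 um) and ~0.84 (10 um)
```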
We have used a fairly simple model that assumes a linear combination of the penetration depths associated with the two gaps in the material [31] to fit the temperature-dependent frequency shift of two different resonators. The high coupling quality factor (\(\sim\)10\({}^{4}\)) made it difficult to find the resonance above 10-15 K and so the data does not go beyond this temperature; however, the fits shown in Fig. 3e show that the contribution from the larger gap is small (\(\sim\)12%), but non-negligible. This makes sense given that the smaller gap is responsible for the bulk of the kinetic inductance, which is high in our films. Using our thin film capability and expanding the two-gap model to include an interaction parameter, which could also have a temperature dependence, we hope to contribute to a global model for the contribution of the second gap to the overall film properties. Until now, film properties could not be reproduced under different
conditions to modify the MB model for the two-gap material. Furthermore, there is only a rudimentary understanding of how these two gaps will affect high frequency performance in the films, and of how pair-breaking of the smaller gap will contribute to RF losses while the larger gap still exists. This is perhaps the most important future study planned for these films.
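As a rough illustration of such a fit (not the actual model of Ref. [31]), the sketch below combines the standard low-temperature BCS expression for the fractional penetration-depth change of each gap with a single weighting parameter, and relates it to the fractional frequency shift through the kinetic inductance fraction. The functional form, the assumption \(L_{k}\propto\lambda^{2}\), the parameter values and the placeholder data are all our own simplifications.

```python
import numpy as np
from scipy.constants import e, k
from scipy.optimize import curve_fit

def dlambda_over_lambda(T, delta_eV):
    """Low-temperature BCS fractional penetration-depth change for one gap:
    sqrt(pi*Delta/(2*kB*T)) * exp(-Delta/(kB*T))."""
    d = delta_eV * e
    return np.sqrt(np.pi * d / (2 * k * T)) * np.exp(-d / (k * T))

def frac_freq_shift(T, c_sigma, alpha=0.9, d_pi=2.2e-3, d_sigma=7e-3):
    """df/f0 assuming L_k ~ lambda^2 and a weighted combination of the two gaps."""
    dl = (1 - c_sigma) * dlambda_over_lambda(T, d_pi) \
         + c_sigma * dlambda_over_lambda(T, d_sigma)
    return -alpha * dl

# Fit the weight of the larger (sigma) gap to measured shifts (placeholder data here)
T_meas = np.array([4.5, 6.0, 8.0, 10.0, 12.0, 14.0])
shift_meas = frac_freq_shift(T_meas, c_sigma=0.12)
c_fit, _ = curve_fit(frac_freq_shift, T_meas, shift_meas, p0=[0.5], bounds=(0.0, 1.0))
print(c_fit)  # ~0.12, i.e. a small but non-negligible sigma-gap contribution
```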
The wafer-scale superconducting MgB\({}_{2}\) films can be readily used for microdevice fabrication and, by demonstrating key figures of merit - a uniform and smooth large-scale film on a Si substrate with high T\({}_{\text{c}}\), J\({}_{\text{c}}\), and L\({}_{k}\) - we have taken this technology to a level that can be immediately implemented. Historically, the low melting point and highly oxidizing nature of magnesium present multi-fold problems, and have prevented any sustainable large-scale fabrication of MgB\({}_{2}\) devices. We made uniform and smooth large-area as-deposited Mg-B composite films by RF sputtering and substrate biasing. The films were grown on commercially available silicon substrates by adding a low-stress nitride buffer layer. By creating a closed environment with a boron capping layer that prevented magnesium evaporation during annealing, we were able to optimize the annealing temperature and time in a rapid thermal processor to achieve high T\({}_{\text{c}}\). We have further developed fabrication techniques for these films to achieve small critical features (0.5 \(\upmu\)m demonstrated) and high yield. The devices have enabled the measurement of high J\({}_{\text{c}}\), high kinetic inductance, and low RF losses, promising a new era of high temperature/high frequency devices with applications ranging from superconducting photon detectors, transmission lines, parametric amplifiers, and frequency multipliers to quantum computing qubits and circuits.
|
2302.11351 | Abrupt and spontaneous strategy switches emerge in simple regularised
neural networks | Humans sometimes have an insight that leads to a sudden and drastic
performance improvement on the task they are working on. Sudden strategy
adaptations are often linked to insights, considered to be a unique aspect of
human cognition tied to complex processes such as creativity or meta-cognitive
reasoning. Here, we take a learning perspective and ask whether insight-like
behaviour can occur in simple artificial neural networks, even when the models
only learn to form input-output associations through gradual gradient descent.
We compared learning dynamics in humans and regularised neural networks in a
perceptual decision task that included a hidden regularity to solve the task
more efficiently. Our results show that only some humans discover this
regularity, whose behaviour was marked by a sudden and abrupt strategy switch
that reflects an aha-moment. Notably, we find that simple neural networks with
a gradual learning rule and a constant learning rate closely mimicked
behavioural characteristics of human insight-like switches, exhibiting delay of
insight, suddenness and selective occurrence in only some networks. Analyses of
network architectures and learning dynamics revealed that insight-like
behaviour crucially depended on a regularised gating mechanism and noise added
to gradient updates, which allowed the networks to accumulate "silent
knowledge" that is initially suppressed by regularised (attentional) gating.
This suggests that insight-like behaviour can arise naturally from gradual
learning in simple neural networks, where it reflects the combined influences
of noise, gating and regularisation. | Anika T. Löwe, Léo Touzo, Paul S. Muhle-Karbe, Andrew M. Saxe, Christopher Summerfield, Nicolas W. Schuck | 2023-02-22T12:48:45Z | http://arxiv.org/abs/2302.11351v4 | # Regularised Neural Networks mimic Human Insight
###### Abstract
Humans sometimes show sudden improvements in task performance that have been linked to moments of insight. Such insight-related performance improvements appear special because they are preceded by an extended period of impasse, are unusually abrupt, and occur only in some, but not all, learners. Here, we ask whether insight-like behaviour also occurs in artificial neural networks trained with gradient descent algorithms. We compared learning dynamics in humans and regularised neural networks in a perceptual decision task that provided a hidden opportunity which allowed to solve the task more efficiently. We show that humans tend to discover this regularity through insight, rather than gradually. Notably, neural networks with regularised gate modulation closely mimicked behavioural characteristics of human insights, exhibiting delay of insight, suddenness and selective occurrence. Analyses of network learning dynamics revealed that insight-like behaviour crucially depended on noise added to gradient updates, and was preceded by "silent knowledge" that is initially suppressed by regularised (attentional) gating. This suggests that insights can arise naturally from gradual learning, where they reflect the combined influences of noise, attentional gating and regularisation.
Insights · Neural Networks · Learning
1 Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
2 Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, 14195 Berlin, Germany
3 Department of Physics, Ecole Normale Superieure, Paris, France 75005
4 Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK
5 School of Psychology, University of Birmingham, Birmingham B15 2SA, UK
6 Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK
7 Gatsby Computational Neuroscience Unit, University College London, London, UK W1T 4JG
8 Sainsbury Wellcome Centre, University College London, London, UK W1T 4JG
9 CIFAR Azrieli Global Scholar, CIFAR, Toronto, Canada
10 Institute of Psychology, Universitat Hamburg, Von-Melle-Park 5, 20254 Hamburg, Germany
*equal contribution
E-mail: [email protected]
## 1 Introduction
The ability to learn from experience is common to all animals and some artificial agents. Neural networks trained with stochastic gradient descent (SGD) are a current theory of human learning that can account for a wide range of learning phenomena, but while standard networks seem to imply that all learning is gradual, humans may sometimes learn in an abrupt manner.
Such non-linear improvements in task performance or problem solving have been described as insights or aha-moments Kohler (1925); Durstewitz et al. (2010), and are often thought to reflect a qualitatively different, discrete learning mechanism Stuyck et al. (2021); Weisberg (2015). One prominent idea, dating back to Gestalt psychology Kohler (1925), is that an insight occurs when an agent has found a novel problem solution by restructuring an existing task representation Kounios and Beeman (2014). It has also been noted that humans often lack the ability to trace back the cognitive process leading up to an insight Jung-Beeman et al. (2004), suggesting that insights involve unconscious processes becoming conscious. Moreover, so called "aha-moments" can sometimes even be accompanied by a feeling of relief or pleasure in humans Kounios and Beeman (2014); Danek et al. (2014); Kounios and Beeman (2015). Such putative uniqueness of the insight phenomenon would also be in line with work that has related insights to brain regions distinct from those associated with gradual learning Shen et al. (2018); Jung-Beeman et al. (2004). These include, for instance, the anterior temporal gyrus Jung-Beeman et al. (2004); Tik et al. (2018), as well as subcortical areas such as the left amygdala or right hippocampal gyrus Shen et al. (2018). Altogether, these findings have led psychologists and neuroscientists to propose that insights are governed by a distinct learning process Jung-Beeman et al. (2004), that cannot be accounted for by current common theories of learning.
Here, we show that insight-like phenomena can occur without dedicated mechanisms for re-representation or a division of labour between conscious and unconscious processes. Our argument does not concern the subjective experiences related to insights, but focuses on showing how insight-like behaviour can emerge from gradual learning algorithms. Specifically, we aim to explain the following three main observations Schuck et al. (2015, 2022); Gaschler et al. (2019, 2013, 2015): First, insights trigger abrupt behavioural changes, accompanied by meta-cognitive suddenness (a "sudden and unexpected flash") Bowden et al. (2005); Gaschler et al. (2013); Metcalfe and Wiebe (1987); Weisberg (2015). These abrupt behavioural changes are often accompanied by fast neural transitions, which have been observed in humans as well as animals Durstewitz et al. (2010); Karlsson et al. (2012); Miller and Katz (2010); Schuck et al. (2015); Allegra et al. (2020). Second, insights occur selectively in some subjects, while for others improvement in task performance arises only gradually, or never Schuck et al. (2015). Finally, insights occur "spontaneously", i.e. without the help of external cues Friston et al. (2017), and are therefore observed after a seemingly random duration of impasse Ohlsson (1992) or delay after a change in environmental contingencies for different participants. In other words, participants seem to be "blind" to the new solution for an extended period of time, before it suddenly occurs to them. Insights are thus characterised by suddenness, selectivity, and delay.
The idea that insight-like behaviour can arise naturally from gradual learning is supported by previous work on neural networks trained with gradient descent Power et al. (2022). Saxe and colleagues Saxe et al. (2014), for instance, have shown that non-linear learning dynamics, i.e. suddenness in the form of saddle points and stage-like transitions, can result from gradient descent even in linear neural networks, which could explain sudden behavioural improvements. Other work has shown a delayed or stage-like mode of learning in neural networks that is reminiscent of the period of impasse observed in humans, and reflected for instance in the structure of the input data Saxe et al. (2019); Schapiro and McClelland (2009); McClelland and Rogers (2003), or information compression of features that at some point seemed task-irrelevant Flesch et al. (2022); Saxe et al. (2019). Finally, previous work has also found substantial individual differences between neural network instances that are induced by random differences in weight initialisation, noise, or the order of training examples Bengio et al. (2009); Flesch et al. (2018), which can become larger with training Mehrer et al. (2020).
Two factors that influence discontinuities in learning in neural networks are regularisation and gating. Regularisation plays a key role in the suppression of input features. While this avoids overfitting and can help a network to escape a local minimum Liu et al. (2020), it might also cause above mentioned "blindness" to a solution that involves inputs which were once erroneously deemed irrelevant. Gating, on the other hand, is known to cause exponential transitions in learning that are widely seen in multiplicative dynamical systems like the logistic growth model. Both techniques are widely used in artificial neural networks Bishop (2006); Krishnamurthy et al. (2022); Jozefowicz et al. (2015), and are inspired by biological brains Groschner et al. (2022); Poggio et al. (1985); Costa et al. (2017). Regularisation and gating could therefore be important aspects of network structure and training that are related to the temporary impasse followed by a sudden performance change, akin to insight-like behaviour.
Based on these findings, we hypothesised that insight-like behaviour - as characterised by suddenness, selectivity, and delay - can occur in simple neural networks trained with gradient descent. As indicated above, a simple neural network architecture with multiplicative gates and regularisation served as our candidate model. We predicted that due to the multiplicative nature of gating, regularising gates during training could lead to blindness of some relevant features that are key to a solution. We focused specifically on L1-regularisation because it forces gates of irrelevant inputs most strongly towards 0, compared to the less aggressive L2-regularisation. We reason that applying L1-regularisation, besides creating non-linear learning dynamics due to the multiplicative nature of the weights and gates, will lead to a sustained suppression period before the fast transition, similar to the delay observed in humans.
## Results
To study insight-like learning dynamics, 99 participants and 99 neural networks, matched in their behavioural performance to their human counterparts (see below for details), performed a decision task that required a binary choice about circular arrays of moving dots Rajananda et al. (2018) for humans and a symbolic version in which inputs were two scalars for networks. Dots were characterised by two features with different degrees of noise, (1) a motion direction (four possible orthogonal directions: NW, NE, SW, SE) and (2) a colour (orange or purple) (Fig.1A). Participants and networks had to learn the correct choice in response to each stimulus from trial-wise binary feedback, and were not instructed which features of the stimulus to pay attention to.
Importantly, the task provided a hidden opportunity to improve one's decision strategy that could be discovered through insight, similar to the spontaneous strategy switch task developed earlier Schuck et al. (2015). Participants first underwent an initial training phase (4 blocks, 100 trials each in humans, 8 blocks/800 trials in networks), during which only the motion direction predicted the correct choice, while stimulus colour was random (_motion phase_, see Fig.1D). Without any announcement, stimulus colour became predictive of the correct response in a later phase, such that from then on both features could be used to determine choice (_motion and colour phase_, 5 blocks for humans and networks, Fig.1D). Such unannounced changes in feature relevance elicit insights, i.e. behaviour exhibits changes that are sudden, delayed and selective, and post-experimental verbal questionnaires indicate that these changes go hand in hand with gaining consciousness about the new regularity Gaschler et al. (2019).
To test whether and when participants employed the hidden colour insight, we assessed whether choices were sensitive to the motion direction (using the colour insight meant that stimulus motion could be ignored). Specifically, following an initial pre-training period (see Methods), the amount of motion noise varied randomly across five levels of motion coherence (5%, 10%, 20%, 30% or 45%; noise variability started in the last two blocks before the onset of the _motion and colour phase_). Behaviour in trials with the highest amount of noise in dot motion (5% coherence, 30 trials per block) was then used to test whether participants had an insight about the usefulness of the colour, as high performance in these trials could only be achieved by using the colour information Schuck et al. (2015). Colour difficulty was constant and consistently allowed participants and networks to identify colour easily. A second measure that we used to investigate insight was a post-experimental questionnaire, in which participants were asked (1) whether they had noticed a rule in the experiment, (2) how long it took them to notice the rule, and (3) whether they had paid attention to colour during choice. The questionnaire was administered after the _motion and colour phase_, and was followed by an instruction block that served as a sanity check (see Methods).
### Human Behaviour
Data from the _training phase_, during which motion directions were highly coherent and colours changed randomly (Blocks 1-2, dark grey tiles in Fig 1D), showed that participants learned the response mapping for the four motion directions well (78% correct, t-test against chance: \(t(98)=30.8\), \(p<.001\)). In the following task phase, noise was added to the motion, while the colour remained uncorrelated (_motion phase_, blocks 3-4, grey tiles in Fig. 1D). This resulted in an accuracy gradient that depended on noise level (linear mixed effects model of accuracy: \(\chi^{2}(1)\) = 726.36, \(p<.001\); RTs: \(\chi^{2}(1)\) = 365.07, \(p<.001\); N = 99, Fig.2A). Crucially, performance during this phase was heavily diminished in the conditions with the largest amounts of motion noise, i.e. the two lowest coherence conditions: the percentage of correct choices was only 60% and 63%, and did not change over time (paired t-test block 3 vs 4: \(t(195.9)=-1.13\), \(p=0.3\), \(d=0.16\)). Hence, performance improvements substantially beyond these low baseline levels can only be attributed to colour use rather than heightened motion sensitivity.
The noise level continued to influence performance in the _motion and colour phase_, as evidenced by a difference between performance in high vs. low coherence trials (20, 30 & 45% vs 5 & 10 % coherent motion, respectively; \(M=93\pm 6\%\) vs \(M=77\pm 12\%\); \(t(140.9)=12.5\), \(p<.001\), \(d=1.78\), see Fig.2A-B). Notably, however, the onset of the colour correlation triggered performance improvements across all coherence levels (\(t(187.2)=-12.4\), \(p<.001\), \(d=1.8\); end of _motion phase_: \(M=78\pm 7\%\) vs. end of _motion and colour phase_: \(M=91\pm 8\%\)), contrasting the stable performance found during the motion phase and suggesting that at least some participants leveraged colour information once available.
We asked whether these improvements are related to gaining conscious insight by analysing the post-experimental questionnaire. Results show that conscious knowledge about the colour regularity arose in some, but not all, participants: 57.6% (57/99) reported in the questionnaire to have used colour, while 42.4% indicated that they had not noticed or used the colour. We then checked whether these conscious insights were related to the key behavioural characteristics of suddenness, selectivity, and variable delay. To test for suddenness, we fitted each participant's time course of accuracy on low coherence trials by either (1) a linear ramp or (2) a sigmoid function. While a linear model (free parameters: intercept \(y_{0}\) and slope \(m\)) can capture gradual improvements in behaviour that might occur without insight, a better fit
of a non-linear sigmoid function indicates sudden behavioural transitions (free parameters: slope \(m\), inflection point \(t_{s}\) and function maximum \(y_{max}\)). Performance across participants on low coherence trials was best fit by a non-linear sigmoid function, indicating at least a subsection of putative insight participants (BIC sigmoid function: \(M=-6.7\), \(SD=0.7\), protected exceedance probability: 1, BIC linear function: \(M=-6.4\), \(SD=0.5\), protected exceedance probability: 0). The sigmoid function also outperformed a step function with free parameters inflection point \(t_{s}\) and function maximum \(y_{max}\) (BIC step function: \(M=-6.5\), \(SD=0.6\), protected exceedance probability: 0) (Fig.2D-E, Fig.S2).
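The model comparison described above can be sketched as follows. The sigmoid parameterisation (lower asymptote fixed at chance level) and the Gaussian-residual BIC are our assumptions; only the free parameters match those listed in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, m, t_s, y_max):
    """Accuracy rising from chance (0.5) to y_max with slope m at inflection t_s."""
    return 0.5 + (y_max - 0.5) / (1.0 + np.exp(-m * (t - t_s)))

def linear(t, y0, m):
    return y0 + m * t

def bic(y, y_hat, n_params):
    """BIC under a Gaussian residual model."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

def compare_models(t, acc):
    """Fit both models to one participant's accuracy time course on the
    lowest-coherence trials and return (BIC_sigmoid, BIC_linear)."""
    p_sig, _ = curve_fit(sigmoid, t, acc, p0=[1.0, np.median(t), 0.9],
                         bounds=([0.0, t.min(), 0.5], [np.inf, t.max(), 1.0]))
    p_lin, _ = curve_fit(linear, t, acc, p0=[0.5, 0.0])
    return bic(acc, sigmoid(t, *p_sig), 3), bic(acc, linear(t, *p_lin), 2)
```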
We next tested insight selectivity, i.e. whether all participants, or only a subset, showed abrupt behavioural transitions, as indicated by participants' self-reports. Chance level of suddenness was determined by an out-of-sample null distribution of sigmoid steepness derived from a control experiment (N = 20), in which participants performed an identical task, except that colour never started to correlate with motion, and hence no insight was possible. Fitting the same sigmoid
Figure 1: Stimuli and task design **(A)** Stimuli and stimulus-response mapping: dot clouds were either coloured in orange or purple and moved to one of the four directions NW, NE, SE, SW with varying coherence. A left response key, “X”, corresponded to the NW/SE motion directions, while a right response key “M” corresponded to NE/SW directions. **(B)** Schematic of simple neural network with regularised gate modulation with colour codes corresponding to respective colour and motion _weights_ and _gates_. Number of nodes shown is the exact number of nodes used in the neural network simulations. **(C)** Trial structure: a fixation cue is shown for a duration that is shuffled between 400, 600, 800 and 1000 ms. The random dot cloud stimulus is displayed for 2000 ms. A response can be made during these entire 2000 ms and a feedback cue will replace the fixation cue at the centre of the stimulus until the end of the stimulus display duration. **(D)** Task structure of the two-alternative forced choice task for humans and neural networks: each block consisted of 100 trials. Colour was predictive of correct choices and correlated with motion directions as well as correct response buttons in the last five blocks (_motion and colour phase_). In the last block, humans and networks were instructed to use colour inputs to respond. In the _motion phase_, colour changed randomly and was not predictive. A first training block only for humans contained only 100% motion coherence trials to familiarise subjects with the S-R mapping. The remaining training blocks contained only high coherence (0.2, 0.3, 0.45) trials.
function to this data, we derived a baseline distribution of the steepness (see Methods for details). Comparing the steepness values (at the inflection point) obtained in our experimental sample to the baseline distribution derived from the control group with no colour correlation, showed that about half of participants (48/99, 48.5%) had values larger than the 100% percentile of the control distribution. This thus suggests that truly abrupt insight occurred selectively in these "insight participants" (Fig.2F). 79.2% of the participants classified as insight subjects also self-reported to have used colour to make correct choices (Fig. S6A-B). Hence, our behavioural marker of unexpectedly sudden performance changes can serve as a valid indicator for insight.
We validated our behavioural metric of selectivity through additional analyses. Splitting behaviour into two separate insight (participants with steepness values larger than the 100% percentile of the control distribution) and no-insight groups showed that, as expected based on the dependency of accuracy and our behavioural metric, insight subjects started to perform significantly better in the lowest coherence trials once the _motion and colour phase_ (Fig.2C) started, (mean proportion correct in _motion and colour phase_: \(M=83\pm 10\%\)), compared to participants without insight (\(M=66\pm 8\%\)) (\(t(92)=9.5\), \(p<.001\), \(d=1.9\)). Unsurprisingly, a difference in behavioural accuracy between insight participants and no-insight participants also held when all coherence levels were included (\(M=91\pm 5\%\) vs. \(M=83\pm 7\%\), respectively, t-test: \(t(95.4)=6.9\), \(p<.001\), \(d=1.4\)). Interestingly, accuracy in the _motion phase_, which was not used in steepness fitting, did not differ between groups (low coherence trials: \(M=59\%\), vs. \(M=62\%\); \(t(94.4)=-1.9\), \(p=0.07\), \(d=0.38\); all noise levels: \(M=76\%\) vs \(M=76\%\),\(t(96)=0.45\), \(p=0.7\), \(d=0.09\)). Reaction times, which are independent from the choices used in model fitting and thus served as a sanity check for our behavioural metric split, reflected the same improvements upon switching to the colour strategy. Subjects that showed insight about the colour rule (\(M=748.47\pm 171.1\) ms) were significantly faster (\(t(96.9)=-4.9\), \(p<.001\), \(d=0.97\)) than subjects that did not (\(M=924.2\pm 188.9\) ms) on low coherence trials, as well as over all noise levels (\(t(97)=-3.8\), \(p<.001\), \(d=0.87\)) (\(M=675.7\pm 133\) ms and \(M=798.7\pm 150.3\) ms, respectively).
Finally, we asked whether insights occurred with random delays, as reported earlier. To quantify this key characteristic, insight moments were defined as the time points of inflection of the fitted sigmoid function, i.e. when performance exhibited abrupt increases (see Methods). We verified the precision of our switch point identification by time-locking the data to the individually fitted switch points. This showed that accuracy steeply increased between the halved task block (50 trials) immediately before vs. after the switch, as expected (\(M=62\%\) vs \(M=83\%\)\(t(89)=-11.2\), \(p<.001\), \(d=2.34\), Fig.2C, Fig. S5A). Additionally, reaction times dropped steeply from pre- to post-switch (\(M=971.63\) ms vs. \(M=818.77\) ms, \(t(87)=3.34\), \(p<.001\), \(d=0.7\)). The average delay of insight onset was 1.3 task blocks (130 trials) (\(\pm 95\) trials / \(0.95\) blocks, Fig.2G). The distribution of delays among insight participants ranged from 0 to 3 blocks after the start of the _motion and colour phase_, and statistically did not differ from a normal distribution taking into account the hazard rate (Exact two-sided Kolmogorov-Smirnov test: \(D(48)=0.15\), \(p=0.69\)).
Hence, the behaviour of human subjects showed all characteristics of insight: sudden improvements in performance that occurred only in a subgroup and with variable delays.
#### Neural Network Behaviour
To probe whether insight-like behaviour can arise in simple neural networks trained with gradient descent, we simulated 99 network models performing the same decision-making task. The networks had two input nodes (\(x_{c}\), \(x_{m}\), for colour and motion, respectively), two input-specific gates (\(g_{m}\), \(g_{c}\)) and weights (\(w_{m}\), \(w_{c}\)), and one output node (\(\hat{y}\), Fig.1B). Network weights and gates were initialised at 0.01. The stimulus features motion and colour were reduced to one input node each, which encoded the colour/motion direction of each trial by taking on either a positive or a negative value. More precisely, given the correct decision \(y=\pm 1\), the activities of the input nodes were sampled from i.i.d. normal distributions with means \(yM_{m}\) and \(yM_{c}\) and standard deviations \(\sigma_{m}=0.01\) and \(\sigma_{c}=0.01\) for motion and colour, respectively. Hence \(M_{m}\) and \(M_{c}\) determine the signal-to-noise ratio in each input. We fixed the colour mean shift \(M_{c}=0.22\), while the mean shifts of the motion node differed by noise level and were fitted individually such that each human participant had one matched network with comparable pre-insight task accuracy in each motion noise condition (see below).
The network multiplied each input node by two parameters, a corresponding weight, and a gate, and returned a decision based on the output node's sign \(\hat{y}\):
\[\hat{y}=\mathrm{sign}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta) \tag{1}\]
where \(\eta\sim\mathcal{N}(0,\sigma)\) is Gaussian noise, and weights and gates are the parameters learned online through gradient descent. To train L1-networks we used a simple squared loss function with L1-regularisation of gate weights:
\[\mathcal{L}=\frac{1}{2}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta-y)^{2}+\lambda(|g_ {m}|+|g_{c}|) \tag{2}\]
with a fixed level of regularisation \(\lambda=0.07\).
During training, Gaussian noise was added to each gradient update to mimic learning noise and induce variability between individual networks (same gradient noise level for all networks). \(\xi\sim\mathcal{N}(\mu_{\xi}=0,\sigma_{\xi}=0.05)\) was added to each gradient update, yielding the following update equations for noisy SGD of the network's weights
\[\Delta w_{m}=-\alpha x_{m}g_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)+\xi_{w_{m}}, \tag{3}\]
and gates,
\[\Delta g_{m}=-\alpha x_{m}w_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)-\alpha\lambda\,\mathrm{sign}(g_{m})+\xi_{g_{m}} \tag{4}\]
where we have not notated the dependence of all quantities on the trial index \(t\) for clarity; analogous equations hold for the colour weights and gates, with all noise factors \(\xi_{g_{m}}\), \(\xi_{w_{m}}\), etc., following the same distribution.
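A compact NumPy sketch of this training loop is shown below, using the values quoted in the text (weights and gates initialised at 0.01, \(\alpha=0.6\), \(\lambda=0.07\), gradient-noise SD 0.05, input SDs 0.01, \(M_{c}=0.22\)). The output-noise SD, the motion mean and the trial counts in the example are illustrative assumptions, and the original simulations additionally matched input sequences and noise levels to individual human participants.

```python
import numpy as np

def simulate_l1_network(colour_on, M_m, M_c=0.22, alpha=0.6, lam=0.07,
                        sigma_in=0.01, sigma_out=0.01, sigma_xi=0.05, seed=0):
    """Noisy SGD on the gated two-input network of Eqs. (1)-(4).
    colour_on[t] marks trials on which the colour input is predictive."""
    rng = np.random.default_rng(seed)
    w = np.full(2, 0.01)          # weights [motion, colour]
    g = np.full(2, 0.01)          # gates   [motion, colour]
    correct = np.zeros(len(colour_on), dtype=bool)
    for t, use_colour in enumerate(colour_on):
        y = rng.choice([-1.0, 1.0])                          # correct decision
        c_sign = y if use_colour else rng.choice([-1.0, 1.0])
        x = rng.normal([y * M_m, c_sign * M_c], sigma_in)    # inputs [x_m, x_c]
        out = np.dot(g * w, x) + rng.normal(0.0, sigma_out)  # Eq. (1) before the sign
        correct[t] = np.sign(out) == y
        err = out - y
        dw = -alpha * err * g * x + rng.normal(0.0, sigma_xi, 2)                       # Eq. (3)
        dg = -alpha * (err * w * x + lam * np.sign(g)) + rng.normal(0.0, sigma_xi, 2)  # Eq. (4)
        w, g = w + dw, g + dg
    return correct, w, g

# Example: 200 motion-only trials followed by 500 trials with predictive colour
phase = np.r_[np.zeros(200, bool), np.ones(500, bool)]
acc, w, g = simulate_l1_network(phase, M_m=0.05)
```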
Using this setup, we studied whether L1-regularisation would lead the network to show key characteristics of insight-like behaviour. Specifically, we reasoned that L1-regularisation of the gate weights would introduce competitive dynamics between the input channels that can lead to non-linear learning dynamics. We focused on L1-regularisation because it forces gates of irrelevant inputs most strongly towards 0, compared to L2-regularisation, which is less aggressive in particular once gates are already very small. While the multiplicative nature of the weights and gates results in non-linear quadratic and cubic gradient dynamics, applying L1-regularisation will lead to a sustained suppression period before the fast transition (see Methods).
Networks received an extended pre-task training phase of 6 blocks, but then underwent a training curriculum precisely matched to the human task (2 blocks of 100 trials in the _motion phase_ and 5 blocks in the _motion and colour phase_, see Fig. 1D). We adjusted direction specificity of motion inputs (i.e. difference in distribution means from which \(x_{m}\) was drawn for left vs right trials) separately for each participant and coherence condition, such that performance in the motion phase was equated between each pair of human and network (Fig.3A, see Methods). Moreover, the colour and
Figure 2: Humans: task performance and insight-like strategy switches **(A)** Accuracy (% correct) during the _motion phase_ increases with increasing motion coherence. N = 99, error bars signify standard error of the mean (SEM). **(B)** Accuracy (% correct) over the course of the experiment for all motion coherence levels. First dashed vertical line marks the onset of the colour predictiveness (_motion and colour phase_), second dashed vertical line the “instruction” about colour predictiveness. Blocks shown are halved task blocks (50 trials each). N = 99, error shadows signify SEM. **(C)** Switch point-aligned accuracy on lowest motion coherence level for insight (48/99) and no-insight (51/99) subjects. Blocks shown are halved task blocks (50 trials each). Error shadow signifies SEM. **(D)** Illustration of the sigmoid function for different slope steepness parameters. **(E)** Difference between BICs of the linear and sigmoid function for each human subject. N = 99. **(F)** Distributions of fitted slope steepness at inflection point parameter for control experiment and classified insight and no-insight groups. **(G)** Distribution of switch points. Dashed vertical line marks onset of colour predictiveness. Blocks shown are halved task blocks (50 trials each).
motion input sequences used for network training were sampled from the same ten input sequences that humans were exposed to. A learning rate of \(\alpha=0.6\) (same for all participants) was selected to match average learning speed.
### L1-regularised Neural Networks
Networks learned the motion direction-response mapping well in the training phase, during which colour inputs changed randomly and output should therefore depend only on motion inputs (_motion phase_, 75% correct, t-test against chance: \(t(98)=33.1\), \(p<.001\), the accuracy of humans in this phase was \(M=76\pm 6\%\)). As in humans, adding noise to the motion inputs (_motion phase_) resulted in an accuracy gradient that depended on noise level (linear mixed effects model of accuracy: \(\chi^{2}\)(1) = 165.61, \(p<.001\); N = 99, Fig.3A), as expected given that input distributions were set such that network performance would equate to human accuracy (Fig.3A-B). Networks also exhibited low and relatively stable performance levels in the two lowest coherence conditions (58% and 60%, paired t-test to assess stability in the _motion phase_: \(t(98)=-0.7\), \(p=0.49\), \(d=0.02\)), and had a large performance difference between high vs low coherence trials (\(M=88\%\pm 6\%\) vs. \(M=74\pm 13\%\), \(t(137.3)=9.6\), \(p<.001\), \(d=1.36\) for high, i.e. \(\geq\) 20% coherence, vs. low trials). Finally, humans and networks also performed comparably well at the end of learning (last block of the _colour and motion phase_: \(M(nets)=79\%\pm 17\%\) vs. \(M(humans)=82\pm 17\%\), \(t(195.8)=1.1\), \(p=0.27\), \(d=0.16\), Fig. S8C), suggesting that at least some networks did start to use colour inputs. Hence, networks' baseline performance and learning were successfully matched to humans.
To look for characteristics of insight in network performance, we employed the same approach used for modelling human behaviour, and investigated suddenness, selectivity, and delay. To identify sudden performance improvements, we fitted each network's time course of accuracy on low coherence trials by (1) a linear model and (2) a non-linear sigmoid function, which would indicate gradual performance increases or insight-like behaviour, respectively. As in humans, network performance on low coherence trials was best fit by a non-linear sigmoid function, indicating at least a subsection of putative "insight networks" (BIC sigmoid function: \(M=-10\), \(SD=1.9\), protected exceedance probability: 1, BIC linear function: \(M=-9\), \(SD=2.4\), protected exceedance probability: 0)(Fig.3D).
We then tested whether insight-like behaviour occurred only in a subset of networks (selectivity) by assessing in how many networks the steepness of the performance increase exceeded a chance level defined by a baseline distribution of the steepness. As in humans, we ran simulations of 99 control networks with the same architecture, which were trained on the same task except that during the _motion and colour phase_, the two inputs remained uncorrelated. About half of networks (48/99, 48.5%) had steepness values larger than the 100% percentile of the control distribution, matching exactly the value we observed in the human sample. The L1-networks that showed sudden performance improvements were not matched to insight humans more often than chance (\(\chi^{2}\)(47) = 27.9, \(p=0.99\)), suggesting that network variability did not originate from baseline performance levels or trial orders. Hence, a random subset of networks showed sudden performance improvements comparable to those observed during insight moments in humans (Fig.3E).
For simplicity reasons in comparing network behaviour to humans, we will refer to the two groups as "insight and no-insight networks". Analysing behaviour separately for the insight and no-insight networks showed that switches to the colour strategy improved the networks' performance on the lowest coherence trials once the _motion and colour phase_ started, as compared to networks that did not show a strategy shift (\(M=83\pm 11\%\), vs. \(M=64\pm 9\%\), respectively, \(t(89.8)=9.2\), \(p<.001\), \(d=1.9\), see Fig.3C). The same performance difference between insight and no-insight networks applied when all coherence levels of the _motion and colour phase_ were included (\(M=88\pm 7\%\) vs. \(M=77\pm 6\%\), \(t(93.4)=7.8\), \(p<.001\), \(d=1.57\)). Unexpectedly, insight networks performed slightly worse on low coherence trials in the motion phase, i.e. before the change in predictiveness of the features, (\(t(97)=-3.1\), \(p=0.003\), \(d=0.62\)) (insight networks: \(M=58\pm 8\%\); no-insight networks: \(M=64\pm 9\%\)), and in contrast to the lack of pre-insight differences we found in humans.
Finally we asked whether insight-like behaviour occurred with random delays in neural networks, again scrutinising the time points of inflection of the fitted sigmoid function, i.e. when performance exhibited abrupt increases (see Methods). Time-locking the data to these individually fitted switch points verified that, as in humans, the insight-like performance increase was particularly evident around the switch points: accuracy was significantly increased between the halved task blocks preceding and following the insight-like behavioural switch, for colour switching networks (\(M=66\pm 8\%\) vs. \(M=86\pm 7\%\), \(t(91.6)=-12.7\), \(p<.001\), \(d=2.6\), see Fig.3C, Fig. SSB).
Among insight networks, the delay distribution ranged from 1 to 4 blocks after the start of the _motion and colour phase_, and did not differ from a normal distribution taking into account the hazard rate (Exact two-sided Kolmogorov-Smirnov test: \(D(48)=0.13\), \(p=0.85\)). The average delay of insight-like switches was 1.75 task blocks (\(\pm 1.05\)), corresponding to 175 trials (Fig.3F). The insight networks' delay was thus slightly longer than for humans (\(M=130\pm 95\) trials vs. \(M=175\pm 105\) trials, \(t(92.7)=-2.1\), \(p=0.04\), \(d=0.42\)). The variance of insight induced strategy switch onsets as well as the relative variance in the abruptness of the switch onsets thus qualitatively matched our behavioural results observed in human participants. The behaviour of L1-regularised neural networks therefore showed all characteristics
of human insight: sudden improvements in performance that occurred selectively only in a subgroup with variable random delays.
### L2-regularised Neural Networks
Following our observation that L1-regularised networks exhibited human-like insight behaviour, we investigated whether this was specific to the form of regularisation. We therefore trained otherwise identical networks with an L2-regularisation term on the gate weights. We hypothesised that L2-regularisation would also lead to competitiveness between input nodes, but to a lower extent than L1-regularisation. In particular, because under L2-regularisation the gate weights would not shrink as close to 0 during the _motion phase_, we reasoned that insight-like behavioural switches should be more frequent and occur earlier.
While L2-regularised gate weights led to switches that were similar to those previously observed in their abruptness (Fig. S7C), such insight-like behaviours were indeed much more frequent and clustered: 96% of networks switched to a colour strategy, with a switch point distribution that was much more centred around the onset of the colour predictiveness (Fig. S7F, average delay of 1 task block (\(SD=1.1\)) corresponding to 100 trials after onset of the colour correlation (_motion and colour phase_). This was significantly shorter than for L1-regularised networks (\(M=1.05\pm 1.1\) vs. \(M=1.75\pm 1.05\), \(t(59.6)=4\), \(p<0.001\), \(d=0.9\)) and also differed from a normal distribution taking into account the hazard rate (Exact two-sided Kolmogorov-Smirnov test: \(D(95)=0.26\), \(p=0.005\)). Additionally, performance on the lowest coherence level in the last block of the _colour and motion phase_ before colour instruction was centred just below ceiling and thus did not indicate a range of colour use like humans and L1-regularised networks (\(M(L2-networks)=97\%\pm 2\%\) vs. \(M(humans)=82\pm 17\%\), \(t(101.6)=-8.8\), \(p<.001\), \(d=1.25\), Fig. S8C).
While L2-regularised networks thus showed abrupt behavioural transitions, they failed to show the other two key characteristics of insight: selectivity and delay.
Figure 3: L1-regularised neural networks: task performance and insight-like strategy switches **(A)** Accuracy (% correct) during the _motion phase_ increases with increasing motion coherence. N = 99, error bars signify SEM. Grey line is human data for comparison. **(B)** Accuracy (% correct) over the course of the experiment for all motion coherence levels. First dashed vertical line marks the onset of the colour predictiveness (_motion and colour phase_), second dashed vertical line the “instruction” about colour predictiveness. Blocks shown are halved task blocks (50 trials each). N = 99, error shadows signify SEM. **(C)** Switch point-aligned accuracy on lowest motion coherence level for insight (48/99) and no-insight (51/99) networks. Blocks shown are halved task blocks (50 trials each). Error shadow signifies SEM. **(D)** Difference between BICs of the linear model and sigmoid function for each network. **(E)** Distributions of fitted slope steepness at inflection point parameter for control networks and classified insight and no-insight groups. **(F)** Distribution of switch points. Dashed vertical line marks onset of colour predictiveness. Blocks shown are halved task blocks (50 trials each).
### Non-regularised Neural Networks
In non-regularised networks, the effects observed in L2-regularised networks are enhanced. 99% of the networks started using colour inputs (Fig. S8A), but colour use occurred in a more linear, less abrupt way than for L1- or L2-regularised networks. Additionally, there was very little delay of only 0.7 task blocks (70 trials, (\(\pm 0.25\))) between onset of the _motion and colour phase_ and the start of the networks making use of the colour input predictiveness (Fig. S8B). As for L2-networks, this delay was significantly shorter than for L1-regularised networks (\(M=0.7\pm 0.55\) vs. \(M=1.75\pm 1.05\), \(t(49.3)=6.6\), \(p<0.001\), \(d=1.6\)) and also differed from a normal distribution taking into account the hazard rate (Exact two-sided Kolmogorov-Smirnov test: \(D(98)=0.35\), \(p<.001\)). Similarly, performance on the lowest coherence level in the last block indicated that all networks used colour inputs (\(M=100\%\pm 0.3\%\) vs. \(M=82\pm 17\%\), \(t(98)=-10.4\), \(p<.001\), \(d=1.5\), Fig. S8C). Thus non-regularised networks also did not show the insight key behavioural characteristics of selectivity and delay.
### Origins of Insight-like Behaviour in Neural Networks
Having established the behavioural similarity between L1-networks and humans in an insight task, we asked what gave rise to insight-like switches in some networks, but not others. We therefore investigated the dynamics of gate weights and the effects of noise in insight vs. no-insight networks, and the role of regularisation strength parameter \(\lambda\).
#### Colour Gradients Increase after Colour Becomes Predictive
Our first question was how learning about stimulus colour differed between insight and no-insight L1 networks, as expressed by the dynamics of network gradients. We time-locked the time courses of gradients to each network's individual switch point. Right when the switch occurred (at t of the estimated switch), colour gate weight gradients were significantly larger in insight compared to no-insight L1-networks (\(M=0.06\pm 0.06\) vs. \(M=0.02\pm 0.03\), \(t(73.2)=5.1\), \(p<.001\), \(d=1.05\)), while this was not true for motion gate weight gradients (\(M=0.18\pm 0.16\) vs. \(M=0.16\pm 0.16\), \(t(97)=0.7\), \(p=0.5\), \(d=0.13\)).
Notably, insight networks had larger colour gate weight gradients even before any behavioural changes were apparent, right at the beginning of the _motion and colour phase_ (first 5 trials of _motion and colour phase_: \(M=0.05\pm 0.07\) vs. \(M=0.01\pm 0.01\); \(t(320)=8.7\), \(p<.001\)), whereas motion gradients did not differ (\(t(576.5)=-0.1\), \(p=0.95\)). This increase in colour gate weight gradients for insight networks happened within a few trials after correlation onset (colour gradient last trial of _motion phase_: \(M=0\pm 0\) vs. 5th trial of _motion and colour phase_: \(M=0.06\pm 0.08\); \(t(47)=-5.6\), \(p<.001\), \(d=1.13\)), and suggests that insight networks start early to silently learn more about colour inputs compared to their no-insight counterparts. A change point analysis considering the mean and variance of the gradients confirmed the onset of the _motion and colour phase_ to be the change point of the colour gradient mean, with a difference of \(0.04\) between the consecutive pre-change and change time points for insight networks vs \(0.005\) for no-insight networks (with a change point detected two trials later), indicating considerable learning about colour for insight networks.
### "Silent" Colour Knowledge Precedes Insight-like Behaviour
A core feature of our network architecture is that inputs were multiplied by two factors, a gate \(g\), and a weight \(w\), but only gates were regularised. This meant that some networks might have developed larger colour weights, but still showed no signs of colour use, because the gates were very small. This could explain the early differences in gradients reported above. To test this idea, we investigated the absolute size of colour gates and weights of insight vs no-insight L1-networks before and after insight-like switches had occurred.
Comparing gates at the start of learning (first trial of the _motion and colour phase_), there were no differences between insight and no-insight networks for either motion or colour gates (colour gates: \(M=0\pm 0.01\) vs. \(M=0\pm 0.01\); \(t(95.3)=0.8\), \(p=0.44\), motion gates: \(M=0.5\pm 0.3\) vs. \(M=0.6\pm 0.3\); \(t(93.1)=-1.7\), \(p=0.09\), see Fig.4A, Fig.4H,J). Around the individually fitted switch points, however, the gates of insight and no-insight networks differed only for colour gates (colour gates: \(0.2\pm 0.2\) vs \(0.01\pm 0.02\) for insight vs no-insight networks, \(t(48)=6.7\), \(p<0.001\), \(d=1.4\), motion gates: \(0.5\pm 0.3\) vs \(0.5\pm 0.3\) for insight vs no-insight networks, \(t(95.6)=0.2\), \(p=0.9\), \(d=0.04\)). Insight networks' increased use of colour inputs was particularly evident at the end of learning (last trial of the _motion and colour phase_) and reflected in larger colour gates (\(0.7\pm 0.3\) vs \(0.07\pm 0.2\) for insight vs no-insight networks, \(t(73.7)=13.4\), \(p<0.001\), \(d=2.7\)) while the reverse was true for motion gates (\(M=0.2\pm 0.2\) vs \(M=0.5\pm 0.3\), respectively, \(t(81)=-7.5\), \(p<0.001\), \(d=1.5\), see Fig.4B, Fig.4H,J). Hence, differences in gating between network subgroups were only present after, but not before learning, and did not explain the above reported gradient differences or which network would show insight-like behaviour.
A different pattern emerged when investigating the weights of the networks. Among insight networks, colour weights were significantly larger already at the start of learning (first trial of the _motion and colour phase_), as compared to no-insight networks (insight: \(M=1.2\pm 0.6\); no-insight: \(M=0.4\pm 0.3\), \(t(66.2)=8.1\), \(p<.001\), \(d=1.7\), see Fig.4C, Fig.4G,I). This was not true for motion weights (insight: \(M=3.4\pm 0.7\); no-insight: \(M=3.5\pm 0.5\), \(t(89.5)=-1.1\), \(p=0.3\), \(d=0.2\), see Fig.4C, Fig.4G,I). Thus, colour information appeared to be encoded in the weights of insight networks already before any insight-like switches occurred. Because the colour gates were suppressed through the L1-regularisation mechanism before learning, the networks did not differ in any observable colour sensitivity. An increase of colour gates reported above could then unlock the "silent knowledge" of colour relevance.
To experimentally test the effect of pre-learning colour weights, we ran a new sample of L1-networks (\(N=99\)), and adjusted the colour and motion weight of each respective network to the mean absolute colour and motion weight size we observed in insight networks at start of learning (first trial of _motion and colour phase_). Gates were left untouched. This increased the number of insight networks from 48.5% to 70.7%, confirming that encoding of colour information at an early stage was an important factor for later switches, but also not sufficient to cause insight-like behaviour in all networks. Note that before weights adjustments were made, the performance of the new networks did not differ from the original L1-networks (\(M=0.8\pm 0.07\) vs \(M=0.8\pm 0.07\), \(t(195)=0.2\), \(p=0.9\), \(d=0.03\)). In our new sample, networks that would later show insight-like behaviour or not also did not differ from each other (insight: \(M=0.7\pm 0.07\) vs \(M=0.7\pm 0.07\), \(t(100.9)=1.4\), \(p=0.2\), \(d=0.3\), no-insight: \(M=0.8\pm 0.05\) vs \(M=0.8\pm 0.07\), \(t(71)=0.9\), \(p=0.4\), \(d=0.2\)). Weight and gate differences between L1- and L2-networks are reported in the Supplementary Material (see also Fig.4E-F).
#### Noise is Needed For Insight-like Behaviour
One possible factor that could explain the early differences between the weights of network subgroups is noise. The networks were exposed to noise at two levels: on each trial noise was added at the output stage (\(\eta\sim\mathcal{N}(0,\,\sigma_{\eta}^{2})\)), and to the gate and weight gradients during updating (\(\xi\sim\mathcal{N}(0,\,\sigma_{\xi}^{2})\)).
We probed whether varying the level of noise added during gradient updating, i.e. \(\sigma_{\xi}\), would affect the proportion of networks exhibiting insight-like behaviour. Parametrically varying the variance of noise added to colour and motion gates and weights led to increases in insight-like behaviour, from not a single insight network when no noise was added to 100% insight networks when \(\sigma_{\xi}\) reached values larger than approximately 0.05 (Fig.5A). Since gate and weight updates were coupled (see Eqs. 6-9), noise during one gradient update could in principle affect other updates as well. We therefore separately manipulated the noise added to updates of colour gates and weights, motion gates and weights, all weights and all gates. This showed that adding noise to only the weights during the updates was sufficient to induce insight-like behaviour (Fig.5B). In principle, adding noise to only the gates was sufficient for insight-like switches as well, although noise applied to the gates had to be relatively larger to achieve the same effect as applying noise to weight gradients (Fig.5B), presumably due to the effect of regularisation. Adding noise only to the gradients of motion gates or weights, but not to the colour gradients, was not sufficient to induce insight-like switches (Fig.5B). On the other hand, noise added only to the colour parameter updates quickly led to substantial amounts of insight-like behavioural switches (Fig.5B).
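As an illustration of this manipulation, noise can be injected selectively by giving each parameter's gradient update its own noise scale; the sketch below is ours and all names and values are illustrative. Setting a scale to zero removes noise from that update, mirroring the selective-noise conditions summarised in Fig. 5B.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-parameter noise SDs; here noise is applied only to the colour parameters.
noise_sd = {"w_m": 0.0, "g_m": 0.0, "w_c": 0.05, "g_c": 0.05}

def noisy(delta, name):
    """Add Gaussian noise to one parameter's gradient update."""
    return delta + rng.normal(0.0, noise_sd[name])
```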
An analysis of _cumulative_ noise showed that the effects reported above are mostly about momentary noise fluctuations: cumulative noise added to the output did not differ between insight and no-insight networks at either the start (first trial of the _motion and colour phase_) or end of learning (last trial of the _motion and colour phase_) (start: \(M=-0.3\pm 4.7\) vs. \(M=-0.6\pm 3.9\); \(t(91.2)=0.4\), \(p=0.7\), end: \(M=0.6\pm 7.1\) vs. \(M=0.5\pm 7.1\); \(t(96.7)=0.07\), \(p=1\)), and the same was true for cumulative noise added during the gradient updates to weights and gates (see Supplementary Material for details).
We therefore conclude that Gaussian noise added to updates of particularly colour gate weights, in combination with "silent knowledge" about colour information stored in suppressed weights, is a crucial factor for insight-like behavioural changes.
#### Regularisation Parameter \(\lambda\) Affects Insight Delay and Frequency
In our previous results, the regularisation parameter \(\lambda\) was arbitrarily set to \(0.07\). We next tested the effect of \(\lambda\) on insight-like behaviour. The number of L1-regularised insight networks linearly decreased with increasing \(\lambda\) (Fig.5C). Lambda further had an effect on the delay of the insight-like switches, with smaller \(\lambda\) values leading to decreased average delays of switching to a colour strategy after predictiveness of the inputs had changed (Fig.5D). The regularisation parameter \(\lambda\) thus affects two of the key characteristics of human insight - selectivity and delay.
Figure 4: Gate and weight size differences at the start and end of learning and dynamics. Colour and motion gates at **(A)** the first trial and **(B)** the last trial of the _motion and colour phase_. **(C)** Colour and motion weights at the first trial and **(D)** the last trial of the _motion and colour phase_. Error bars signify SEM. **(E)** Gate weight sizes for colour and motion gate weights at the first trial and **(F)** the last trial of the _motion and colour phase_ for L1- and L2-regularised networks. **(G)** Weights of insight L1-networks. The dashed vertical line marks the onset of the _motion and colour phase_. Error shadows signify SEM. **(H)** Gates of insight L1-networks. The dashed vertical line marks the onset of the _motion and colour phase_. Error shadows signify SEM. **(I)** Weights of no-insight L1-networks. The dashed vertical line marks the onset of the _motion and colour phase_. Error shadows signify SEM. **(J)** Gates of no-insight L1-networks. The dashed vertical line marks the onset of the _motion and colour phase_. Error shadows signify SEM.
## Discussion
We investigated insight-like learning behaviour in humans and neural networks. In a binary decision making-task with a hidden regularity that entailed an alternative way to solve the task more efficiently, a subset of regularised neural networks with multiplicative gates of their input channels (as an attention mechanism) displayed spontaneous, jump-like learning that signified the sudden discovery of the hidden regularity - mysterious insight moments boiled down to the simplest expression.
Networks exhibited all key characteristics of human insight-like behaviour in the same task (suddenness, selectivity, delay). Crucially, neural networks were trained with standard stochastic gradient descent that is often associated with gradual learning. Our results therefore suggest that the behavioural characteristics of aha-moments can arise from gradual learning mechanisms, and hence suffice to mimic human insight.
Network analyses identified the factors which caused insight-like behaviour in L1-networks: noise added during the gradient computations accumulated to non-zero weights in some networks. As long as colour information was not useful yet, i.e. prior to the onset of the hidden regularity, close-to-0 colour gates rendered these weights "silent", such that no effects on behaviour could be observed. Once the hidden colour regularity became available, the non-zero colour weights helped to trigger the non-linear learning dynamics that arise during gradient updating and depend on the starting point. Hence, our results hint at important roles of "attentional" gating, noise, and regularisation as the computational origins of sudden, insight-like behavioural changes. We report several findings that are in line with this interpretation: addition of gradient noise \(\xi\) in particular to the colour weights and gates, pre-learning adjustment of colour weights, and a reduction of the regularisation parameter \(\lambda\) all increased insight-like behaviour. We note that our networks did not have a hidden layer, demonstrating that no hidden layer is needed to produce non-linear learning dynamics.
Our findings have implications for the conception of insight phenomena in humans. While present-day machines clearly do not have the capacity to have aha-moments due to their lack of meta-cognitive awareness, our results show that the remarkable behavioural signatures of insights by themselves do not necessitate a dedicated process. This raises
Figure 5: Influence of Gaussian noise distribution variance \(\sigma_{\xi}\) and regularisation parameter \(\lambda\) on insight-like switches in L1-regularised networks **(A)** Influence of noise standard deviation (\(\sigma_{\xi}\)) applied to all gradient updates on the frequency of switches to a colour strategy (number of networks defined as having “insight”). The frequency of insight-like switches increases gradually with \(\sigma_{\xi}\) until it plateaus. Error bars are SD. We ran 10 x 99 simulations. **(B)** Effects of noise added only to either all weights (\(\sigma_{\xi_{w}}\)), all gates (\(\sigma_{\xi_{g}}\)), all motion parameters (i.e. motion weights and motion gates, \(\sigma_{\xi_{g_{m}},\xi_{w_{m}}}\)) or all colour parameters (\(\sigma_{\xi_{g_{c}},\xi_{w_{c}}}\)) on the frequency of insight-like switches, when noise is only applied to the respective network _weights_ and/or _gates_. The frequency of insight-like switches increases gradually with \(\sigma_{\xi_{w}}\) until it plateaus (dashed purple line), while it jumps abruptly only after relatively high levels of \(\sigma_{\xi_{g}}\) (solid purple line). Noise on the motion parameters alone (\(\sigma_{\xi_{g_{m}},\xi_{w_{m}}}\)) is not sufficient for insight-like switches (lightest purple shade), but already small noise on the colour parameters (\(\sigma_{\xi_{g_{c}},\xi_{w_{c}}}\)) is sufficient for the frequency of insight networks to plateau (darkest purple shade). Error bars are SD. We ran 10 x 99 simulations. Colour scheme as in Fig. 1B. **(C)** Influence of \(\lambda\) on the frequency of switches to a colour strategy (number of networks defined as having “insight”). The frequency of insight-like switches declines with increasing \(\lambda\) for L1-regularised networks, but is largely unaffected for L2-regularised networks. **(D)** Influence of \(\lambda\) on the averaged switch points. The averaged switch point occurs later in the task with increasing \(\lambda\) for both L1- and L2-regularised networks. Error bars signify SEM.
the possibility that sudden behavioural changes which occur even during gradual learning could in turn lead to the subjective effects that accompany insights Fensch et al. (2003); Esser et al. (2022).
Our results also highlight noise and regularisation as aspects of brain function that are involved in the generation of insights. Cellular and synaptic noise is omnipresent in brain activity Faisal et al. (2008); Waschke et al. (2021), and has a number of known benefits, such as stochastic resonance and robustness that comes with probabilistic firing of neurons based on statistical fluctuations due to Poissonian neural spike timing Rolls et al. (2008). It has also been noted that noise plays an important role in jumps between brain states, when noise provokes transitioning between attractor states Rolls and Deco (2012). Previous studies have therefore noted that stochastic brain dynamics can be advantageous, allowing e.g. for creative problem solving (as in our case), exploratory behaviour, and accurate decision making Rolls and Deco (2012); Faisal et al. (2008); Garrett et al. (2013); Waschke et al. (2021). Our work adds a computationally precise explanation of how noise can lead to insights to this literature. Questions about whether inter-individual differences in neural variability predict insights Garrett et al. (2013), or about whether noise that occurs during synaptic updating is crucial remain an interesting topic for future research.
Previous work has also suggested the occurrence and possible usefulness of regularisation in the brain. Regularisation has for instance been implied in synaptic scaling, which helps to adjust synaptic weights in order to maintain a global firing homeostasis Lee et al. (2019), thereby aiding energy requirements and reducing memory interference Tononi and Cirelli (2014); De Vivo et al. (2017). It has also been proposed that regularisation modulates the threshold for induction of long-term potentiation Lee et al. (2019). These mechanisms therefore present possible synaptic factors that contribute to insight-like behaviour in humans and animals. We note that synaptic scaling has often been linked to sleep Tononi and Cirelli (2014), and regularisation during sleep has also been suggested to help avoid overfitting to experiences made during the day and thereby to support generalisation Hoel (2021). Since our experiments were conducted in an uninterrupted fashion during daylight, our findings could not reflect any sleep effects. The findings above nevertheless suggest a possible link between sleep, synaptic scaling and insight Wagner et al. (2004); Lacaux et al. (2021).
On a more cognitive level, regularisation has been implied in the context of heuristics. In this notion, regularisation has been proposed to function as an infinitely strong prior in a Bayesian inference framework Parpart et al. (2018). This infinitely strong prior would work as a sort of attention mechanism and regularise input and information in a way that is congruent with the specific prior, whereas a finite prior would under this assumption enable learning from experience Parpart et al. (2018). Another account regards cognitive control as regularised optimisation Ritz et al. (2022). According to this theory, better transfer learning is supported by effort costs regularising towards more task-general policies. It therefore seems possible that the factors that impact regularisation during learning can also lead to a neural switch between states that might be more or less likely to govern insights.
The occurrence of insight-like behaviour with the same characteristics as found in humans was specific to L1-regularised networks, while no comparable similarity occurred in L2- or non-regularised networks. Although L2-regularised neural networks learned to suppress initially irrelevant colour feature inputs and showed abrupt performance increases reminiscent of insights, only L1-networks exhibited a wide distribution of time points at which the insight-like switches occurred (delay) as well as a selectivity of the phenomenon to a subgroup of networks, as found in humans. We note that L2- and non-regularised networks technically performed better on the task, because they collectively improved their behavioural efficiency sooner. An important open question therefore remains under which circumstances L1 is the most beneficial form of regularisation. One possibility could be that the task is too simple for L1-regularisation to be beneficial. It is conceivable that L1-regularisation only starts being advantageous in more complex task settings, when generalisation across task sets is required and a segregation of the task dimensions to learn about at a given time would prove useful.
Taken together, gradual training of neural networks with gate modulation leads to insight-like behaviour as observed in humans, and points to roles of regularisation, noise and "silent knowledge" in this process. These results make an important contribution to the general understanding of learning dynamics and representation formation in environments with non-stationary feature relevance in both biological and artificial agents.
## Methods
### Task
#### Stimuli
We employed a perceptual decision task that required a binary choice about circular arrays of moving dots Rajananda et al. (2018), similar to the spontaneous strategy switch task developed earlier Schuck et al. (2015). Dots were characterised by two features, (1) a motion direction (four possible orthogonal directions: NW, NE, SW, SE) and (2) a
colour (orange or purple, Fig.1A). The noise level of the motion feature was varied in 5 steps (5%, 10%, 20%, 30% or 45% coherent motion), making motion judgement relatively harder or easier. Colour difficulty was constant, thus consistently allowing easy identification of the stimulus colour. The condition with most noise (5% coherence) occurred slightly more frequently than the other conditions (30 trial per 100, vs 10, 20, 20, 20 for the other conditions).
The task was coded in JavaScript and made use of jsPsych 6.1.0 plugins. Participants were required to use a desktop computer (no tablets or mobile phones) with a screen of at least 13 inches in diagonal, and to run the experiment in either a Firefox or Google Chrome browser.
On every trial, participants were presented a cloud of 200 moving dots with a radius of 7 pixels each. In order to avoid tracking of individual dots, dots had a lifetime of 10 frames before they were replaced. Within the circle shape of 400 pixel width, a single dot moved 6 pixel lengths in a given frame. Each dot was either designated to be coherent or incoherent and remained so throughout all frames in the display, whereby each incoherent dot followed a randomly designated alternative direction of motion.
The trial duration was 2000 ms and a response could be made at any point during that time window. After a response had been made via one of the two button presses, the white fixation cross at the centre of the stimulus would turn into a binary feedback symbol (happy or sad smiley) that would be displayed until the end of the trial (Fig.1C). An inter trial interval (ITI) of either 400, 600, 800 or 1000 ms was randomly selected. If no response was made, a "TOO SLOW" feedback was displayed for 300 ms before being replaced by the fixation cross for the remaining time of the ITI.
### Task Design
For the first 400 trials, the _motion phase_, the correct binary choice was only related to stimulus motion (two directions each on a diagonal were mapped onto one choice), while the colour changed randomly from trial to trial (Fig.1D). For the binary choice, participants were given two response keys, "X" and "M". The NW and SE motion directions corresponded to a left key press ("X"), while NE and SW corresponded to a right key press ("M") (Fig.1A). Participants received trial-wise binary feedback (correct or incorrect), and therefore could learn which choice they had to make in response to which motion direction (Fig.1C).
We did not specifically instruct participants to pay attention to the motion direction. Instead, we instructed them to learn how to classify the moving dot clouds using the two response keys, so that they would maximise their number of correct choices. To ensure that participants would pick up on the motion relevance and the correct stimulus-response mapping, motion coherence was set to be at 100% in the first block (100 trials), meaning that all dots moved towards one coherent direction. Participants learned this mapping well and performed close to ceiling (87% correct, t-test against chance: \(t(98)=37.4\), \(p<.001\)). In the second task block, we introduced the lowest, and therefore easiest, three levels of motion noise (20%, 30% and 45% coherent motion), before starting to use all five noise levels in block 3. Since choices during this phase should become solely dependent on motion, they should be affected by the level of motion noise. We assessed how well participants had learned to discriminate the motion direction after the fourth block. Participants that did not reach an accuracy level of at least 85% in the three lowest motion noise levels during this last task block of the pre-training were excluded from the _motion and colour phase_. All subjects were notified before starting the experiment, that they could only advance to the second task phase (_motion and colour phase_, although this was not communicated to participants) if they performed well enough in the first phase and that they would be paid accordingly for either one or two completed task phases.
After the _motion phase_, in the _motion and colour phase_, the colour feature became predictive of the correct choice in addition to the motion feature (Fig.1D). This meant that each response key, and thus motion direction diagonal, was consistently paired with one colour, and that colour was fully predictive of the required choice. Orange henceforth corresponded to a correct "X" key press and a NW/SE motion direction, while purple was predictive of a correct "M" key press and NE/SW motion direction (Fig.1A). This change in feature relevance was not announced to participants, and the task continued for another 400 trials as before - the only change being the predictiveness of colour.
Before the last task block we asked participants whether they 1) noticed a rule in the experiment, 2) how long it took until they noticed it, 3) whether they used the colour feature to make their choices and 4) to replicate the mapping between stimulus colour and motion directions. We then instructed them about the correct colour mapping and asked them to rely on colour for the last task block. This served as a proof that subjects were in principle able to do the task based on the colour feature and to show that, based on this easier task strategy, accuracy should be near ceiling for all participants in the last instructed block.
### Human Participants
Participants between eighteen and 30 years of age were recruited online through Prolific.
Participation in the study was contingent on showing learning of the stimulus classification. Hence, to assess whether participants had learned to correctly identify motion directions of the moving dots, we probed their accuracy on the three easiest, least noisy coherence levels in the last block of the uncorrelated task phase. If subjects reached an accuracy level of at least 85%, they were selected for participation in the experiment.
Ninety-six participants were excluded due to insufficient accuracy levels after the _motion phase_ as described above. 99 participants learned to classify the dots' motion direction, passed the accuracy criterion and completed both task phases. These subjects make up the final sample included in all analyses. 34 participants were excluded due to various technical problems or premature quitting of the experiment. All participants gave informed consent prior to beginning the experiment. The study protocol was approved by the local ethics committee of the Max Planck Institute for Human Development. Participants received 3€ for completing only the first task phase and 7€ for completing both task phases.
### Neural Networks
#### L1-regularised Neural Networks
We utilise a simple neural network model to reproduce the observations of the human behavioural data in a simplified supervised learning regression setting. We trained a simple neural network with two input nodes, two input gates and one output node on the same decision making task (Fig.1B).
The network received two inputs, \(x_{m}\) and \(x_{c}\), corresponding to the stimulus motion and colour, respectively, and had one output, \(\hat{y}\). Importantly, each input had one associated multiplicative gate (\(g_{m}\), \(g_{c}\)) such that output activation was defined as \(\hat{y}=\mathrm{sign}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta)\) where \(\eta\sim\mathcal{N}(0,\sigma)\) is Gaussian noise (Fig.1B).
To introduce competitive dynamics between the input channels, we added L1-regularisation on the gate weights \(g\), resulting in the following loss function:
\[\mathcal{L}=\frac{1}{2}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta-y)^{2}+\lambda( |g_{m}|+|g_{c}|) \tag{5}\]
The network was trained in a gradual fashion through online gradient descent with Gaussian white noise \(\xi\) added to the gradient update and a fixed learning rate \(\alpha\). Given the loss function, this yields the following update equations for noisy stochastic gradient descent (SGD):
\[\Delta w_{m}=-\alpha x_{m}g_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)+\xi_{w_{m}} \tag{6}\]
\[\Delta g_{m}=-\alpha x_{m}w_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)-\alpha\lambda\,\mathrm{sign}(g_{m})+\xi_{g_{m}} \tag{7}\]
\[\Delta w_{c}=-\alpha x_{c}g_{c}(x_{c}g_{c}w_{c}+x_{m}g_{m}w_{m}+\eta-y)+\xi_{w_{c}} \tag{8}\]
\[\Delta g_{c}=-\alpha x_{c}w_{c}(x_{c}g_{c}w_{c}+x_{m}g_{m}w_{m}+\eta-y)-\alpha\lambda\,\mathrm{sign}(g_{c})+\xi_{g_{c}} \tag{9}\]
with \(\lambda=0.07\), \(\alpha=0.6\) and \(\sigma_{\xi}=0.05\).
This implies that the evolution of the colour weights and gates will exhibit non-linear quadratic and cubic dynamics, driven by the interaction of \(w_{c}\) and \(g_{c}\). Multiplying the weights \(w\) with the regularised gate weights \(g\) leads to smaller weights and therefore initially slower increases of the colour weights \(w_{c}\) and respective gate weights \(g_{c}\) after colour has become predictive of correct choices.
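To make the update rule concrete, the following minimal Python sketch implements one noisy SGD step on the loss in Eq. (5), following Eqs. (6)-(9). The variable names, random seed, output-noise SD and the assumption that responses and targets are coded as \(\pm 1\) are illustrative choices rather than details of the original implementation; the handling of the non-differentiability at zero is added separately below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters as stated above; sigma_out is an assumed output-noise SD.
alpha, lam, sigma_xi, sigma_out = 0.6, 0.07, 0.05, 1.0

def choice(params, x_m, x_c):
    """Network response, assuming choices/targets are coded as -1 and +1."""
    w_m, g_m, w_c, g_c = params
    eta = rng.normal(0.0, sigma_out)
    return np.sign(g_m * w_m * x_m + g_c * w_c * x_c + eta)

def sgd_step(params, x_m, x_c, y):
    """One noisy SGD step on the L1-regularised loss (Eq. 5), following Eqs. 6-9."""
    w_m, g_m, w_c, g_c = params
    eta = rng.normal(0.0, sigma_out)
    err = g_m * w_m * x_m + g_c * w_c * x_c + eta - y  # residual of the linear readout
    # All deltas are computed from the pre-update parameter values.
    d_w_m = -alpha * x_m * g_m * err + rng.normal(0.0, sigma_xi)
    d_g_m = -alpha * x_m * w_m * err - alpha * lam * np.sign(g_m) + rng.normal(0.0, sigma_xi)
    d_w_c = -alpha * x_c * g_c * err + rng.normal(0.0, sigma_xi)
    d_g_c = -alpha * x_c * w_c * err - alpha * lam * np.sign(g_c) + rng.normal(0.0, sigma_xi)
    return (w_m + d_w_m, g_m + d_g_m, w_c + d_w_c, g_c + d_g_c)
```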
To understand this effect of non-linearity analytically, we used a simplified setup of the same model without gate weights:
\[\mathcal{L}=[w_{m}x_{m}+w_{c}x_{c}+\eta-y]^{2} \tag{10}\]
Using this model, we observe exponential increases of the colour weights \(w_{c}\) after the onset of the _motion and colour phase_. This confirms that the interaction of \(w_{c}\) and \(g_{c}\), as well as the regularisation applied to \(g_{c}\) are necessary for the insight-like non-linear dynamics including a distribution of insight onsets as well as variety in slope steepness of insight-like switches.
Note that because the regularisation term is non-differentiable at \(0\), we cannot take the limit \(\alpha\to 0\), but average over the data instead. To avoid oscillations of the coefficients around \(0\) due to the non-differentiability, we added the following rules after each update of the gates: (1) if the gate \(g^{t}\) was zero before the update, a regularisation term \(-\min(\alpha\lambda,|g^{t+1}|)\,\mathrm{sign}(g^{t+1})\) was added and (2) if the gate changed sign during the update, the value was set to \(0\).
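A minimal sketch of these two rules, applied to each gate after the update step sketched above; the function name and the convention of passing the pre- and post-update gate values are our own illustrative assumptions.

```python
import numpy as np

def apply_gate_rules(g_old, g_new, alpha=0.6, lam=0.07):
    """Post-update handling of the non-differentiability of |g| at zero."""
    # Rule 1: if the gate was exactly zero before the update, add a capped
    # regularisation step that cannot push the gate past zero.
    if g_old == 0.0:
        g_new = g_new - min(alpha * lam, abs(g_new)) * np.sign(g_new)
    # Rule 2: if the update made the gate change sign, clamp it to zero.
    elif np.sign(g_old) != np.sign(g_new):
        g_new = 0.0
    return g_new
```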
The accuracy is given by:
\[\mathbb{P}[\hat{y}=y\,|\,w_{m},g_{m},w_{c},g_{c}]=\frac{1}{2}\left[1+\mathrm{erf}\!\left(\frac{g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}}{\sqrt{2\left((g_{m}w_{m}\sigma_{m})^{2}+(g_{c}w_{c}\sigma_{c})^{2}+\sigma^{2}\right)}}\right)\right] \tag{11}\]
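For reference, a direct transcription of Eq. (11) in Python, assuming \(x_{m}\), \(x_{c}\) denote the means and \(\sigma_{m}\), \(\sigma_{c}\) the standard deviations of the motion and colour input distributions, and \(\sigma\) the output-noise SD; the argument names are ours.

```python
import math

def predicted_accuracy(w_m, g_m, w_c, g_c, x_m, x_c, sd_m, sd_c, sd_out):
    """Expected probability of a correct response under Eq. (11)."""
    signal = g_m * w_m * x_m + g_c * w_c * x_c
    spread = math.sqrt(2.0 * ((g_m * w_m * sd_m) ** 2
                              + (g_c * w_c * sd_c) ** 2
                              + sd_out ** 2))
    return 0.5 * (1.0 + math.erf(signal / spread))
```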
We trained the network on a curriculum precisely matched to the human task, and adjusted hyperparameters (noise levels), such that baseline network performance and learning speed were carefully equated between humans and networks.
Specifically, we simulated the same number of networks as humans included in the final analysis sample (\(N=99\)). We matched the motion-noise-dependent performance of a given simulation to the respective human subject using a non-linear COBYLA optimiser. While the mean of the colour input distribution (0.22) as well as the standard deviations of both input distributions were fixed (0.01 for colour and 0.1 for motion), the respective motion input distribution means were individually fitted for each single simulation as described above.
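A simplified sketch of how such a fit could be set up with SciPy's COBYLA method, using the closed-form accuracy above as a stand-in for simulating a network; in the actual procedure the match was computed per coherence condition and participant, and the weight, gate and starting values below are purely illustrative.

```python
from scipy.optimize import minimize

def fit_motion_mean(target_acc, w_m=3.5, g_m=0.5, w_c=0.0, g_c=0.0,
                    sd_m=0.1, sd_c=0.01, x_c=0.22, sd_out=1.0):
    """Find a motion input mean whose predicted accuracy matches a human's accuracy."""
    def loss(mu):
        acc = predicted_accuracy(w_m, g_m, w_c, g_c, mu[0], x_c, sd_m, sd_c, sd_out)
        return (acc - target_acc) ** 2
    res = minimize(loss, x0=[0.1], method="COBYLA")
    return res.x[0]
```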
The input sequences the networks received were sampled from the same ten input sequences that humans were exposed to in task phase two. This means that for the task part where colour was predictive of the correct binary choice, _motion and colour phase_ (500 trials in total), networks and humans received the same input sequences.
The networks were given a slightly longer _training phase_ of six blocks (600 trials), in comparison to the two-block _training phase_ that human subjects were exposed to (Fig.1D). Furthermore, human participants first completed a block with 100% motion coherence before doing one block with low motion noise, whereas the networks received six _training phase_ blocks containing the three highest motion coherence levels. Both human subjects and networks completed two blocks including all noise levels in the _motion phase_ before colour became predictive in the _motion and colour phase_.
#### L2-regularised Neural Networks
To probe the effect of the aggressiveness of the regulariser on insight-like switch behaviour in networks, we compared our L1-regularised networks with models of the same architecture, but added L2-regularisation on the gate weights \(g\). This yielded the following loss function:
\[\mathcal{L}=\frac{1}{2}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta-y)^{2}+\frac{ \lambda}{2}(|g_{m}|+|g_{c}|)^{2} \tag{12}\]
From the loss function we can again derive the following update equations for noisy stochastic gradient descent (SGD):
\[\Delta w_{m} =-\alpha x_{m}g_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)+\xi_{ w_{m}} \tag{13}\] \[\Delta g_{m} =-\alpha x_{m}w_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)\] \[\quad-\alpha\lambda\text{sign}(\text{g}_{\text{m}})\text{abs}( \text{g}_{\text{m}})+\xi_{\text{g}_{\text{m}}}\] (14) \[\Delta w_{c} =-\alpha x_{c}g_{c}(x_{c}g_{c}w_{c}+x_{m}g_{m}w_{m}+\eta-y)+\xi_{ w_{c}}\] (15) \[\Delta g_{c} =-\alpha x_{c}w_{c}(x_{c}g_{c}w_{c}+x_{m}g_{m}w_{m}+\eta-y)\] \[\quad-\alpha\lambda\text{sign}(\text{g}_{\text{c}})(\text{g}_{ \text{m}})\text{abs}(\text{g}_{\text{c}})+\xi_{\text{g}_{\text{c}}} \tag{16}\]
with \(\lambda=0.07\), \(\alpha=0.6\) and \(\sigma_{\xi}=0.05\).
The training is otherwise the same as for the L1-regularised networks.
### Modelling of insight-like switches
#### Models of colour use
In order to probe whether strategy switches in low coherence trials occurred abruptly, we compared three different models with different assumptions about the form of the data. First, we fitted a linear model with two free parameters:
\[y=mt+y_{0}\]
where \(m\) is the slope, \(y_{0}\) the y-intercept and \(t\) is time (here, task blocks)(Fig. S2). This model should fit no-insight participants' data well where colour use either increases linearly over the course of the experiment or stays at a constant level.
Contrasting the assumptions of the linear model, we next tested whether colour-based responses increased abruptly by fitting a step model with three free parameters, a switch point \(t_{s}\), the step size \(s\) and a maximum value \(y_{max}\) (Fig. S2), so that
\[y=\begin{cases}y_{max}-s&\text{if }t<t_{s}\\ y_{max}&\text{if }t\geq t_{s}\end{cases}\]
We also included a sigmoid function with three free parameters as a smoother approximation of the step model:
\[y=\frac{y_{max}-y_{min}}{1+e^{-m(t-t_{s})}}+y_{min}\]
where \(y_{max}\) is the fitted maximum value of the function, \(m\) is the slope and \(t_{s}\) is the inflection point (Fig. S2). \(y_{min}\) was given by each individual's averaged accuracy on 5% motion coherence trials in block 3-4.
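The following sketch illustrates how the linear and sigmoid fits and their comparison could be implemented; the synthetic accuracy time course, the starting values and the Gaussian-noise form of the BIC are our own illustrative choices, and the step model is omitted here for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(t, m, y0):
    return m * t + y0

def sigmoid(t, y_max, m, t_s, y_min):
    return (y_max - y_min) / (1.0 + np.exp(-m * (t - t_s))) + y_min

def gaussian_bic(y, y_hat, k):
    """BIC under a Gaussian noise model: n*log(RSS/n) + k*log(n)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# Hypothetical accuracy per halved task block for one subject.
rng = np.random.default_rng(1)
t = np.arange(16.0)
acc = np.r_[np.full(8, 0.58), np.full(8, 0.92)] + rng.normal(0, 0.03, 16)
y_min = acc[:8].mean()  # fixed from the pre-switch blocks, as described above

p_lin, _ = curve_fit(linear, t, acc)
p_sig, _ = curve_fit(lambda t, y_max, m, t_s: sigmoid(t, y_max, m, t_s, y_min),
                     t, acc, p0=[acc.max(), 1.0, 8.0], maxfev=10000)
bic_lin = gaussian_bic(acc, linear(t, *p_lin), 2)
bic_sig = gaussian_bic(acc, sigmoid(t, *p_sig, y_min), 3)
steepness = 0.25 * p_sig[1] * (p_sig[0] - y_min)  # slope at the inflection point (derived below)
```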
Comparing the model fits across all subjects using the Bayesian Information Criterion (BIC) and protected exceedance probabilities yielded a preference for the sigmoid function over both a step and linear model, for both humans (Fig.2E) and L1-regularised neural networks (Fig.3D). First, this supports our hypothesis that insight-like strategy switches do not occur in an incremental linear fashion, but abruptly, with variance in the steepness of the switch. Second, it implies that at least a subset of subjects shows evidence for an insight-like strategy switch.
#### Human participants
To investigate these insight-like strategy adaptations, we modelled human participants' data using the individually fitted sigmoid functions (Fig. S3). The criterion we defined in order to assess whether a subject had switched to the colour strategy was the slope at the inflection point, which expresses how steep the performance jump was after having an insight about colour. We obtained this value by taking the sigmoid function's derivative with respect to time
\[\frac{\partial y}{\partial t}=(y_{max}-y_{min})\frac{me^{-m(t-t_{s})}}{(1+e^{-m (t-t_{s})})^{2}}\]
and then evaluating the above equation for the fitted switch point, \(t=t_{s}\), which yields:
\[y^{\prime}(t_{s})=\frac{1}{4}m(y_{max}-y_{min})\]
Switch misclassifications can be caused by irregularities and small jumps in the data, irrespective of a colour strategy switch. We therefore corrected for a general fit of the data to the model by subtracting the individually assessed general model fit from the slope steepness at the inflection point. Insight subjects were then classified as those participants whose corrected slope steepness at the inflection point fell outside of the 100% percentile of a control group's (no change in predictiveness of colour) distribution of that same parameter. By definition, insights about a colour rule cannot occur in this control condition, hence values outside this out-of-sample distribution evidence abrupt strategy improvements hinting at insight (Fig.3F).
Before the last task block we asked participants whether they used the colour feature to make their choices. 57.6% of participants indicated that they used colour to press correctly. The 48.5% insight participants we identified using our classification method overlapped to 79.2% with participants' self reports.
#### Neural Networks
We used the same classification procedure for neural networks. All individual sigmoid function fits for L1-regularised networks can be found in the Supplementary Material (Fig. S4).
## Acknowledgements
ATL is supported by the International Max Planck Research School on Computational Methods in Psychiatry and Ageing Research (IMPRS COMPPPSYCH, www.mps.ucl-centre.mpg.de). PMK was funded by the Wellcome Trust (award: 210849/Z18/Z). AMS was supported by a Sir Henry Dale Fellowship from the Wellcome Trust and Royal Society (216386/Z19/Z), and the Sainsbury Wellcome Centre Core Grant from Wellcome (219627/Z/19/Z) and the Gatsby Charitable Foundation (GAT3755). AMS is a CIFAR Azrieli Global Scholar in the Learning in Machines & Brains program. CS was funded by the European Research Council (ERC Consolidator awards 725937) and Special Grant Agreement No. 945539 (Human Brain Project SGA). NWS was funded by the Federal Government of Germany and the State of Hamburg as part of the Excellence Initiative, a Starting Grant from the European Union
(ERC-StG-REPLAY-852669), and an Independent Max Planck Research Group grant awarded by the Max Planck Society (M.TN.A.BILD0004). The funding parties had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
We thank Robert Gaschler for helpful comments on this manuscript.
## References
* Kohler (1925) Wolfgang Kohler. _The Mentality of Apes_. Kegan Paul, Trench, Trubner & Co. ; Harcourt, Brace & Co., 1925.
* Durstewitz et al. (2010) Daniel Durstewitz, Nicole M. Vittoz, Stan B. Floresco, and Jeremy K. Seamans. Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning. _Neuron_, 66(3):438-448, 2010. ISSN 08966273. doi: 10.1016/j.neuron.2010.03.029. URL [http://dx.doi.org/10.1016/j.neuron.2010.03.029](http://dx.doi.org/10.1016/j.neuron.2010.03.029).
* Stuyck et al. (2021) Hans Stuyck, Bart Aben, Axel Cleeremans, and Eva Van den Bussche. The Aha! moment: Is insight a different form of problem solving? _Consciousness and Cognition_, 90(April 2020):103055, 2021. ISSN 10902376. doi: 10.1016/j.concog.2020.103055. URL [https://doi.org/10.1016/j.concog.2020.103055](https://doi.org/10.1016/j.concog.2020.103055).
* Weisberg (2015) Robert W. Weisberg. Toward an integrated theory of insight in problem solving. _Thinking and Reasoning_, 21(1):5-39, 2015. ISSN 14640708. doi: 10.1080/13546783.2014.886625. URL [http://dx.doi.org/10.1080/13546783.2014.886625](http://dx.doi.org/10.1080/13546783.2014.886625).
* Kounios and Beeman (2014) John Kounios and Mark Beeman. The cognitive neuroscience of insight. _Annual Review of Psychology_, 65:71-93, 2014. ISSN 15452085. doi: 10.1146/annurev-psych-010213-115154.
* Jung-Beeman et al. (2004) Mark Jung-Beeman, Edward M. Bowden, Jason Haberman, Jennifer L. Frymiare, Stella Arambel-Liu, Richard Greenblatt, Paul J. Reber, and John Kounios. Neural activity when people solve verbal problems with insight. _PLoS Biology_, 2(4):500-510, 2004. ISSN 15449173. doi: 10.1371/journal.pbio.0020097.
* Danek et al. (2014) Amory H. Danek, Thomas Fraps, Albrecht von Muller, Benedikt Grothe, and Michael Ollinger. It's a kind of magic-what self-reports can reveal about the phenomenology of insight problem solving. _Frontiers in Psychology_, 5(DEC):1-11, 2014. ISSN 16641078. doi: 10.3389/fpsyg.2014.01408.
* Kounios and Beeman (2015) John Kounios and Mark Beeman. _The eureka factor: Aha moments, creative insight, and the brain_. Random House, New York, 2015. ISBN 9781400068548.
* Shen et al. (2018) Wangbing Shen, Yu Tong, Feng Li, Yuan Yuan, Bernhard Hommel, Chang Liu, and Jing Luo. Tracking the neurodynamics of insight: A meta-analysis of neuroimaging studies. _Biological Psychology_, 138(January):189-198, 2018. ISSN 18736246. doi: 10.1016/j.biopsycho.2018.08.018. URL [https://doi.org/10.1016/j.biopsycho.2018.08.018](https://doi.org/10.1016/j.biopsycho.2018.08.018).
* Tik et al. (2018) Martin Tik, Ronald Sladky, Caroline Di Bernardi Luft, David Willinger, Andre Hoffmann, Michael J. Banissy, Joydeep Bhattacharya, and Christian Windischberger. Ultra-high-field fMRI insights on insight: Neural correlates of the Aha!-moment. _Human Brain Mapping_, 39(8):3241-3252, 2018. ISSN 10970193. doi: 10.1002/hbm.24073.
* Schuck et al. (2015) Nicolas W. Schuck, Robert Gaschler, Dorit Wenke, Jakob Heinzle, Peter A. Frensch, John Dylan Haynes, and Carlo Reverberi. Medial prefrontal cortex predicts internally driven strategy shifts. _Neuron_, 86(1):331-340, 2015. ISSN 10974199. doi: 10.1016/j.neuron.2015.03.015. URL [http://dx.doi.org/10.1016/j.neuron.2015.03.015](http://dx.doi.org/10.1016/j.neuron.2015.03.015).
* Schuck et al. (2022) Nicolas W. Schuck, Amy X. Li, Dorit Wenke, Destina S. Ay-Bryson, Anika T. Loewe, Robert Gaschler, and Yee Lee Shing. Spontaneous discovery of novel task solutions in children. _PloS ONE_, 17(5):e0266253, 2022. doi: 10.1371/journal.pone.0266253. URL [http://dx.doi.org/10.1371/journal.pone.0266253](http://dx.doi.org/10.1371/journal.pone.0266253).
* Gaschler et al. (2019) Robert Gaschler, Nicolas W. Schuck, Carlo Reverberi, Peter A. Frensch, and Dorit Wenke. Incidental covariation learning leading to strategy change. _PLoS ONE_, 14(1):1-32, 2019. ISSN 19326203. doi: 10.1371/journal.pone.0210597.
* Gaschler et al. (2013) Robert Gaschler, Bianca Vaterrodt, Peter A. Frensch, Alexandra Eichler, and Hilde Haider. Spontaneous Usage of Different Shortcuts Based on the Commutativity Principle. _PLoS ONE_, 8(9):1-13, 2013. ISSN 19326203. doi: 10.1371/journal.pone.0074972.
* Gaschler et al. (2015) Robert Gaschler, Julian N. Marewski, and Peter A. Frensch. Once and for all--How people change strategy to ignore irrelevant information in visual tasks. _Quarterly Journal of Experimental Psychology_, 68(3):543-567, 2015. ISSN 17470226. doi: 10.1080/17470218.2014.961933. URL [http://dx.doi.org/10.1080/17470218.2014.961933](http://dx.doi.org/10.1080/17470218.2014.961933).
* Bowden et al. (2005) Edward M. Bowden, Mark Jung-Beeman, Jessica Fleck, and John Kounios. New approaches to demystifying insight. _Trends in Cognitive Sciences_, 9(7):322-328, 2005. ISSN 13646613. doi: 10.1016/j.tics.2005.05.012.
* Metcalfe and Wiebe (1987) Janet Metcalfe and David Wiebe. Intuition in insight and noninsight problem solving. _Memory & Cognition_, 15(3):238-246, 1987. ISSN 0090502X. doi: 10.3758/BF03197722.
* Karlsson et al. (2012) Mattias P. Karlsson, Dougal G.R. Tervo, and Alla Y. Karpova. Network resets in medial prefrontal cortex mark the onset of behavioral uncertainty. _Science_, 338(6103):135-139, 2012. ISSN 10959203. doi: 10.1126/science.1226518.
* Miller and Katz (2010) Paul Miller and Donald B. Katz. Stochastic transitions between neural states in taste processing and decision-making. _Journal of Neuroscience_, 30(7):2559-2570, 2010. ISSN 02706474. doi: 10.1523/JNEUROSCI.3047-09.2010.
* Allegra et al. (2020) Michele Allegra, Shima Seyed-Allaei, Nicolas W. Schuck, Daniele Amati, Alessandro Laio, and Carlo Reverberi. Brain network dynamics during spontaneous strategy shifts and incremental task optimization. _NeuroImage_, 217(January):116854, 2020. ISSN 10959572. doi: 10.1016/j.neuroimage.2020.116854. URL [https://doi.org/10.1016/j.neuroimage.2020.116854](https://doi.org/10.1016/j.neuroimage.2020.116854).
* Friston et al. (2017) Karl J Friston, Marco Lin, Christopher D Frith, Giovanni Pezzulo, J. Allan Hobson, and Sasha Ondobaka. Active Inference, Curiosity and Insight. _Neural Computation_, 29:2633-2683, 2017. doi: 10.1162/neco.
* Ohlsson (1992) Stellan Ohlsson. Information-processing explanations of insight and related phenomena. In _Advances in the Psychology of Thinking_. Harvester Wheatseaf, 1992.
* Power et al. (2022) Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets. _arXiv_, pages 1-10, 2022. URL [http://arxiv.org/abs/2201.02177](http://arxiv.org/abs/2201.02177).
* Saxe et al. (2014) Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In _Proceedings of the International Conference on Learning Representations 2014.,_, pages 1-22, 2014.
* Saxe et al. (2019a) Andrew M. Saxe, James L. McClelland, and Surya Ganguli. A mathematical theory of semantic development in deep neural networks. _Proceedings of the National Academy of Sciences of the United States of America_, 166(23):11537-11546, 2019a. ISSN 10916490. doi: 10.1073/pnas.1820226116.
* Schapiro and McClelland (2009) Anna C. Schapiro and James L. McClelland. A connectionist model of a continuous developmental transition in the balance scale task. _Cognition_, 110(3):395-411, 2009. ISSN 00100277. doi: 10.1016/j.cognition.2008.11.017.
* McClelland and Rogers (2003) James L. McClelland and Timothy T. Rogers. The parallel distributed processing approach to semantic cognition. _Nature Reviews Neuroscience_, 4(4):310-322, 2003. ISSN 14710048. doi: 10.1038/nrn1076.
* Flesch et al. (2022) Timo Flesch, Keno Juechems, Tsvetomira Dumbalska, Andrew Saxe, and Christopher Summerfield. Orthogonal representations for robust context-dependent task performance in brains and neural networks. _Neuron_, 110(7):1258-1270, 2022. ISSN 10974199. doi: 10.1016/j.neuron.2022.01.005. URL [https://doi.org/10.1016/j.neuron.2022.01.005](https://doi.org/10.1016/j.neuron.2022.01.005).
* Saxe et al. (2019b) Andrew M. Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D. Tracey, and David D. Cox. On the information bottleneck theory of deep learning. _Journal of Statistical Mechanics: Theory and Experiment_, 2019(12), 2019b. ISSN 17425468. doi: 10.1088/1742-5468/ab3985.
* Bengio et al. (2009) Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum Learning. In _In: Proceedings of International Conference on Machine Learning_, pages 41-48, 2009. URL [http://arxiv.org/abs/1611.06204](http://arxiv.org/abs/1611.06204).
* Flesch et al. (2018) Timo Flesch, Jan Balaguer, Ronald Dekker, Hamed Nili, and Christopher Summerfield. Comparing continual task learning in minds and machines. _Proceedings of the National Academy of Sciences of the United States of America_, 115(44):E10313-E10322, 2018. ISSN 10916490. doi: 10.1073/pnas.1800755115.
* Mehrer et al. (2020) Johannes Mehrer, Courtney J Spoerer, Nikolaus Kriegeskorte, and Tim C Kietzmann. Individual differences among deep neural network models. _Nature Communications_, 11(5725):1-12, 2020. ISSN 2041-1723. doi: 10.1038/s41467-020-19632-w. URL [http://dx.doi.org/10.1038/s41467-020-19632-w](http://dx.doi.org/10.1038/s41467-020-19632-w).
* Liu et al. (2020) Shengchao Liu, Dimitris Papailiopoulos, and Dimitris Achlioptas. Bad global minima exist and SGD can reach them. _Advances in Neural Information Processing Systems_, 2020-Decem(NeurIPS), 2020. ISSN 10495258.
* Bishop (2006) Chrisopher M. Bishop. _Pattern Recognition and Machine Learning_. Springer US, 2006. ISBN 9780387310732. doi: 10.1007/978-3-030-57077-4_11.
* Krishnamurthy et al. (2022) Kamesh Krishnamurthy, Tankut Can, and David J. Schwab. Theory of Gating in Recurrent Neural Networks. _Physical Review X_, 12(1):11011, 2022. ISSN 21603308. doi: 10.1103/PhysRevX.12.011011. URL [https://doi.org/10.1103/PhysRevX.12.011011](https://doi.org/10.1103/PhysRevX.12.011011).
* Jozefowicz et al. (2015) Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of Recurrent Network architectures. _32nd International Conference on Machine Learning, ICML 2015_, 3:2332-2340, 2015.
* Groschner et al. (2022) Lukas N. Groschner, Jonatan G. Malis, Birte Zuidinga, and Alexander Borst. A biophysical account of multiplication by a single neuron. _Nature_, 603(7899):119-123, 2022. ISSN 14764687. doi: 10.1038/s41586-022-04428-3.
* Poggio et al. (1985) Tomaso Poggio, Vincent Torre, and Christof Koch. Computational vision and regularization theory. _Nature_, 317(6035):314-319, 1985. ISSN 00280836. doi: 10.1038/317314a0.
* Poggio et al. (2018)
* Costa et al. (2017) Rui Ponte Costa, Yannis M. Assael, Brendan Shillingford, Nando De Freitas, and Tim P. Vogels. Cortical microcircuits as gated-recurrent neural networks. _Advances in Neural Information Processing Systems_, 2017-Decem(Nips 2017):272-283, 2017. ISSN 10495258.
* Rajananda et al. (2018) Sivananda Rajananda, Hakwan Lau, and Brian Odegaard. A random-dot kinematogram for web-based vision research. _Journal of Open Research Software_, 6(1), 2018. ISSN 20499647. doi: 10.5334/jors.194.
* Fensch et al. (2003) P. A. Fensch, H. Haider, D. Runger, U. Neugebauer, S. Voigt, and J. Werg. The route from implicit learning to verbal expression of what has been learned: Verbal report of incidentally experienced environmental regularity. In L. Jimenez, editor, _Attention and implicit learning_, pages 335-366. John Benjamins Publishing Company, 2003.
* Esser et al. (2022) Sarah Esser, Clarissa Lustig, and Hilde Haider. What triggers explicit awareness in implicit sequence learning? Implications from theories of consciousness. _Psychological Research_, 86(5):1442-1457, 2022. ISSN 14302772. doi: 10.1007/s00426-021-01594-3.
* Faisal et al. (2008) A. Aldo Faisal, Luc P.J. Selen, and Daniel M. Wolpert. Noise in the nervous system. _Nature Reviews Neuroscience_, 9(4):292-303, 2008. ISSN 1471003X. doi: 10.1038/nrn2258.
* Waschke et al. (2021) Leonhard Waschke, Niels A. Kloosterman, Jonas Obleser, and Douglas D. Garrett. Behavior needs neural variability. _Neuron_, 109(5):751-766, 2021. ISSN 10974199. doi: 10.1016/j.neuron.2021.01.023. URL [https://doi.org/10.1016/j.neuron.2021.01.023](https://doi.org/10.1016/j.neuron.2021.01.023).
* Rolls et al. (2008) Edmund T. Rolls, James M. Tromans, and Simon M. Stringer. Spatial scene representations formed by self-organizing learning in a hippocampal extension of the ventral visual system. _European Journal of Neuroscience_, 28(10):2116-2127, 2008. ISSN 0953816X. doi: 10.1111/j.1460-9568.2008.06486.x.
* Rolls and Deco (2012) Edmund T. Rolls and Gustavo Deco. _The Noisy Brain: Stochastic dynamics as a principle of brain function_. Oxford University Press, 2012. ISBN 9780191702471. doi: 10.1093/acprof:oso/9780199587865.001.0001.
* Garrett et al. (2013) Douglas D Garrett, Gregory R Samanez-larkin, Stuart W S Macdonald, Ulman Lindenberger, Anthony R Mcintosh, and Cheryl L Grady. Neuroscience and Biobehavioral Reviews Moment-to-moment brain signal variability : A next frontier in human brain mapping? _Neuroscience and Biobehavioral Reviews_, 37(4):610-624, 2013. ISSN 0149-7634. doi: 10.1016/j.neubiorev.2013.02.015. URL [http://dx.doi.org/10.1016/j.neubiorev.2013.02.015](http://dx.doi.org/10.1016/j.neubiorev.2013.02.015).
* Lee et al. (2019) Hey-kyoung Lee, Alfredo Kirkwood, and Hey-kyoung Lee. Mechanisms of Homeostatic Synaptic Plasticity in vivo. _PNAS_, 13(December):1-7, 2019. doi: 10.3389/fncel.2019.00520.
* Tononi and Cirelli (2014) Giulio Tononi and Chiara Cirelli. Sleep and the Price of Plasticity: From Synaptic and Cellular Homeostasis to Memory Consolidation and Integration. _Neuron_, 81(1):12-34, 2014. ISSN 08966273. doi: 10.1016/j.neuron.2013.12.025. URL [http://dx.doi.org/10.1016/j.neuron.2013.12.025](http://dx.doi.org/10.1016/j.neuron.2013.12.025).
* De Vivo et al. (2017) Luisa De Vivo, Michele Bellesi, William Marshall, Eric A. Bushong, Mark H. Ellisman, Giulio Tononi, and Chiara Cirelli. Ultrastructural evidence for synaptic scaling across the wake/sleep cycle. _Science_, 355(6324):507-510, 2017. ISSN 10959203. doi: 10.1126/science.aah5982.
* Hoel (2021) Erik Hoel. The overfitted brain : Dreams evolved to assist generalization. _Patterns_, 2(5):100244, 2021. ISSN 2666-3899. doi: 10.1016/j.patter.2021.100244. URL [https://doi.org/10.1016/j.patter.2021.100244](https://doi.org/10.1016/j.patter.2021.100244).
* Wagner et al. (2004) Ullrich Wagner, Steffen Gais, Hilde Haider, Rolf Verleger, and Jan Born. Sleep inspires insight. _Nature_, 427(6972):352-355, 2004. ISSN 00280836. doi: 10.1038/nature02223.
* Lacaux et al. (2021) Celia Lacaux, Thomas Andrillon, Celeste Bastoul, Yannis Idir, Alexandrine Fonteix-galet, Isabelle Arnulf, and Delphine Oudiette. Sleep onset is a creative sweet spot. _Science Advances_, 5866(December):1-10, 2021.
* Parpart et al. (2018) Paula Parpart, Matt Jones, and Bradley C Love. Heuristics as Bayesian inference under extreme priors. _Cognitive Psychology_, 102(March):127-144, 2018. ISSN 0010-0285. doi: 10.1016/j.cogpsych.2017.11.006. URL [https://doi.org/10.1016/j.cogpsych.2017.11.006](https://doi.org/10.1016/j.cogpsych.2017.11.006).
* Ritz et al. (2022) Harrison Ritz, Xianin Leng, and Amitai Shenhav. Cognitive control as a multivariate optimization problem. _Journal of Cognitive Neuroscience_, 34(4):569-591, 2022.
## Hidden layer model
In order to verify that our results were not merely an artefact of the oversimplified models we used, we tested the task on a more complex neural network model that had one additional hidden layer of fully connected linear units.
The linear neural network received two inputs, \(x_{m}\) and \(x_{c}\), corresponding to the stimulus motion and colour, respectively, and had two output nodes, \(\hat{y}\), as well as one hidden layer of 48 units. Importantly, each weight connecting the inputs with a hidden unit had one associated multiplicative gate \(g\). To introduce competitive dynamics between the input channels, we again applied L1-regularisation on the gate weights \(g\).
The network was trained on the Cross Entropy loss using stochastic gradient descent with \(\lambda\) = 0.002 and \(\alpha\) = 0.1.
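As a rough illustration of this setup, the sketch below shows one way a gated hidden-layer network of this kind could be written in PyTorch. The layer sizes and the values of \(\lambda\) and \(\alpha\) follow the description above, but the class and variable names are our own, the Gaussian noise injected at weights and gates is omitted, and the authors' actual implementation may differ in its details.

```python
import torch
import torch.nn as nn

class GatedHiddenLayerNet(nn.Module):
    """Two inputs (motion, colour) -> 48 gated linear hidden units -> two outputs."""
    def __init__(self, n_in=2, n_hidden=48, n_out=2):
        super().__init__()
        self.w_in = nn.Parameter(0.1 * torch.randn(n_hidden, n_in))  # input weights
        self.g = nn.Parameter(torch.rand(n_hidden, n_in))            # multiplicative gates
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        # each input-to-hidden weight is multiplied by its associated gate
        hidden = x @ (self.w_in * self.g).T
        return self.readout(hidden)

net = GatedHiddenLayerNet()
loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.SGD(net.parameters(), lr=0.1)   # alpha = 0.1
lam = 0.002                                              # lambda = 0.002

def training_step(x, y):
    optimiser.zero_grad()
    # cross-entropy loss plus an L1 penalty on the gate weights only
    loss = loss_fn(net(x), y) + lam * net.g.abs().sum()
    loss.backward()
    optimiser.step()
    return loss.item()

# toy usage: a batch of 8 two-feature stimuli with binary labels
x = torch.randn(8, 2)
y = torch.randint(0, 2, (8,))
print(training_step(x, y))
```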
As for the one-layer network, we trained this network on a curriculum precisely matched to the human task, and adjusted hyperparameters (noise levels), such that baseline network performance and learning speed were carefully equated between humans and networks (see Methods).
We employed the same analysis approach to detect insight-like behaviour (see Methods for details) by running simulations of a "control" network of the same architecture, but without correlated features and therefore without colour predictiveness in the _motion and colour phase_. We found that when we applied L1-regularisation with a regularisation parameter of \(\lambda=0.002\) on the gate weights, 18.2% of the networks exhibited _abrupt_ and _delayed_ learning dynamics, resembling insight-like behaviour in humans (Fig.1A) and thereby replicating the key insight characteristics suddenness and selectivity. Insight-like switches to the colour strategy thereby again improved the networks' performance significantly. Using the same parameters, experimental setup and analyses, but applying L2-regularisation on the gate weights \(g\), yielded an insight-like switch rate of 51.5% (Fig.1B).
We again also observed a wider distribution of delays, the time point when the switches in the _motion and colour phase_ occurred in insight networks, for L1-regularised networks with a hidden layer (Fig.1C-D).
Taken together, these results mirror our observations from network simulations with a simplified setup. We can thereby confirm that the finding that L1-regularised neural networks exhibit all key characteristics of human insight behaviour (suddenness, selectivity and delay) is not an artefact of the one-layer linearity.
## Weight and Gate Differences between L1- and L2-regularised Networks
At correlation onset (first trial of _motion and colour phase_), neither motion nor colour weights differed (motion: \(M=3.5\pm 0.6\) vs \(M=3.4\pm 0.5\), \(t(192.7)=1.2\), \(p=0.2\), \(d=0.2\), colour: \(M=0.8\pm 0.6\) vs \(M=0.8\pm 0.5\), \(t(189.2)=0.4\), \(p=0.7\), \(d=0.1\)). After learning, however, i.e. at the last trial of the _motion and colour phase_, the average absolute size of the colour weights was higher in L2- compared to L1-networks (\(M=2.6\pm 2.2\) vs \(M=4.7\pm 0.7\), \(t(115.1)=-9\), \(p<.001\), \(d=1.3\)), while the reverse was true for motion weights (\(M=3.4\pm 0.7\) vs \(M=2.8\pm 0.6\), \(t(194.9)=5.6\), \(p<.001\), \(d=0.8\)). For gate weights, differences between L1- and L2-networks are already apparent at correlation onset (first trial of _motion and colour phase_), where the mean of the motion gate was 0.53 for L1-networks and 0.58 for L2-networks, and hence lower in L1 networks,
albeit not significantly (\(t(195.1)=-1\), \(p=0.3\), \(d=0.1\), see Fig. 4E). In addition, the average absolute size of the colour gate weights was higher in L2- compared to L1-networks (\(M=0.04\pm 0.05\) vs \(M=0.002\pm 0.006\), respectively, \(t(100.6)=-7.2\), \(p<0.001\), \(d=1\)). The respective distributions also reflected these effects. L1-networks had a much narrower distribution for colour gates and a slightly narrower distribution for motion gates (L1: colour gates: 0 to 0.04, motion gates: 0 to 1.3; L2: colour gates: 0 to 2, motion gates: 0 to 1.4). After learning, i.e. at the last trial of the _motion and colour phase_, the mean colour gate size was still lower in L1- compared to L2-regularised networks (\(M=0.4\pm 0.4\) vs \(M=0.8\pm 0.2\), \(t(169.1)=-9.3\), \(p<0.001\), \(d=1.3\)), while the reverse was true for motion gates (\(M=0.3\pm 0.3\) vs \(M=0.2\pm 0.2\), \(t(152.4)=3.9\), \(p<0.001\), \(d=0.6\), see Fig. 4F). This was again also reflected in the respective distributions, with L1-networks having much wider distributions for motion gates and slightly narrower distributions for colour gates (L1: colour gates: 0 to 1.2, motion gates: 0 to 1.3; L2: colour gates: 0 to 1.3, motion gates: 0 to 0.7).
## Gaussian Noise Differences at Weights and Gates between Insight and No-Insight Networks
Comparing Gaussian noise \(\xi\sim\mathcal{N}(0,\,\sigma_{\xi}^{2})\) at the weights and gates around the individually fitted switch points revealed no differences between insight and no-insight networks for either motion or colour weights (colour weights: \(M=-0.08\pm 1\) vs. \(M=0.04\pm 0.8\); \(t(89.5)=-0.6\), \(p=0.5\), motion weights: \(M=0.5\pm 0.3\) vs. \(M=0.6\pm 0.3\); \(t(93.1)=-1.7\), \(p=0.09\)) or gates (colour gates: \(M=-0.1\pm 0.9\) vs. \(M=0.1\pm 0.9\); \(t(95.3)=0.8\), \(p=0.44\), motion gates: \(M=0.2\pm 0.6\) vs. \(M=-0.3\pm 0.8\); \(t(94.4)=2\), \(p=0.05\)). There also were no \(\sigma_{\xi}\) differences at either the start of learning (first trial of the _motion and colour phase_) (colour weights: \(M=-0.06\pm 0.8\) vs. \(M=-0.03\pm 0.5\); \(t(78.1)=-0.2\), \(p=0.8\), motion weights: \(M=0.08\pm 0.7\) vs. \(M=0.07\pm 0.7\); \(t(96.7)=1\), \(p=0.3\), colour gates: \(M=0\pm 0.6\) vs. \(M=-0.2\pm 0.7\); \(t(97)=1.6\), \(p=0.1\), motion gates: \(M=-0.04\pm 0.6\) vs. \(M=-0.07\pm 0.7\); \(t(97)=0.2\), \(p=0.8\)) or end of learning (last trial of the _motion and colour phase_)(colour weights: \(M=0.05\pm 1.3\) vs. \(M=0.08\pm 1.1\); \(t(92.7)=-0.1\), \(p=0.9\), motion weights: \(M=0\pm 1.2\) vs. \(M=-0.02\pm 1.1\); \(t(95.6)=0.04\), \(p=1\), colour gates: \(M=0.2\pm 1.1\) vs. \(M=-0.2\pm 1.2\); \(t(97)=1.7\), \(p=0.09\), motion gates: \(M=-0.1\pm 1.3\) vs. \(M=0.05\pm 1.3\); \(t(96)=-0.7\), \(p=0.5\)).
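The non-integer degrees of freedom reported above are characteristic of Welch's unequal-variance t-test; a comparison of this form can be reproduced with scipy as sketched below. The arrays are randomly generated stand-ins for the per-network values, and the pooled-standard-deviation form of Cohen's d shown here is one common convention, not necessarily the exact one used in this analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
insight = rng.normal(0.0, 1.0, size=49)      # placeholder values for insight networks
no_insight = rng.normal(0.2, 1.2, size=50)   # placeholder values for no-insight networks

# Welch's t-test: does not assume equal variances, hence fractional degrees of freedom
t, p = stats.ttest_ind(insight, no_insight, equal_var=False)

# Cohen's d using a pooled standard deviation
pooled_sd = np.sqrt((insight.std(ddof=1) ** 2 + no_insight.std(ddof=1) ** 2) / 2)
d = (insight.mean() - no_insight.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```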
Figure 1: Switch-aligned performance and switch point distributions for L1- and L2-regularised neural networks with a 48 unit hidden layer each. Blocks shown are halved task blocks (50 trials each). Error shadows signify SEM. **(A)** Switch-aligned performance for insight (18/99) and no-insight groups (81/99) respectively for L1-regularised networks with a hidden layer. **(B)** Switch-aligned performance for insight (51/99) and no-insight (48/99) L2-regularised neural networks with a hidden layer. **(C)** Switch point distributions for L1-regularised insight networks with a hidden layer. Dashed vertical line marks onset of colour predictiveness. **(D)** Switch point distributions for L2-regularised insight neural networks. Dashed vertical line marks onset of colour predictiveness.
Fig. 2: Illustrations of models and respective parameters. **(A)** Linear function with free parameters intercept \(y_{0}\) and slope \(m\). **(B)** Step function with free parameters inflection point \(t_{s}\) and function maximum \(y_{max}\). **(C)** Generalised logistic regression function with free parameters slope \(m\), inflection point \(t_{s}\) and function maximum \(y_{max}\). |
2307.13398 | The missing radial velocities of Gaia: a catalogue of Bayesian estimates
for DR3 | In an earlier work, we demonstrated the effectiveness of Bayesian neural
networks in estimating the missing line-of-sight velocities of Gaia stars, and
published an accompanying catalogue of blind predictions for the line-of-sight
velocities of stars in Gaia DR3. These were not merely point predictions, but
probability distributions reflecting our state of knowledge about each star.
Here, we verify that these predictions were highly accurate: the DR3
measurements were statistically consistent with our prediction distributions,
with an approximate error rate of 1.5%. We use this same technique to produce a
publicly available catalogue of predictive probability distributions for the
185 million stars up to a G-band magnitude of 17.5 still missing line-of-sight
velocities in Gaia DR3. Validation tests demonstrate that the predictions are
reliable for stars within approximately 7 kpc from the Sun and with distance
precisions better than around 20%. For such stars, the typical prediction
uncertainty is 25-30 km/s. We invite the community to use these radial
velocities in analyses of stellar kinematics and dynamics, and give an example
of such an application. | Aneesh P. Naik, Axel Widmark | 2023-07-25T10:35:28Z | http://arxiv.org/abs/2307.13398v2 | # The missing radial velocities of _Gaia_: a catalogue of Bayesian estimates for DR3
###### Abstract
In an earlier work, we demonstrated the effectiveness of Bayesian neural networks in estimating the missing line-of-sight velocities of _Gaia_ stars, and published an accompanying catalogue of blind predictions for the line-of-sight velocities of stars in _Gaia_ DR3. These were not merely point predictions, but probability distributions reflecting our state of knowledge about each star. Here, we verify that these predictions were highly accurate: the DR3 measurements were statistically consistent with our prediction distributions, with an approximate error rate of 1.5%. We use this same technique to produce a publicly available catalogue of predictive probability distributions for the 185 million stars up to a \(G\)-band magnitude of 17.5 still missing line-of-sight velocities in _Gaia_ DR3. Validation tests demonstrate that the predictions are reliable for stars within approximately 7 kpc from the Sun and with distance precisions better than around 20%. For such stars, the typical prediction uncertainty is 25-30 km/s. We invite the community to use these radial velocities in analyses of stellar kinematics and dynamics, and give an example of such an application.
keywords: Galaxy: kinematics and dynamics - catalogues - techniques: radial velocities - methods: statistical
## 1 Introduction
Our understanding of the Milky Way and its stellar dynamics has made great leaps in recent years, and will continue to do so in the near future. Most significantly, the _Gaia_ mission (Gaia Collaboration et al., 2016) has publicly released astrometric measurements of more than a billion Milky Way stars, yielding unprecedented insights into our Galaxy and its structure, composition and history.
In the _Gaia_ catalogue, the vast majority of objects have proper motions but no measurements for the radial (line-of-sight) velocity \(v_{\rm los}\). The latter measurements are gathered by observing the redshifts of stellar spectra using the Radial Velocity Spectrometer (Cropper et al., 2018), which has a significantly lower limiting magnitude than the astrometer obtaining parallaxes and proper motions. In June 2022, _Gaia_ issued its third data release (DR3; Gaia Collaboration et al., 2022), which increased the number of objects with radial velocities to 33.8 million from the previous 7.2 million in Data Release 2 and Early Data Release 3 (DR2, EDR3; Gaia Collaboration et al., 2018, 2021; see also Katz et al. (2023) for a description of the DR3 radial velocities). While this is a significant increase, it is still only a small fraction of the 1.47 billion objects with parallax and proper motion measurements.
This missing velocity component places strong limitations on what can be learned from _Gaia_ data, as only five of the six phase space coordinates are available for most stars. In these cases, it is often useful to guess or predict \(v_{\rm los}\), preferably while also accounting for the uncertainty of such a prediction. This is useful when studying individual objects in the _Gaia_ catalogue, as well as for larger scale stellar population analysis. Prediction uncertainties can then be accounted for in statistical inference (e.g., Bayesian marginalisation). For example, Widmark et al. (2022) used predicted radial velocities in this manner when studying the phase-space spiral using the _Gaia_ EDR3 proper motion sample.
The six phase space coordinates of _Gaia_ stars are correlated with each other via the distribution function (DF) of the local stellar population. For a given star without a radial velocity measurement, a predictive probability distribution is given by conditioning the distribution function on the star's five remaining coordinates. The goal then is to obtain an approximation or representation of this conditional DF, so that one can generate predictions in this manner across all such stars. Supervised deep learning techniques such as neural networks provide one way to achieve this.
The first effort to use neural networks to predict radial velocities from the other five phase-space coordinates was that of Dropulic et al. (2021), which demonstrated that an artificial neural network can predict \(v_{\rm los}\) and its associated uncertainty for a population of simulated stars drawn from a mock _Gaia_ catalogue (specifically, that of Rybizki et al., 2018). Subsequently, they applied such a network to _Gaia_ EDR3 and demonstrated that the technique is capable of identifying substructures in phase space (Dropulic et al., 2023). The deep learning architecture that we employ in this work differs from that of Dropulic et al. (2021, 2023) in that we use Bayesian neural networks (BNNs; Titterington, 2004; Goan and Fookes, 2020; Jospin et al., 2022). BNNs differ from classical neural networks in that the model parameters (i.e., the 'weights' and 'biases' of the neural network) are not fixed quantities but are instead randomly drawn from probability distributions; the process of training a BNN involves optimising the model parameter probability distributions. As such,
a BNN represents a Bayesian posterior probability over the space of network models, rather than just a best-fit point in model space, thus making BNNs less prone to overfitting. The flexibility of BNNs also allows us to produce full posterior probability distributions for each \(v_{\text{los}}\) prediction; these \(v_{\text{los}}\) posterior distributions are flexible in shape, allowing them to be, for example, non-Gaussian, skewed, and even multi-modal.
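To make the distinction concrete, the fragment below sketches the core idea of a Bayesian linear layer: the layer stores means and (log-)standard deviations rather than point weights, and draws fresh weights on every forward pass via the reparameterisation trick. This is a generic illustration of the principle, not the banyan implementation, and a full variational treatment would also add a prior/KL term to the training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer whose weights are resampled from learned Gaussians on each call."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_logsig = nn.Parameter(torch.full((n_out, n_in), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_logsig = nn.Parameter(torch.full((n_out,), -3.0))

    def forward(self, x):
        # reparameterisation trick: sample weights while keeping gradients w.r.t. mu and sigma
        w = self.w_mu + torch.exp(self.w_logsig) * torch.randn_like(self.w_mu)
        b = self.b_mu + torch.exp(self.b_logsig) * torch.randn_like(self.b_mu)
        return F.linear(x, w, b)

layer = BayesLinear(5, 1)
x = torch.randn(3, 5)
print(layer(x), layer(x))   # two calls give different outputs: the weights are stochastic
```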
In Naik and Widmark (2022, hereafter Paper I), we applied BNNs to the _Gaia_ EDR3 data set, and produced blind predictions for the _Gaia_ DR3 radial velocity data, the latter being released shortly after the publication of our predictions. The first goal of the present article is to provide a comparison of our predictions with the DR3 measurements.
Having thus validated our methodology, we have trained a new BNN model on the DR3 data and used it to generate a novel catalogue of _Gaia_ DR3 radial velocity predictions for 185 million stars up to a \(G\)-band apparent magnitude of 17.5. The second goal of this article is to present this catalogue and describe its construction and validation.
Following the publication of this article, our catalogue will be made publicly available via the _Gaia_ mirror archive hosted by the Leibniz-Institut fur Astrophysik Potsdam (AIP), where it can be queried alongside the main _Gaia_ DR3 catalogue.1 Some guidance for the usage of the catalogue, as well the trained BNN model, data queries, plotting scripts, and source code are all also made publicly available.2
Footnote 1: DOI: 10.17876/gaia/dr.3/116; prior to publication the data will instead be available for bulk download at [http://cuillin.roe.ac.uk/~anaik/MissingRV3CRalogue/](http://cuillin.roe.ac.uk/~anaik/MissingRV3CRalogue/)
This work is structured as follows. In Sec. 2, we compare the blind predictions of Paper I with the radial velocity measurements of _Gaia_ DR3. In Sec. 3, we describe the training of a new BNN model using the DR3 data, focussing particularly on the methodological differences between this work and that of Paper I. We validate the model using a subset of 6D _Gaia_ stars reserved for the purpose in Sec. 4. The main result of this work is the catalogue of radial velocity predictions for the 5D stars in _Gaia_ DR3, which is presented in Sec. 5, before some concluding remarks in Sec. 6.
## 2 Pre-DR3 predictions
In this section, we compare the blind predictions of Paper I for the _Gaia_ DR3 radial velocities with the published values. Our catalogue provided 250 posterior samples each for 16 487 640 stars: all 5D stars in _Gaia_ DR2/EDR3 in the magnitude range \(6<G<14.5\) with accompanying photo-astrometric distance estimates from the StarHorse code (Anders et al., 2022). Of these, we find 14 581 884
Figure 1: Measured radial velocities and predictive posterior distributions generated by the _Gaia_ EDR3-trained BNN for 100 000 _Gaia_ 6D DR3 stars randomly chosen from the 11 million DR3 stars for which the radial velocity uncertainty is less than 5 km/s and which have entries in the blind prediction catalogue accompanying Paper I. Each horizontal coloured band represents the predictive posterior distribution for the line-of-sight velocity of a given star, with darker shading indicating greater probability density. The stars are ordered vertically by their true line-of-sight velocities, which are indicated by the yellow line. By eye, it appears that the posteriors are statistically consistent with the true radial velocities, with performance comparable to that achieved in the tests of Paper I.
(88%) appear in _Gaia_ DR3 with measurements for their radial velocities. We further restrict this set to the 10 953 995 with \(v_{\rm los}\) measurement uncertainties less than 5 km/s. We compare our predictions for the radial velocities of these stars with the actual observations.
Figure 1 shows, for a randomly chosen subset of 100 000 stars, our published prediction distributions and the DR3 measurements. The stars are ordered by their measured \(v_{\rm los}\) (indicated by the yellow line) and vertically stacked. The shaded regions represent the prediction distributions (with darker tones indicating higher frequency density). This figure gives a first indication that our experiment has been successful: the prediction distributions generally track the measured velocities. Indeed, the appearance of this figure is very similar to the analogous figures shown in Paper I comparing the BNN predictions with mock data and the _Gaia_ EDR3 6D test set (Figs. 2 and 5 respectively in that article), suggesting that the performance of the BNN procedure on the EDR3 5D sample is comparable to that achieved in tests on the 6D test sample and on mock data. As discussed in Paper I, there is some evidence of 'regression toward the mean' in the figure: outliers with respect to the intrinsic phase space distribution (naturally pulled towards the top and bottom of the figure) have prediction distributions centred around less extreme velocities.
The success of the predictions is verified more quantitatively in Figure 2, which shows the quantile distribution for all 11 million stars (not only a random subset as in the previous figure), meaning the histogram of the positions \(F\) of the measured radial velocities \(v_{\rm true}\) within their predictive posterior distributions \(p(v)\),
\[F(v_{\rm true}|{\rm model})=\int_{-\infty}^{v_{\rm true}}p(v)dv. \tag{1}\]
For example, \(F=0.5\) indicates that a given radial velocity measurement sits exactly at the median of our prediction distribution, while \(F=0\) (1) indicates that the star sits at the extreme low (high) end of the distribution. For a set of prediction distributions that are statistically consistent with the measurement outcomes, one expects a distribution of \(F\) values close to a uniform distribution, i.e. \(x\)% of observations are in the bottom \(x\)% of the predictive posteriors, \(\forall x\). Indeed, the distribution of \(F\) plotted in Fig. 2 is very close to the uniform distribution (indicated by the dotted line), indicating that our predictions are generally consistent with the measurements.
There are two small caveats to this success. First, the extremes (\(F\approx 0\) and \(F\approx 1\)) are somewhat underpopulated, suggesting a degree of systematic underconfidence in our predictions: a dearth of data lying at the extremes of probability distributions indicates the distributions are wider than strictly necessary. This underconfidence is a shortcoming, as it means that the precision of our predictions is not optimal. Nonetheless, it is a relatively minor issue as these underpopulated wings are rather small. Moreover, we believe this outcome is better than the opposite case of overconfidence, in which high precisions are achieved at the cost of degraded accuracy.
The second issue apparent in the distribution of \(F\) values in Fig. 2 is the small bulge at \(F\approx 0.2\). This suggests that there is a subset of stars for which the radial velocities are overestimated (typically, these are approaching stars for which the posteriors are incorrectly centred on positive values). We find that this effect does not appear to correlate with any other variables: the bulge in the histogram does not vanish upon restricting the dataset to stars with certain magnitudes, sky positions, proper motions, or distances (or the uncertainties on these quantities).
Given this quantile distribution, one can estimate an 'error rate' as
Figure 3: Spatial distribution of the 6D training set. From top to bottom, the three panels respectively show histograms of the stellar density along the Galactocentric \(X/Y/Z\) directions. The position of the Sun in these coordinates is assumed to be (8.122, 0, 0.0208) kpc, indicated by the horizontal dashed lines. The characteristic width of the spatial distribution is around 1 kpc in the directions along the disc plane (\(X,Y\)) and 0.2 kpc in the perpendicular direction (\(Z\)).
Figure 2: Distribution of ‘quantiles’: positions of the published DR3 radial velocities within the EDR3-derived posterior distributions, for all 11 million DR3 stars for which the radial velocity uncertainty is less than 5 km/s and which have entries in the blind prediction catalogue accompanying Paper I. The hatched region indicates the area bounded between the quantile distribution and the uniform distribution (the latter indicated with the yellow dashed line), which is used to calculate an approximate error rate (see Eq. 2 and surrounding discussion). The distribution is close to uniform indicating good consistency between predictions and measurements (error rate of around 1.5%), albeit with some evidence of underconfidence.
half the total area between the quantile histogram \(h\) and the uniform distribution (the hatched region in Fig. 2),
\[E=\frac{1}{2}\int_{0}^{1}|h(x)-1|dx\approx\frac{1}{2}\sum_{i}|h_{i}-1|\Delta x, \tag{2}\]
where the sum is over bins with width \(\Delta x\) within the interval [0, 1], and \(h_{i}\) represents the frequency density of stars with \(F\) falling in bin \(i\). This gives an estimate of the proportion of stars affected by some systematic overestimation or underestimation, with a division by two to account for double-counting. Applying Eq. 2 to the quantile histogram displayed in Fig. 2, we find an error rate of around 1.5%, approximately constant with respect to the number of bins used. Thus, the two systematic effects described above prove to be minor.
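A minimal numpy version of this calibration check is sketched below. Here the quantile \(F\) of Eq. 1 is approximated directly from a star's posterior samples rather than from the smoothed posterior CDF, and the toy arrays simply illustrate that well-calibrated predictions give a near-uniform \(F\) distribution and a small error rate.

```python
import numpy as np

def quantile_positions(v_true, samples):
    """Eq. 1: fraction of posterior samples lying below the measured velocity, per star."""
    return np.mean(samples < v_true[:, None], axis=1)

def error_rate(F, n_bins=50):
    """Eq. 2: half the area between the quantile histogram and the uniform distribution."""
    h, edges = np.histogram(F, bins=n_bins, range=(0.0, 1.0), density=True)
    return 0.5 * np.sum(np.abs(h - 1.0)) * (edges[1] - edges[0])

rng = np.random.default_rng(42)
samples = rng.normal(0.0, 30.0, size=(100_000, 250))  # 250 posterior samples per star
v_true = rng.normal(0.0, 30.0, size=100_000)          # "measurements" from the same model
# small for well-calibrated posteriors (residual value set by histogram noise)
print(error_rate(quantile_positions(v_true, samples)))
```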
In summary, our blind predictions for the radial velocities published in _Gaia_ DR3 were overall highly successful, with an approximate error rate of 1.5%. The main purpose of this exercise of generating blind predictions was to validate our technique and advertise it to the community before using it to create a larger catalogue of predictions for radial velocities still missing from DR3. This latter object and its construction is the focus of the remainder of this article.
## 3 DR3 model
This section describes the construction of a new model trained on the data from _Gaia_ DR3. The subsections describe in turn the 6D training data and pre-processing (Sec. 3.1), the BNN implementation (Sec. 3.2), the training procedure (Sec. 3.3), and the post-training generation of prediction distributions (Sec. 3.4).
### 6D Input Data
There are 33 653 049 6D stars in _Gaia_ DR3, meaning stars with astrometric measurements as well as radial velocity measurements from the _Gaia_ Radial Velocity Spectrometer. We employ these stars to train and test our BNN model.
Whereas in Paper I, we used stellar distance estimates from the StarHorse code (Anders et al., 2022), here we instead sample distance estimates from the 'geometric' posterior described by Bailer-Jones et al. (2021): a generalised gamma distribution prior with parameters depending on sky location, multiplied by a Gaussian likelihood for the _inverse_ distance centred on \(\varpi-\varpi_{\rm zp}\), where \(\varpi\) is the stellar parallax and \(\varpi_{\rm zp}\) is the parallax zero-point. For the latter quantity, we use the magnitude-, colour- and position-dependent values tabulated by Lindegren et al. (2021) where available, and a uniform value of -0.017 mas otherwise. We sample 10 distance estimates per star, to be used in generating random realisations of the dataset as described below.
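The sketch below illustrates the kind of grid-based sampling this involves: a generalised-gamma distance prior multiplied by a Gaussian parallax likelihood, normalised and then sampled. The prior parameters (L, alpha, beta) are placeholders; in practice Bailer-Jones et al. (2021) tabulate sky-position-dependent values, and the zero-point is star-dependent as described above.

```python
import numpy as np

def sample_geometric_distances(parallax, parallax_err, zero_point=-0.017,
                               L=1.35, alpha=1.0, beta=2.0,
                               n_samples=10, r_max=30.0, n_grid=5000):
    """Draw distance samples (kpc) from prior(r) * likelihood(parallax | r) on a grid.
    Parallaxes are in mas, so the model parallax of a star at distance r kpc is 1/r."""
    r = np.linspace(1e-3, r_max, n_grid)
    prior = r**beta * np.exp(-((r / L) ** alpha))            # generalised gamma prior
    chi = (parallax - zero_point - 1.0 / r) / parallax_err   # Gaussian parallax likelihood
    posterior = prior * np.exp(-0.5 * chi**2)
    posterior /= posterior.sum()
    return np.random.default_rng().choice(r, size=n_samples, p=posterior)

print(sample_geometric_distances(parallax=0.8, parallax_err=0.05))
```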
We perform an 80/20 split on the 6D set to give training and test sets with 26 922 439 and 6 730 610 stars respectively. We further apply a series of quality cuts to the training set alone, retaining stars for which:
1. Radial velocity uncertainty \(<8.5\) km/s.
2. Uncertainty in both proper motions \(<0.07\) mas/yr.
3. Fractional uncertainty in distance (estimated as the standard deviation of the 10 distance samples divided by the mean distance sample) \(<5\%\).
4. \(f>0.5\), where \(f\) is the 'astrometric fidelity' (an estimate of the quality of a given astrometric solution) as tabulated by Rybizki et al. (2022).
Following these cuts, 16 243 507 stars remain in the training set. We use these stars to train the model (Sec. 3.3), while the test set remains entirely unseen throughout the training and is subsequently used to validate the trained model (Sec. 4).
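Expressed as a pandas selection, these cuts take roughly the following form. The column names (including the fractional distance error and the astrometric fidelity) are illustrative labels for quantities assembled from _Gaia_ DR3, the distance samples, and the Rybizki et al. (2022) fidelity catalogue, not necessarily the names used in the actual pipeline.

```python
import pandas as pd

def apply_training_cuts(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the four training-set quality cuts described above."""
    mask = (
        (df["radial_velocity_error"] < 8.5)    # km/s
        & (df["pmra_error"] < 0.07)            # mas/yr
        & (df["pmdec_error"] < 0.07)           # mas/yr
        & (df["frac_dist_err"] < 0.05)         # std / mean of the 10 distance samples
        & (df["fidelity"] > 0.5)               # astrometric fidelity
    )
    return df[mask]

toy = pd.DataFrame({
    "radial_velocity_error": [2.0, 12.0],
    "pmra_error": [0.03, 0.02],
    "pmdec_error": [0.04, 0.02],
    "frac_dist_err": [0.02, 0.01],
    "fidelity": [0.9, 0.95],
})
print(apply_training_cuts(toy))   # keeps only the first row
```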
Figure 3 plots the spatial distribution of the training set in three histograms in Galactocentric Cartesian co-ordinates \((X,Y,Z)\), adopting a right-handed system in which the Sun is at \((-8.122,0,0.0208)\) kpc (GRAVITY Collaboration et al., 2018; Bennett and Bovy, 2019), as indicated by the vertical lines in the figure. Within the disc-plane, 50/90/99% of stars in the training set lie within distances \(\sqrt{(X-X_{\odot})^{2}+Y^{2}}=0.9/2.4/3.9\) kpc, while perpendicular to the disc-plane 50/90/99% of stars lie within distances \(|Z-Z_{\odot}|=0.2/0.6/1.4\) kpc. These bounding distances would be larger in the absence of the quality cuts made above, which have restricted the training set to the more precise measurements obtained from nearer stars. The spatial limits of our training set are worth bearing in mind when utilising our catalogued predictions: predictions for stars well beyond the range of the training set will necessarily result from a degree of uncertain extrapolation; we explore this further in Sec. 4.
### BNN
We utilise the same BNN methodology as in Paper I, which has since been made into a publicly available software package: banyan.3 We refer the reader to that article for a more detailed description of this technique and underlying theory. Here, we merely describe some minor changes in the architecture and computation with respect to that of Paper I:
Footnote 3: [https://github.com/aneeshmaik/banyan](https://github.com/aneeshmaik/banyan)
1. Whereas in Paper I we trained and generated predictions from a single network, here we train an _ensemble_ of 16 networks. Pre-training, each member of the ensemble is assigned a different random initialisation of optimisation parameters (recall that with BNNs, these are not the network weights and biases, but the parameters of the probability distributions from which the weights and biases are drawn). Moreover, during training, each ensemble member is given a different random realisation of the dataset (i.e., different shufflings of the data and different draws from the error distributions, cf. Sec. 3.3). Consequently, each member ends its training at a different local extremum of the optimisation objective. Post-training, we generate a set of \(16N\) radial velocity posterior samples for a given star by generating \(N\) samples with each ensemble member (Sec. 3.4). We find that this ensemble procedure leads to visually smoother, less noisy sample distributions than the single network case, although we note no appreciable change in precision or accuracy.
2. Because the entire training set cannot fit in GPU memory at once, the data are necessarily divided into 'batches' which are passed through the network sequentially. On each forward pass through the network, the network parameters (weights and biases) are randomly sampled from their optimised probability distributions. In the implementation of Paper I, this would be done once per batch, so all data in a batch would see the same network parameters on a given forward pass. However, in some test applications we have found that this can lead to symptoms of underfitting, such as spurious correlations between the output distributions of different inputs. That being said, we have found no evidence of this occurring in our published pre-DR3 predictions (Sec. 2). Nonetheless, to avoid this undesirable behaviour in future, we have changed the sampling procedure. Now, the network parameters are sampled separately for each data point,
so each star in a batch sees different network parameters on a given forward pass.
Because the new sampling procedure described above diminishes the danger of underfitting, we have found through extensive testing that we are able to reduce the size of the network architecture without any appreciable decrease in accuracy or precision. We halve the number of hidden layers in the network from 8 to 4, and the number of neurons per layer from 64 to 32. This yields a significant reduction in computational cost, although the total computational cost is still increased with respect to the architecture in Paper I due to the 16-fold increase in ensemble size.
### Training
There are also a few minor changes in the training procedure with respect to that of Paper I. For clarity, we outline the whole procedure here, highlighting the changes.
Training a BNN consists of a series of training 'epochs' in which the data are fed to the network in a series of batches. At the start of each epoch, the entire dataset is randomly shuffled, and each star has its phase space position randomly drawn from an error distribution: sky positions and proper motions are drawn from a 4D Gaussian centred on the measured values with the published errors and correlations filling the 4\(\times\)4 covariance matrix, radial velocities are drawn from a 1D Gaussian centred on the measured value with width equal to the measurement error, and distances are randomly chosen from the 10 samples from the Bailer-Jones et al. (2021) posterior (cf. Sec. 3.1). This distance sampling differs from that described in Paper I, in which we sampled distances randomly from a Gaussian centred on the StarHorse value.
From here, training proceeds largely as in Paper I: the positions are converted to Galactocentric Cartesian coordinates and concatenated with the proper motions to give a length-5 input vector \((X,Y,Z,\mu_{\alpha},\mu_{\delta})\) alongside a length-1 output vector \((v_{\text{los}})\). These inputs / outputs are rescaled by subtracting means of \((-8\text{ kpc},0,0,0,0)/(0)\) and converting into units of (1.5 kpc, 1.5 kpc, 0.6 kpc, 15 mas/yr, 15 mas/yr)/(40 km/s). Note the \(Z\) unit has changed from 0.4 kpc in Paper I, reflecting the larger heights above/below the Galactic disc plane probed by DR3.
The dataset is then split into batches of 400 stars each to be fed to the network sequentially. The batch size has been reduced from 6000 in Paper I as a result of the increased memory footprint of the altered network parameter sampling procedure described in Sec. 3.2. At each batch, the network generates 256 posterior samples for each star's radial velocity (cf. 250 in Paper I). These samples are then smoothed with a logistic kernel into probability distributions, which are then scored against the measured values of \(v_{\text{los}}\) using the optimisation objective described in Paper I (Eq. 4 in that article), and the network parameters are updated via a gradient descent step (more specifically, we use the stochastic gradient descent algorithm adam; Kingma & Ba 2015). We use an initial learning rate of 0.0002. This is significantly reduced from the 0.01 used in Paper I, compensating for the reduction in batch size (fast learning with small batches can lead to unstable gradient descent). The learning rate is reduced by a factor of 2 once 10 training epochs pass without an appreciable change in the optimisation objective, and the training is ended when the learning rate crosses a threshold value of \(5\times 10^{-6}\) (cf. \(10^{-5}\) in Paper I).
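The learning-rate schedule quoted above corresponds to a standard reduce-on-plateau scheme; a minimal PyTorch sketch with the same settings is given below. The training function is a placeholder, and banyan's actual loop may organise this differently.

```python
import torch

def train_one_epoch() -> float:
    """Placeholder for one pass over the shuffled, error-resampled training data."""
    return 1.0   # would return the epoch's mean optimisation objective

params = [torch.nn.Parameter(torch.zeros(10))]
optimiser = torch.optim.Adam(params, lr=2e-4)                        # initial learning rate
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimiser, factor=0.5, patience=10)

while optimiser.param_groups[0]["lr"] >= 5e-6:                       # stop below the threshold
    epoch_loss = train_one_epoch()
    scheduler.step(epoch_loss)   # halve the learning rate after 10 epochs without improvement
```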
The procedure above describes the training of a single network. As noted in Sec. 3.2, we now train an ensemble of 16 networks rather than a single network. This procedure is therefore repeated for each member of the ensemble, with different random initialisations of network parameters and different random data shuffling and sampling.
### Generating Predictions
After undertaking the training described in the previous subsection, the final step is to use the trained model to generate \(v_{\text{los}}\) predictions for the 6D test set (Sec. 4) and the 5D set (Sec. 5). The procedure in both cases is identical: for each star in the set, 64 posterior samples are generated by each of the 16 trained networks in the ensemble, yielding 1024 posterior samples per star. As in the training stage, each network in the ensemble receives a different error-sampling of the data as an input. These 1024 samples are then converted into a smooth, continuous probability distribution via a logistic (sech\({}^{2}\)) smoothing kernel as described in Paper I (Eq. 2 in that article). For each star, this probability distribution is the posterior predictive distribution for \(v_{\text{los}}\).
To obtain simpler representations of these probability distributions, we fit them with four-component 1D Gaussian mixtures, so that the posterior for \(v_{\text{los}}\) of a given star is represented as
\[p(v_{\text{los}})=\sum_{i=1}^{4}w_{i}\,\frac{\exp\left(-\frac{(v_{\text{los}}- \mu_{i})^{2}}{2\sigma_{i}^{2}}\right)}{\sqrt{2\pi\sigma_{i}^{2}}}, \tag{3}\]
which comprises 12 parameters: the weights \(w_{1,\ldots,4}\), the means \(\mu_{1,\ldots,4}\), and the variances \(\sigma_{1,\ldots,4}^{2}\) of the Gaussians. Only 11 of these parameters are independent: \(\sum_{i}w_{i}=1\), so setting three weights fixes the fourth. Nonetheless, our published catalogue (Sec. 5) provides all 12 parameters for simplicity. We find that four Gaussian components are sufficient to give a good fit to the posteriors in all cases, as verified by 1-sample Kolmogorov-Smirnov tests between the BNN posterior samples and the Gaussian mixture CDF.
As well as facilitating digital storage of the posteriors in a lightweight form, this Gaussian mixture approach is additionally beneficial in that it is subsequently easy to manipulate the distributions, e.g., calculate summary statistics from them, generate further samples from them, and marginalise over them.
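For example, given the 12 mixture parameters stored for a star, its posterior can be evaluated, resampled, and summarised with a few lines of numpy, as in the sketch below. The parameter values shown are invented; w, mu, and var stand for the stored weights, means, and variances of the four components.

```python
import numpy as np

def gmm_pdf(v, w, mu, var):
    """Evaluate the four-component Gaussian mixture of Eq. 3 at velocities v (km/s)."""
    v = np.atleast_1d(v)[:, None]
    return np.sum(w * np.exp(-0.5 * (v - mu) ** 2 / var) / np.sqrt(2 * np.pi * var), axis=1)

def gmm_sample(w, mu, var, n=100_000, rng=None):
    """Draw n radial-velocity samples from the mixture."""
    rng = rng or np.random.default_rng()
    comp = rng.choice(len(w), size=n, p=w / np.sum(w))
    return rng.normal(mu[comp], np.sqrt(var[comp]))

# invented values standing in for one catalogue row
w = np.array([0.4, 0.3, 0.2, 0.1])
mu = np.array([-10.0, 5.0, 20.0, 60.0])       # km/s
var = np.array([400.0, 250.0, 300.0, 900.0])  # (km/s)^2

print(gmm_pdf(0.0, w, mu, var))               # posterior density at v_los = 0 km/s
samples = gmm_sample(w, mu, var)
lo, med, hi = np.percentile(samples, [15.9, 50.0, 84.1])
print(f"median = {med:.1f} km/s, uncertainty = {0.5 * (hi - lo):.1f} km/s")
```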
## 4 Model Validation
In this section, we test and validate the network predictions on the DR3 test set, i.e., the set of 6 730 610 6D stars in _Gaia_ DR3 not used for network training.
Figure 4 shows various summary comparisons between the BNN predictions and the measured radial velocities of the 6D test set. The left-most panel is the analogue of Fig. 2, showing the quantile distribution of the true \(v_{\text{los}}\) measurements within the network's prediction distributions. The distribution is very close to uniform: the error rate calculated with Eq. 2 is 1.4%, similar to the error rate computed in Sec. 2 for the pre-DR3 predictions. Nonetheless, there is a slight trend visible at the low and high ends, where the histogram value drops to lower values, giving rise to a box shape with rounded corners. This is similar to the feature seen in Fig. 2, and we offer the same interpretation (cf. Sec. 2): the far tails of the BNN-generated predictive posterior distributions are, on average, somewhat heavier than necessary; in other words, the network is somewhat overly cautious in terms of the extreme \(v_{\text{los}}\) outliers. While this trend is not ideal, we nonetheless consider it a sign of healthy caution: it is better to produce somewhat under-confident predictions than over-confident
ones. The success of the predictions is confirmed by the middle panel, which shows histograms of the true \(v_{\rm los}\) values and randomly drawn prediction posterior realisations; the two distributions overlap almost exactly.
In the rightmost panel of Fig. 4, we plot a histogram of prediction uncertainties (posterior widths). For a given predictive posterior distribution, the uncertainty is defined as half the difference between the 84.1\({}^{\rm th}\) and 15.9\({}^{\rm th}\) percentiles. The median prediction uncertainty is 29.0 km/s, although this is a distribution with a long tail to high values (see discussion below) such that the mode is around 24 km/s and there is even a sizable fraction of stars with prediction uncertainties smaller than 20 km/s. These uncertainties are significantly smaller than the variation in measured values (the standard deviation of the complete _Gaia_ DR3 test set \(v_{\rm los}\) distribution is 43.6 km/s). This quantifies the improvement in predictivity afforded by our technique over more basic techniques such as drawing samples from a Gaussian distribution with width equal to the variation in measured values.
It is worth reiterating that the quality cuts used in generating the training set were not applied to the test set (cf. Sec. 3.1). As a consequence, there are stars in the test set at larger distances and with larger measurement uncertainties than those probed by the training set, allowing us to test the extrapolation and error propagation of our technique respectively, by ensuring that the prediction uncertainties for such stars are inflated correspondingly. We begin to see a hint of this behaviour in the histogram of prediction uncertainties in rightmost panel of Fig. 4, where there are some stars for which the prediction uncertainty is very high; even higher than the variation in measured values (43.6 km/s).
We investigate these effects more directly in Figure 5, which plots the prediction uncertainty against distance and fractional distance error. In each case, the stars of the test set were divided into 16 bins of the relevant variable, and within each bin we calculated the median and the standard deviation of the posterior widths, depicted in the figure as points and error bars respectively. For each star's distance, we used the mean of the 10 distance samples, while the absolute distance error is given by the standard deviation of the 10 samples (so that the fractional distance error plotted is the ratio of the mean to the standard deviation). As a function of distance (left panel), the \(v_{\rm los}\) uncertainty slowly increases for a few kpc before rising quite quickly beyond roughly 3 kpc to \(\sim\)100 km/s levels. This change at 3 kpc approximately coincides with the spatial extent of the training set, and demonstrates that the trained model is capable of detecting input data beyond the range of its training data and exercising commensurate caution. A similar trend is seen in the behaviour of the \(v_{\rm los}\) uncertainty as a function of fractional distance error (right panel): the posterior widths rise slowly to around 0.1, beyond which they rapidly grow quite large. Uncertainties on the input data are
Figure 4: Various summaries of the BNN predictions for the radial velocities of the DR3 6D test set. _Left:_ Quantile distribution of measurements within the BNN prediction distributions (analogue of Fig. 2). The yellow dashed line indicates a perfect uniform distribution. _Middle:_ histograms of the radial velocities using the measured values (yellow) and a single posterior draw for each star (teal). The two distributions overlap almost exactly. _Right:_ Distribution of prediction uncertainties. These are calculated as half the difference between the 84.1\({}^{\rm th}\) and 15.9\({}^{\rm th}\) percentile values in the predictive posterior distribution. The vertical dashed line indicates the median uncertainty, and the vertical yellow line indicates the variation in measured \(v_{\rm los}\) values. The left and middle panels demonstrate the good agreement between the predictions and the measurements, while the right panel demonstrates the predictive power of our technique.
Figure 5: Prediction uncertainties (posterior widths) as a function of distance (_Left_) and fractional distance error (_right_). Points and errorbars respectively represent the median and standard deviation of the prediction uncertainties in each bin of the relevant variable. The vertical dashed line in the left-hand panel indicates the heliocentric distance containing 95% of the training set. This figure demonstrates that the BNN model is able to exercise caution when extrapolating beyond the bounds of its training data, and that it is able to successfully propagate uncertainties on its input data.
successfully propagated to uncertainties on the predictions. There is a correlation between distance and distance error in the _Gaia_ stars, but we have verified that the two trends observed in Fig. 5 hold separately: posterior width increases with distance at fixed distance precision and with fractional distance error at fixed distance, although we have not tested for formal independence.
In Figure 6, we show how the error rate (Eq. 2) for the 6D test set varies across the Galactic disc and across the sky. In the disc, we split the test set into square bins of width 400 pc in \(X\) and \(Y\), discarding any bins with fewer than 50 stars. Within each bin, we construct a quantile histogram as in the left panel of Fig. 4 and from it calculate an error rate via Eq. 2. Each histogram is calculated in just 10 bins in \(F\) to avoid low-number statistics which might artificially inflate the error rate. These error rates across the disc plane are plotted in the left panel of Fig. 6 (more detail is given in Appendix A, which explicitly shows the quantile histograms in different disc locations, rather than just these error rate summaries). There is a circle around the Sun with a radius of approximately 7 kpc within which the error rate is quite low: around a few percent. Recalling that most of the training set stars were situated within 3-4 kpc (a black dotted line enclosing 99% of the training set is shown in the figure), this again suggests that a degree of extrapolation by a few kpc is 'safe'. However, at the edges of this circle and beyond, the error rate starts to grow. This can be ascribed in part to small number statistics at these large distances: small bin counts distort the quantile histograms, leading to inflated error rates. However, this is not the case everywhere. For example, the bin counts are relatively high in the regions around \((X,Y)\approx(0,\pm 4)\) where the error rate is also very high. There is also such a feature at \((X,Y)\approx(-6,-10)\), which corresponds to the Large Magellanic Cloud (LMC). Although the LMC is much farther away, the very large distance errors for its constituent stars in _Gaia_ DR3 mean that our distance sampling procedure can 'scatter' these stars into regions much closer to the Sun. These regions with large error rates vanish entirely when restricting to only the stars in the test set with small relative distance uncertainties (e.g., less than 10%), suggesting that imprecise distance estimates are part of the issue in addition to excessive extrapolation.
The right-hand panel of Fig. 6 provides a similar plot, now showing binned error rates as a function of \(l\) and \(b\), dividing the sky into 3072 equal-area pixels. For almost the whole sky, the error rate is low (i.e. the quantile histogram is close to uniform), with median and mean values of 4.0% and 4.3%. Note that these values are larger than the overall test set error rate of 1.4%. This discrepancy is primarily due to the low-number noise induced by partitioning the stars into sky pixels, exacerbated by the choice of equal-area bins rather than equal-population bins: the majority of bins lie in sparsely populated regions away from the Galactic disc plane. These issues notwithstanding, the error rate is quite low across the sky, with two key exceptions: the Large and Small Magellanic Clouds. For the reasons discussed in the previous paragraph, the error rates for the stars in these directions are greatly increased. As in the case of the left-hand panel of Fig. 6, these regions vanish when restricting to stars with good distance precision.
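A sky map of this kind can be assembled with healpy along the lines of the following sketch; the star positions and quantiles here are randomly generated placeholders rather than the test-set values.

```python
import numpy as np
import healpy as hp

nside = 16                                   # healpix order 4: 12 * nside**2 = 3072 pixels
npix = hp.nside2npix(nside)

rng = np.random.default_rng(0)
l = rng.uniform(0.0, 360.0, 200_000)                         # Galactic longitude (deg)
b = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 200_000)))   # Galactic latitude (deg)
F = rng.uniform(0.0, 1.0, 200_000)                           # per-star quantiles (Eq. 1)

pix = hp.ang2pix(nside, l, b, lonlat=True)
error_map = np.full(npix, np.nan)
for p in np.unique(pix):
    h, _ = np.histogram(F[pix == p], bins=10, range=(0.0, 1.0), density=True)
    error_map[p] = 0.5 * np.sum(np.abs(h - 1.0)) * 0.1       # Eq. 2 with 10 quantile bins
```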
In Fig. 7, we show the \((U,V)\)-velocity field of the DR3 6D test set
Figure 6: _Left_: Error rates (Eq. 2) calculated for the 6D test set in 2D pixels (width 400 pc) in the Galactic plane. Pixels with fewer than 50 stars are discarded, and the error rate is calculating using 10 quantile bins in each pixel. The cross indicates the position of the Sun, the gridlines give lines of equal Galactocentric radius and azimuth with spacings of 2 kpc and 22.5\({}^{\circ}\) respectively, and the thick dotted line indicates the (approximately circular) region enclosing 99% of the training set. _Right_: Error rates calculated in pixels across the sky. The sky is divided into a Healpix grid of order 4, with 3072 sky pixels and a resolution of 3.66\({}^{\circ}\). The two panels demonstrate that within around 7 kpc and across most of the sky, the error rates are quite low, with problematic regions emerging only at larger distances.
stars in a very local spatial volume (within 250 pc), where \(U\) and \(V\) are respectively the velocities in the \(X\) and \(Y\) directions in the solar rest frame. We compare the \((U,V)\)-velocity field when using the actual _Gaia_\(v_{\rm los}\) measurements and randomly drawn samples from the BNN-generated posteriors (taking one sample for each star). This figure is included to illustrate the network's merits and limitations in terms of resolving velocity sub-structures. As shown in the top panel, this very local volume is well resolved in the 6D data and exhibits a complex structure of moving groups, streams, and clusters (cf. Gaia Collaboration et al., 2018, Fig. 22 in that article). The bottom panel shows the same velocity field, but using a random prediction posterior sample for each star. Comparing the two panels, we see that the small scale structure is washed out, but otherwise the general shape of the 2D velocity distribution is retained. We highlight that this very nearby spatial region constitutes a miniscule part of the training set and the _Gaia_ data in general, and that velocity sub-structures are typically not this highly resolved. Hence, resolving such sub-structures is not what the network model is specialised to do. Still, the predicted \(v_{\rm los}\) values give a robust account of the general structure of the velocity distribution, including higher order statistics like the skew and kurtosis. Moreover, using the _Gaia_ 5D sample with our \(v_{\rm los}\) predictions can still be very useful in terms of discovering and resolving phase-space sub-structures in cases where the proper motions are the more informative quantities. We demonstrate such an application in the following section.
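The in-plane velocities plotted here can be reconstructed from astrometry plus a posterior draw using astropy, roughly as sketched below for a single hypothetical star. The Galactocentric frame parameters are astropy defaults and may differ slightly from the values adopted in this work.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric

# one hypothetical star: 5D astrometry plus a radial velocity drawn from its posterior
star = SkyCoord(ra=120.0 * u.deg, dec=-30.0 * u.deg, distance=0.2 * u.kpc,
                pm_ra_cosdec=15.0 * u.mas / u.yr, pm_dec=-8.0 * u.mas / u.yr,
                radial_velocity=12.0 * u.km / u.s)

frame = Galactocentric()                 # default solar position and velocity
gc = star.transform_to(frame)
v_sun = frame.galcen_v_sun               # solar velocity in the Galactocentric frame

# velocities parallel to the disc plane in the solar rest frame
U = (gc.velocity.d_x - v_sun.d_x).to(u.km / u.s)
V = (gc.velocity.d_y - v_sun.d_y).to(u.km / u.s)
print(U, V)
```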
## 5 DR3 Radial Velocity Catalogue
This section describes the main outcome of this work: a BNN-generated catalogue of predictive posterior distributions for the radial velocities of a large number of stars in _Gaia_ for which direct measurements are not available. The catalogue is made available via the _Gaia_ mirror archive gaia.aip.de hosted by the Leibniz-Institut fur Astrophysik Potsdam (AIP), where it can be queried alongside the main _Gaia_ DR3 catalogue (cf. Footnote 1). The two catalogues can be joined using the source_id field.
Included in this catalogue are all _Gaia_ stars without radial velocities but with 5D astrometry and \(G\)-band magnitude measurements \(G\), up to \(G=17.5\). In total, this is 184 905 491 stars. We imposed this magnitude cut in order to avoid extrapolating too far beyond the training data (the vast majority of the 6D stars in _Gaia_ DR3 have \(G<16\)) and also to avoid straying beyond the reach of our external validation catalogues (see discussion below). Ultimately there is a degree of arbitrariness in this choice of magnitude cut: the phase space distribution function of the _Gaia_ stars has a strong dependence on spatial position and only a weak dependence on absolute magnitude (indeed, we found no improvement in predictivity when including magnitude as a network input), so there will be dim, nearby stars excluded from our catalogue for which the BNN could nonetheless have produced accurate guesses, and bright faraway stars included in the catalogue for which the posteriors are less accurate.
As described in Sec. 3.4, we refit the BNN-generated posteriors with four-component Gaussian mixtures to allow a lightweight digital representation and also more easily facilitate 'downstream' statistics (e.g., sampling, summary statistics, marginalisation). The final catalogue thus has 20 columns, as enumerated in Table 1. The first of these is the _Gaia_ DR3 source_id for each star, an integer uniquely identifying each DR3 star. The next 12 columns are the weights, means and variances of the four Gaussian components (cf. Eq. 3). The remaining columns provide some summary statistics. Columns 14-18 give the 5\({}^{\rm th}\), 15.9\({}^{\rm th}\), 50\({}^{\rm th}\) (median), 84.1\({}^{\rm th}\), and 95\({}^{\rm th}\) percentiles
\begin{table}
\begin{tabular}{c c c} \hline Column name & Units & Description \\ \hline source\_id & - & _Gaia_ DR3 source\_id \\ w\_{0,1,2,3\} & - & Weights of 4 Gaussian components \\ mu\_{0,1,2,3\} & km/s & Means of 4 Gaussian components \\ var\_{0,1,2,3\} & (km/s)\({}^{2}\) & Variances of 4 Gaussian components \\ q\_050 & km/s & 5\({}^{\rm th}\) percentile \\ q\_159 & km/s & 15.9\({}^{\rm th}\) percentile \\ q\_500 & km/s & 50\({}^{\rm th}\) percentile \\ q\_841 & km/s & 84.1\({}^{\rm th}\) percentile \\ q\_950 & km/s & 95\({}^{\rm th}\) percentile \\ sample\_mean & km/s & Posterior mean \\ sample\_std & km/s & Posterior standard deviation \\ \hline \end{tabular}
\end{table}
Table 1: Columns of the BNN-generated \(v_{\rm los}\) prediction catalogue for _Gaia_ DR3. The data in all columns are 32-bit floats except the source_id column, which comprises 64-bit integers. A more detailed description of the various columns is given in the main text (Sec. 5).
Figure 7: Histogram of velocities parallel to the Galactic plane (\(U\) and \(V\)) for _Gaia_ 6D test set stars within a distance of 250 pc. The top panel shows the velocities when using the _Gaia_ RVS \(v_{\rm los}\) measurements, while the bottom panel shows the velocities using predicted \(v_{\rm los}\) values, drawing a single posterior sample for each star. The complex velocity substructure visible in the top panel is not resolved by our velocity predictions, although the overall shape of the distribution is recovered well.
\(v_{\rm los}\) values computed from each posterior, and columns 19-20 are the overall posterior mean and standard deviation.
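To illustrate how these columns can be used, the following sketch (not an official tool shipped with the catalogue) rebuilds the four-component mixture for a single star, evaluates its density, and draws Monte Carlo samples of \(v_{\rm los}\); the numerical values are invented for the example.

```python
# Reconstruct one star's posterior from the mixture columns (w_*, mu_*, var_*),
# evaluate its density and draw samples. Values below are made up for illustration.
import numpy as np

w = np.array([0.5, 0.3, 0.15, 0.05])           # w_0 ... w_3 (sum to 1)
mu = np.array([-20.0, 10.0, 45.0, -80.0])      # mu_0 ... mu_3  [km/s]
var = np.array([400.0, 900.0, 250.0, 2500.0])  # var_0 ... var_3 [(km/s)^2]

def posterior_pdf(v):
    """Mixture density p(v_los) evaluated at velocities v [km/s]."""
    v = np.atleast_1d(v)[:, None]
    return np.sum(w * np.exp(-0.5 * (v - mu) ** 2 / var) / np.sqrt(2 * np.pi * var), axis=1)

def sample_posterior(n, rng=np.random.default_rng(0)):
    """Draw n Monte Carlo samples of v_los from the mixture."""
    comp = rng.choice(4, size=n, p=w)
    return rng.normal(mu[comp], np.sqrt(var[comp]))

samples = sample_posterior(10_000)
print(np.percentile(samples, [5, 50, 95]))      # compare with q_050, q_500, q_950
```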
While the vast majority of stars appearing in this catalogue have never had their radial velocities measured by any instrument, a small subset do have available measurements from other ground-based surveys, allowing us to perform some further validation tests. These are described in Appendix B.
The remainder of this section describes some preliminary explorations of the catalogue. Figure 8 plots the median \(v_{\rm los}\), computed in 50 pc bins in the Galactic disc plane then smoothed with a Gaussian filter of width 150 pc, for the DR3 6D test set (upper panel) and for
Figure 8: Map of median \(v_{\rm los}\) across the Galactic disc for stars in the DR3 6D test set (upper panel) and the 5D stars appearing in our BNN-generated catalogue, drawing one posterior sample per star (lower panel). The maps are computed in 50 pc bins in the Galactic disc plane then smoothed with a Gaussian filter of width 150 pc. Some quality cuts are made to the data as described in the main text (Sec. 5). In each panel, the cross indicates the position of the Sun, the gridlines give lines of equal Galactocentric radius and azimuth with spacings of 2 kpc and 22.5\({}^{\circ}\) respectively, and the thick dotted line indicates the (approximately circular) region enclosing 99% of the training set. Our catalogue enables kinematical analysis with many more stars and to a greater spatial extent than the _Gaia_ RVS sample, although excessive extrapolation should be avoided.
Figure 9: Maps of vertical velocity \(W\) across the Galactic disc using 5D stars appearing in our BNN-generated catalogue, drawing one posterior sample per star. In both panels, the sample is limited to \(|Z|<600\) pc and fractional distance uncertainty better than 20%. The disc is divided into square pixels with side-length 500 pc and pixels with fewer than 50 stars are masked. For better visibility, each map is then convolved with a 2D Gaussian filter, width 300 pc. The cross indicates the position of the Sun, while the gridlines give lines of equal Galactocentric radius and azimuth, with spacings of 2 kpc and 22.5\({}^{\circ}\) respectively. _Top panel:_ Mean Galactocentric vertical velocity \(\overline{W}+W_{\odot}\) (assuming \(W_{\odot}=7.78\) km/s). _Bottom panel:_ Mean vertical velocity difference \(\overline{W}_{N}-\overline{W}_{S}\) between northern and southern Galactic hemispheres. These maps suggest the presence of large-scale bending and breathing disequilibrium modes in the Galactic disc. This serves as an example of a scientific application of our catalogue.
the 5D stars in our catalogue with a single \(v_{\rm los}\) sampled from the posterior of each star (lower panel). A couple of quality cuts have been made to produce this figure: for both samples, we included only stars with fractional distance uncertainty less than 0.25; for the 5D sample alone, we included only stars with posterior widths less than 80 km/s. This leaves 6 111 408 stars in the 6D set and 133 438 798 stars in the 5D set. This figure, which is the analogue of Fig. 8 in Paper I, illustrates that the BNN-predicted velocities reproduce the bulk velocity structure of the Galaxy very well, and our catalogue thus enables kinematical analyses with many more stars and to greater distances than possible with the _Gaia_ RVS sample alone. That being said, excessive extrapolation must be avoided. When we relax the quality cuts and include stars with greater distance errors (and by extension, stars at greater distances) we find that at distances \(\gtrsim 7\) kpc, the network-predicted Galactic rotation does not quite align with the observed Galactic rotation. This reconfirms our finding with the validation set (Sec. 4; in particular Fig. 6) that extrapolation starts to become somewhat unsafe beyond approximately 7 kpc.
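For readers wishing to reproduce this kind of map from the catalogue, a minimal sketch of the binning and smoothing procedure is given below; the input arrays are placeholders, and the bin width and filter width simply follow the values quoted above.

```python
# Binned-median map of v_los in the disc plane, smoothed with a Gaussian filter.
# X, Y, v_los stand in for the catalogue quantities after the quality cuts.
import numpy as np
from scipy.stats import binned_statistic_2d
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
X, Y = rng.uniform(-16000, 0, 100_000), rng.uniform(-8000, 8000, 100_000)  # pc
v_los = rng.normal(0.0, 50.0, 100_000)                                      # km/s

bin_size, smooth = 50.0, 150.0                                              # pc
xedges = np.arange(X.min(), X.max() + bin_size, bin_size)
yedges = np.arange(Y.min(), Y.max() + bin_size, bin_size)
stat, _, _, _ = binned_statistic_2d(X, Y, v_los, statistic="median",
                                    bins=[xedges, yedges])
smoothed = gaussian_filter(np.nan_to_num(stat), sigma=smooth / bin_size)
```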
Figure 9 provides a second demonstration of the catalogue, mapping the vertical (i.e., perpendicular to the Galactic disc) Galactocentric velocity \(W+W_{\odot}\) in the Galactic plane, including only stars with \(|Z|<600\) pc and fractional distance uncertainty less than 20%. The assumed vertical velocity of the Sun is 7.78 km/s (Drimmel & Poggio, 2018). In the most distant regions of this plot, the \(v_{\rm los}\) predictions are extrapolated beyond the training sample, and so the prediction uncertainties are correspondingly large (cf. Fig. 5). However, the vertical velocity field at such distances is mainly informed by the latitudinal proper motion, and so the contribution of \(v_{\rm los}\) to the total error budget is subdominant. The upper panel plots the mean \(\overline{W}+W_{\odot}\) in 2D bins in \(X\) and \(Y\). Here, there is evidence of vertical bulk motions in the disc. The positive slope of \(\overline{W}+W_{\odot}\) with Galactocentric radius suggests a large-scale bending mode and is in agreement with the results of various recent studies mapping the vertical velocity field (Widmark et al., 2022; Nelson & Widrow, 2022; Wang et al., 2023).
The lower panel of Fig. 9 plots the difference in mean vertical velocity above and below the disc mid-plane (\(\overline{W}_{N}-\overline{W}_{S}\)). Here, there is evidence of a significant breathing mode. Most notably, in the region at roughly \((X,Y)=(-13,-2)\) kpc, the disc is undergoing a contraction (i.e. the north and south have a mean velocity towards each other). This structure could be influenced by systematic errors, in either distance or predicted radial velocity. However, it seems improbable that such errors would be able to fully account for the observed bending mode and breathing mode. Furthermore, we begin to see the same structure, albeit with the spatial extent reduced, in both the 6D test set and when making a stronger 10% distance precision cut.
## 6 Conclusions
In our previous work, Paper I, we proposed using Bayesian neural networks to generate estimates for the line-of-sight velocities of _Gaia_ stars for which direct measurements are unavailable. The inputs of the model are the five available dimensions (three positions, two proper motions). The outputs are not merely point predictions for the line-of-sight velocities, but full probability distributions which we can understand in a Bayesian sense as predictive posterior distributions. This is the advantage of the Bayesian neural network approach.
The aims of the present work are two-fold. The first is to follow up on Paper I, in which we employed a Bayesian neural network technique on the eve of the third _Gaia_ data release (DR3) to generate a set of blind predictions for the radial velocities that would appear in that data release. Here, we compared the predictions with the published measurements, finding very good agreement and an approximate error rate of 1.5%.
The second purpose of this work is to train a new model with the DR3 measurements, and generate a catalogue of prediction distributions for the radial velocities still missing in DR3. To that end, we used the 6D sample from DR3, comprising around 34 million stars. We split this into a training set and a test set, and applied a series of quality cuts to the training set alone. The training set, finally comprising 16 243 507 stars, was then used to optimise the parameters of the model, while the test set was used to validate the model post-training. This validation was successful, yielding an overall error rate of around 1.4%. There appeared to be a mild degree of underconfidence in the predictions, i.e., the posteriors are wider than necessary, so that a disproportionate number of measurements were sitting near the centres of their prediction distributions. While this behaviour is suboptimal in the sense that we are not achieving the maximum precision attainable, it is still a better outcome than overconfidence. Because of the applied quality cuts, the bulk of the stars of the training set were situated within 3-4 kpc of the Sun. However, we found that the model was capable of a degree of extrapolation, generating accurate posteriors for stars out to around 7 kpc. In general, it achieves this by cautiously inflating the posterior width for stars beyond the range of its training data. However, this behaviour begins to break down beyond this distance, and the predictions grow increasingly less accurate.
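The underconfidence described above can be quantified with a simple calibration test: for each test star, compute the quantile of the measured \(v_{\rm los}\) within its predicted posterior and compare the distribution of these quantiles with a uniform distribution. The sketch below illustrates the idea with a toy two-component mixture; it is related to, but simpler than, the quantile-binning used for Fig. 6.

```python
# Calibration sketch: quantile of each measurement within its predicted posterior.
# For perfectly calibrated posteriors these quantiles are uniform on [0, 1];
# underconfidence piles them up near 0.5. Inputs here are toy values.
import numpy as np
from scipy.stats import norm

def mixture_cdf(v, w, mu, var):
    """CDF of a Gaussian mixture evaluated at velocity v (scalar or array)."""
    v = np.atleast_1d(v)[:, None]
    return np.sum(w * norm.cdf((v - mu) / np.sqrt(var)), axis=1)

# One star's (toy) posterior and its measured radial velocity.
w, mu, var = np.array([0.7, 0.3]), np.array([-10.0, 30.0]), np.array([400.0, 900.0])
q = mixture_cdf(-5.0, w, mu, var)   # quantile of the measurement within the posterior
# Collect q over all test stars and compare its histogram with a uniform distribution.
```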
Having trained and validated the model, we applied it to all _Gaia_ stars without radial velocities but with 5D astrometry and \(G\)-band magnitude measurements \(G\), up to a limit \(G=17.5\). This is 184 905 491 stars in total. For each star, we refitted its BNN-generated posterior distribution with a four-component Gaussian mixture model to give a lightweight representation. The fitted parameters of these Gaussian components, alongside the _Gaia_ source_id of each star and some summary statistics, then form our final catalogue (see Table 1), which we have made publicly available.
After constructing this catalogue and validating it against some measurements from ground-based surveys (Appendix B), we performed some preliminary explorations in order to provide some illustrations of its capabilities. We plotted the median \(v_{\rm los}\) in the Galactic disc plane, revealing the rotation structure of the disc, out to larger distances than possible with the _Gaia_ 6D data alone. We also plotted the mean vertical (perpendicular to the disc) velocity \(\overline{W}\) and north-south asymmetry in \(\overline{W}\) to large distances in the disc plane, revealing vertical bending and breathing mode disturbances in the disc at large Galactocentric radius (12-14 kpc).
It is worth noting that for a given star, the prediction distribution is typically quite broad (the typical width is 25-30 km/s), as it is a convolution of the intrinsic width of the conditional distribution function at that point in phase space with the epistemic uncertainty arising from the lack of training data. As such, when \(v_{\rm los}\) is the primary object of interest in a given analysis, these predictions are no real substitute for actual measurements. Instead, we anticipate the primary utility of our catalogue being in areas where the interesting dynamics are governed primarily by proper motions, e.g. vertical motions in distant regions of the disc. In such analyses, valuable information is lost when stars with missing \(v_{\rm los}\) measurements are discarded. With our catalogue, one can retain these stars in the analysis and marginalise over the missing sixth dimension.
## Acknowledgements
We are grateful to Bokyoung Kim for assistance with the DESI data, and the _Gaia_ mirror archive at the Leibniz-Institut für Astrophysik Potsdam (AIP) for hosting our catalogue. APN is supported by an Early Career Fellowship from the Leverhulme Trust. AW acknowledges support from the Carlsberg Foundation via a Semper Ardens grant (CF15-0384). This work used the Cirrus UK National Tier-2 HPC Service at EPCC ([http://www.cirrus.ac.uk](http://www.cirrus.ac.uk)) funded by the University of Edinburgh and EPSRC (EP/P020267/1). This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
## Data Availability
All data used in this work are publicly available. The trained BNN model, data queries, plotting scripts, and source code can be accessed at [https://github.com/aneeshnaik/MissingRVSDR3](https://github.com/aneeshnaik/MissingRVSDR3). Following the publication of this article, our catalogue of radial velocity predictions will be made available via the _Gaia_ mirror archive hosted by the Leibniz-Institut fur Astrophysik Potsdam (AIP); DOI: 10.17876/gaia/dr.3/110. Prior to publication, the catalogue will instead be available for bulk download at [http://cuillin.roe.ac.uk/~anaik/MissingRVSDR3Catalogue/](http://cuillin.roe.ac.uk/~anaik/MissingRVSDR3Catalogue/). The _Gaia_ data used to train our model are available at the ESA _Gaia_ archive ([https://gea.esac.esa.int](https://gea.esac.esa.int)) as well as various mirror archives. The LAMOST and DESI data used to validate our catalogue are available for download at [http://www.lamost.org/dr8/v2.0/](http://www.lamost.org/dr8/v2.0/) and [https://data.desi.lbl.gov](https://data.desi.lbl.gov) respectively.
|
2310.01353 | Scaling Up Music Information Retrieval Training with Semi-Supervised
Learning | In the era of data-driven Music Information Retrieval (MIR), the scarcity of
labeled data has been one of the major concerns to the success of an MIR task.
In this work, we leverage the semi-supervised teacher-student training approach
to improve MIR tasks. For training, we scale up the unlabeled music data to
240k hours, which is much larger than any public MIR datasets. We iteratively
create and refine the pseudo-labels in the noisy teacher-student training
process. Knowledge expansion is also explored to iteratively scale up the model
sizes from as small as less than 3M to almost 100M parameters. We study the
performance correlation between data size and model size in the experiments. By
scaling up both model size and training data, our models achieve
state-of-the-art results on several MIR tasks compared to models that are
either trained in a supervised manner or based on a self-supervised pretrained
model. To our knowledge, this is the first attempt to study the effects of
scaling up both model and training data for a variety of MIR tasks. | Yun-Ning Hung, Ju-Chiang Wang, Minz Won, Duc Le | 2023-10-02T17:16:47Z | http://arxiv.org/abs/2310.01353v1 | # Scaling up Music Information Retrieval Training with Semi-Supervised Learning
###### Abstract
In the era of data-driven Music Information Retrieval (MIR), the scarcity of labeled data has been one of the major concerns to the success of an MIR task. In this work, we leverage the semi-supervised teacher-student training approach to improve MIR tasks. For training, we scale up the unlabeled music data to 240k hours, which is much larger than any public MIR datasets. We iteratively create and refine the pseudo-labels in the noisy teacher-student training process. Knowledge expansion is also explored to iteratively scale up the model sizes from as small as less than 3M to almost 100M parameters. We study the performance correlation between data size and model size in the experiments. By scaling up both model size and training data, our models achieve state-of-the-art results on several MIR tasks compared to models that are either trained in a supervised manner or based on a self-supervised pretrained model. To our knowledge, this is the first attempt to study the effects of scaling up both model and training data for a variety of MIR tasks.
Yun-Ning Hung\({}^{1}\), Ju-Chiang Wang\({}^{1}\), Minz Won\({}^{1}\), Duc Le\({}^{1}\)\({}^{1}\) SAMI, ByteDance, San Jose, CA, USA Semi-supervised learning, large-scale training, music information retrieval (MIR) benchmark
## 1 Introduction
Large-scale datasets together with increasing model sizes have been recognized as one of the critical factors to build strong machine learning systems. This direction has led to rapid success in many domains, such as natural language processing [1], computer vision [2], and automatic speech recognition (ASR) [3]. Plenty of resources from Internets and publicly available datasets enable the models to scale up to billions of parameters and achieve state-of-the-art results on several downstream tasks.
In the domain of music information retrieval (MIR), the lack of labeled datasets has hindered the advance of several downstream tasks [4, 5]. Although large-scale datasets such as MSD [6] and Jamendo [7] have contributed to significant progress in music auto-tagging, they are still small compared to the datasets available in the aforementioned domains. Other MIR tasks, such as chord recognition and structure segmentation, have an even smaller amount of training data due to the labeling difficulty. For example, labeling time-varying musical events at a millisecond scale, such as beats, requires specialized training and is very labor-intensive.
To tackle this problem, several prior works [8, 9] have investigated leveraging self-supervised approaches to train on large-scale data and transfer the knowledge to downstream tasks. Castellon et al. [10] have shown that probing the pretrained Jukebox model achieves better results in emotion recognition and comparable results in music auto-tagging compared to supervised approaches. Li et al. [9] demonstrated that, by using large-scale data to train a large-scale self-supervised model, the pretrained MERT can outperform the supervised setting in several downstream tasks. However, owing to the distinct characteristics of different MIR downstream tasks, pretrained models have not been observed to outperform some supervised approaches for tasks such as key detection.
In this work, we attempt to scale up the training data and model sizes from a different angle. We leverage semi-supervised learning to train on both labeled and unlabeled datasets. To scale up the model size, we use the training framework of teacher-student knowledge expansion to iteratively expand the size of student models. Teacher-student training has demonstrated its usefulness in several MIR tasks [4, 22, 5]. We push this idea to a new limit by expanding the models from as small as less than 3 million to almost 100 million parameters. To scale up the data size, we collect labeled datasets for various MIR tasks and mix them together, as shown in Table 1. To further expand the training data, two internal datasets (with 1.3 million and 2.4 million full songs respectively) are included, yielding 300 times more data than any other publicly available dataset in the MIR domain (e.g. Jamendo [7] only has 3760 hours).
\begin{table}
\begin{tabular}{l l} \hline \hline Data Sources & Hours \\ \hline RWC [11], Ballroom [12], Beatles [13], USPOP [14], Billboard [15], Hainsworth [16], SMC [12], \\ Isophonics [17], HookTheory [18], HJDB [19], \\ SALAMI [15], Simac [20], HarmonixSet[21], \\ Internal-Structure, Internal-Beat/Key & \\ \hline Audio-92K & 92,000 \\ \hline Music-151K & 151,000 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of each data source.
In the experiments, we investigate the performance correlation between data size and model size. We test the proposed method on four MIR tasks and show that by scaling up both model and training data, the proposed approaches can outperform not only existing self-supervised approaches but also achieve state-of-the-art results on these MIR tasks.
## 2 Methodology
### Training Scheme
We use the noisy student training following previous works [24]. The learning process includes four steps:
1. Train a supervised teacher \(M\) using small training set.
2. Use the trained teacher model to assign pseudo-labels to unlabeled samples.
3. Apply data augmentation to the audio of pseudo-labeled samples, then mix them with the supervised samples to create a new training set.
4. Train a student model with the same or larger size using the new training set.
After the student model is well-trained and outperforms the teacher, we replace the old teacher with the student and repeat steps 2 - 4 to train a new student model.
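The following toy-scale sketch illustrates this loop on random data; the models, optimizer settings, and "noise" are placeholders rather than our actual pipeline, and hard pseudo-labels are used here for brevity (the soft-label variant is described in Sec. 3.2).

```python
# Schematic noisy-student loop (steps 1-4) on toy data; not the paper's pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train(model, x, y, epochs=100, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

x_lab, y_lab = torch.randn(256, 16), torch.randint(0, 3, (256,))   # small labeled set
x_unlab = torch.randn(2048, 16)                                    # large unlabeled set

teacher = train(nn.Linear(16, 3), x_lab, y_lab)                    # step 1: supervised teacher
for width in (32, 64):                                             # grow the student each round
    with torch.no_grad():
        pseudo = teacher(x_unlab).argmax(dim=-1)                   # step 2: pseudo-labels
    x_noisy = x_unlab + 0.1 * torch.randn_like(x_unlab)            # step 3: noise/augmentation
    x_mix = torch.cat([x_lab, x_noisy])                            # ...mixed with supervised data
    y_mix = torch.cat([y_lab, pseudo])
    student = nn.Sequential(nn.Linear(16, width), nn.ReLU(), nn.Linear(width, 3))
    teacher = train(student, x_mix, y_mix)                         # step 4: student becomes teacher
```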
### Model architecture
Fig. 1 depicts the proposed model architecture. The input audio is first transformed into spectrogram. We use six trainable harmonic filters [25, 26] to enhance the spectrogram. Then, \(R\) residual units [27] followed by a stack of spectro-temporal max-pooling layers are used to produce a time-frequency representation. Each residual unit has a kernel size of 3 and \(F\) feature maps, where \(F\) is the spectral dimension in Table 2. We use PerceiverTF [23], one from the Transformer family, as the model architecture. PerceiverTF has shown state-of-the-art performance in multi-instrument music transcription [23], demonstrating its ability to characterize useful information for modeling both pitch- and time-related events. More importantly, PerceiverTF incorporates the Perceiver structure [28] to improve training scalability, making it a good fit for this work over SpecTNT [29]. We use exactly the same Perceiver TF blocks as in [23]. Lastly, a dense layer is added to predict the frame-level labels from the PerceiverTF output. In each training round, we gradually scale up the parameters of PerceiverTF until it is close to 100M, as shown in Table 2.
### Data Augmentation
We use the following augmentations to train the noisy student:
1. Pitch shifting (A1): we randomly shift the pitches in a range of \(\pm 3\) semitones in the experiments.
2. Torchaudio_augmentations1 (A2): with a probability \(p=0.8\), we add audio transformations including white noise (SNR in the range between 0.3 and 0.5 dB), random gains, high/low pass filters, and polarity inversion. Footnote 1: [https://github.com/Spijkervet/torchaudio-augmentations](https://github.com/Spijkervet/torchaudio-augmentations)
3. Frequency masking (A3): we apply two frequency channels masking blocks; each randomly masks out 0-25% of the input frequency bins.
4. Time masking (A4): we apply two time steps masking blocks; each randomly masks out 0-25% of the frames.
We use torchaudio library2 for the frequency and time masks.
Footnote 2: [https://pytorch.org/audio/stable/transforms.html](https://pytorch.org/audio/stable/transforms.html)
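As an illustration, the masking augmentations (A3/A4) can be sketched with torchaudio transforms as below; the mask widths are set to one quarter of the input size to approximate the 0-25% range, and the exact parameters of our pipeline may differ.

```python
# Sketch of the A3/A4 masking blocks using torchaudio.transforms; parameter
# choices here are assumptions, not the paper's exact configuration.
import torch
import torchaudio.transforms as T

def mask_spectrogram(spec: torch.Tensor, n_freq_bins: int, n_frames: int) -> torch.Tensor:
    """Apply two frequency-mask and two time-mask blocks to a (freq, time) spectrogram."""
    freq_mask = T.FrequencyMasking(freq_mask_param=n_freq_bins // 4)  # up to ~25% of bins
    time_mask = T.TimeMasking(time_mask_param=n_frames // 4)          # up to ~25% of frames
    spec = spec.unsqueeze(0)                   # transforms expect (..., freq, time)
    for _ in range(2):                         # two blocks of each mask type
        spec = freq_mask(spec)
        spec = time_mask(spec)
    return spec.squeeze(0)

# Example: mask a toy 128-bin x 600-frame spectrogram.
augmented = mask_spectrogram(torch.randn(128, 600), n_freq_bins=128, n_frames=600)
```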
## 3 Data Preparation
### Data Sources
As summarized in Table 1, we used the datasets in the first row for supervised training. Except for publicly available datasets, we include two in-house datasets for supervised training for two reasons. First, we want to maximize the size of labeled datasets, as Transformer-based architecture is data-hungry. Second, a strong teacher model can build a better foundation to boost the semi-supervised learning process. If the teacher can generate high-quality pseudo-labels, requirements for post-processing and post-filtering can be reduced. Our in-house dataset for structure segmentation (called _Internal-Structure_) contains around 160 hours of music across a variety of styles. Our in-house dataset for downbeat and key detections (called _Internal-Beat/Key_) contains 50k clips and a total size of 580 hours. This dataset covers a wide variety of genres.
For unlabeled datasets, we categorize them into three types. _Datasets of other MIR tasks (Other-MIR)_: for example, songs in downbeat datasets do not have chord annotations, so they can be part of the unlabeled dataset for chord recognition. _Audio-92K_: audio recordings collected internally, covering a wide range of languages, genres, and styles. _Music-151K_: music recordings collected internally from a different source than Audio-92K. It contains mostly English songs. All the datasets used in this work are for research purposes only.
### Pseudo-labeling
Data filtering during the teacher-student learning process is critical to the success of training a student model. In this work, we use the "soft labels" as the pseudo-labels. Specifically, the teacher's output probabilities are used directly as supervision signal for the student, using Cross-Entropy loss. We observe that when the teacher is more confident in a binary prediction,
Figure 1: Model architecture overview.
it tends to output a probability close to one or zero. If the teacher is less confident, it tends to output a probability around 0.5. Using soft labels enables the student model to learn this "degree of uncertainty" information from the teacher as well, so a rule-based filtering scheme is not needed.
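Concretely, the soft-label objective amounts to a cross-entropy between the teacher's output probabilities and the student's predicted distribution, as in the following sketch (illustrative shapes and values only).

```python
# Soft-label cross-entropy: the student is trained against the teacher's full
# output probabilities, so the "degree of uncertainty" is preserved.
import torch
import torch.nn.functional as F

def soft_label_cross_entropy(student_logits: torch.Tensor,
                             teacher_probs: torch.Tensor) -> torch.Tensor:
    """Cross-entropy H(teacher, student), averaged over frames/examples."""
    log_p_student = F.log_softmax(student_logits, dim=-1)
    return -(teacher_probs * log_p_student).sum(dim=-1).mean()

# Example: a confident teacher frame vs. an uncertain one.
teacher_probs = torch.tensor([[0.98, 0.02], [0.55, 0.45]])
student_logits = torch.randn(2, 2, requires_grad=True)
loss = soft_label_cross_entropy(student_logits, teacher_probs)
loss.backward()
```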
## 4 Experiment
### Benchmark Tasks
**Downbeat Tracking** aims to predict the first beat of each bar, and is typically conducted together with beat tracking. In this work, we focus on downbeat tracking, because it still remains a challenge compared to beat tracking. Following previous works [26], the model outputs frame-level probabilities of beat, downbeat, and non-beat per 50 ms. A dynamic Bayesian network implemented in madmom [30] is used as post-processing to decode the timestamps of downbeats. Ballroom, Beatles, Hainsworth, HJDB, RWC, Simac, Harmonix, SMC, and Internal-Beat/key are used as supervised datasets. The training process randomly samples a chunk of 6 seconds from each training song. We select 74 songs from HarmonixSet as validation set. GTZAN [31] is used as test set. We use F-measure implemented in mir_eval[32] for evaluation metric.
**Chord Recognition** is a challenging MIR task due to the lack of training data as well as the complexity of modeling the harmonic relationship in music. The model outputs frame-level probabilities of 25 classes (12 pitches of major and minor chords plus one "none") per 125 ms. We use Isophonics, Billboard, HookTheory training set, RWC, and USPOP for training, and the 2000 songs of HookTheory test set for validation. We use JayChou, one of the benchmark datasets in MIREX,3 as our testing set. JayChou is not used for training by the compared models in the experiments. The evaluation metric is major/minor weighted accuracy in mir_eval[32].
Footnote 3: [https://www.music-ir.org/mirex/wiki/MIREX_HOME](https://www.music-ir.org/mirex/wiki/MIREX_HOME)
**Key Detection** aims to predict the tonal scale and pitch relation across the entire song. It has been a task in MIREX and studied for years. The model outputs the frame-level probabilities of 25 classes (12 major and 12 minor keys plus one "none") per 2 seconds. We use Isophonics, Billboard, HookTheory training set, and Internal-Key/Beat for training, and the 2000 songs of HookTheory test set for validation. Giantsteps [33] is used as test set. The evaluation metric is a refined accuracy (weighted accuracy) in mir_eval[32], with error tolerance that gives partial credits to reasonable errors.
**Structure Segmentation** aims to partition a music recording into non-overlapping segments and predict the functional label (e.g.'verse' and 'chorus') for each segment. Following previous work [34, 35], the model outputs frame-level label probabilities of 7 classes and a frame-level boundary probability, with a frame hop of 192 ms. We use Billboard, SALAMI, RWC, HarmonixSet, and Internal-Structure for training. 200 pieces of songs are randomly picked for validation. Isophonics is used for testing. We use the F-measure of boundary hit rate at 0.5 seconds (_HR.5F_ in mir_eval[32]) for evaluation.
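For reference, the four evaluation metrics described above can be computed with mir_eval roughly as follows; the reference and estimated annotations below are toy placeholders, not data from our experiments.

```python
# Sketch of the mir_eval metric calls for the four benchmark tasks.
import numpy as np
import mir_eval

# Downbeat tracking: F-measure between reference and estimated downbeat times (s).
ref_downbeats = np.array([0.5, 2.5, 4.5])
est_downbeats = np.array([0.48, 2.55, 4.60])
f_downbeat = mir_eval.beat.f_measure(ref_downbeats, est_downbeats)

# Chord recognition: major/minor weighted accuracy over labelled intervals.
ref_intervals = np.array([[0.0, 2.0], [2.0, 4.0]])
ref_labels, est_labels = ["C:maj", "A:min"], ["C:maj", "A:maj"]
durations = mir_eval.util.intervals_to_durations(ref_intervals)
comparisons = mir_eval.chord.majmin(ref_labels, est_labels)
acc_chord = mir_eval.chord.weighted_accuracy(comparisons, durations)

# Key detection: weighted score with partial credit for related keys.
score_key = mir_eval.key.weighted_score("C major", "G major")

# Structure segmentation: boundary hit rate at 0.5 s (HR.5F).
ref_segments = np.array([[0.0, 10.0], [10.0, 20.0]])
est_segments = np.array([[0.0, 10.3], [10.3, 20.0]])
p, r, hr5f = mir_eval.segment.detection(ref_segments, est_segments, window=0.5)
```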
### Experiment Details
As summarized in Table 2, the parameters are chosen as suggested in previous works or through hyperparameter tuning based on the first round of supervised training. We use the same model architecture in the first round to train the supervised teacher model \(M\). Input audio is downsampled to 16 kHz. We use the Adam optimizer, 0.9 weight decay, and a patience of 20 epochs. Each epoch runs 500 mini-batches, and early stopping is applied when the validation score does not improve for 80 epochs. The model with the best validation result is used for testing. We train the models using Nvidia V100-32G GPUs. In total, three rounds of training are conducted with the parameters shown in Table 2. We consider three different data sizes for each round of training: (1) \(D1\) is the Other-MIR dataset de
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & Round & Params & \(R\) & Pool & Hop & Layer & Spec D. & Temp D. & Heads & L. Arr & Augment \\ \hline \hline \multirow{4}{*}{Beat} & 1st & 11M & & & 5 & 32 & 128 & 4 & 4 & A1, A2 \\ & 2nd & 43.3M & 1 & (2, 2) & 160 & 5 & 64 & 256 & 4 & 4 & A1, A2, A4 \\ & 3rd & 91.3M & & & & 6 & 256 & 256 & 4 & 4 & A1, A2, A4 \\ \hline \multirow{4}{*}{Chord} & 1st & 2.9M & & & & 5 & 64 & 64 & 4 & 8 & A1, A2 \\ & 2nd & 11.4M & 2 & (4, 4) & 500 & 5 & 128 & 128 & 4 & 8 & A1, A2, A3 \\ & 3rd & 91.9M & & & & 6 & 256 & 256 & 8 & 8 & A1, A2, A3 \\ \hline \multirow{4}{*}{Key} & 1st & 1.5M & & & & 5 & 128 & 32 & 4 & 4 & A1, A2 \\ & 2nd & 5.9M & 3 & (4, 100) & 320 & 5 & 256 & 64 & 4 & 4 & A1, A2, A3 \\ & 3rd & 42.8M & & & & 8 & 512 & 128 & 8 & 8 & A1, A2, A3 \\ \hline \multirow{4}{*}{Structure} & 1st & 2.9M & & & & 5 & 64 & 64 & 4 & 8 & A1, A2 \\ & 2nd & 11.4M & 1 & (4, 6) & 512 & 5 & 128 & 128 & 4 & 8 & A1, A2, A3 \\ \cline{1-1} & 3rd & 91.9M & & & & 6 & 256 & 256 & 8 & 8 & A1, A2, A3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameters of each task for three rounds of training. “Pool”: the maxpooling sizes (F, T) for frequency and time axes. “Hop”: hop size. “Spec D.” and “Temp D.”: the latent dimensions for spectral and temporal Transformers, respectively [23]. “Heads”: number of attention heads. “L. Arr”: number of latent arrays. “Augment”: augmentation methods (see Section 2.3).
pending on the task; (2) _D2_ is D1 plus Audio-92K; (3) _D3_ is D2 plus Music-151K.
### Baseline System
We compare the proposed work with two types of baseline systems: one is the supervised teacher \(M\) trained from scratch with the labeled data only; the other is based on the pre-trained model _MERT-330M_[9], a state-of-the-art large-scale self-supervised model for music audio.
For the baseline systems based on MERT-330M, two training schemes are considered, namely _MERT-P_ and _MERT-F_. In MERT-P, we freeze the MERT model to extract the deep embeddings, and then add two GRU layers for frame-level classification. We use GRU layers instead of a Multilayer Perceptron (MLP) because they show better performance. In MERT-F, we add a dense layer (i.e., the same as the proposed model) and then fine-tune the MERT model for a fair comparison.
## 5 Results and Discussion
Figure 2 shows the results with different sizes of models and data on four MIR tasks. For most of the tasks, we find that the performance improves as the training rounds unfold. Larger models are more likely to benefit from more data (see D2 or D3). Models trained on D3 perform the best except for chord recognition. This could be due to data mismatch: our D3 comprises mainly English songs, while JayChou contains only Chinese-pop songs. Nevertheless, chord recognition achieves the largest relative improvement compared to its supervised teacher \(M\). Since we do not have larger internal datasets for chord recognition, the Transformer could not perform well in the supervised setting. Among the tasks, key detection benefits the least from the proposed approach. As key is a relatively global musical attribute that does not vary at finer temporal resolution, there is less temporal dependency that the temporal Transformer of PerceiverTF can take advantage of. Due to this constraint, we also do not see further benefits from scaling up the model.
Table 3 summarizes our best results compared to the baselines. It is clear that the proposed models all outperform their supervised teachers \(M\) as well as their self-supervised counterparts, demonstrating the effectiveness of the semi-supervised framework. The proposed models also achieve state-of-the-art performance compared to existing SOTA systems, showing the proposed approach is task-independent. MERT performs well on downbeat tracking and structure segmentation that require more temporal information, but is less successful on chord recognition and key detection that require capturing harmonic information. Fine-tuning MERT seems to be a better choice than probing MERT. The improvement is larger especially on chord recognition and key detection since it allows the pre-trained MERT to learn task-dependent information.
## 6 Conclusion
In this paper, we investigate the effectiveness of large-scale semi-supervised MIR training by scaling up models to almost 100M parameters and training data to 240k hours. Addressing the challenge of scarce labeled MIR data, our experiments demonstrate that the proposed approach offers a better solution for advancing MIR performance beyond supervised or self-supervised learning approaches. For future research, we aim to include more downstream tasks. Similar to other domains, we also plan to explore the impact of further scaling up the training data and model size.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Downbeat & Key & Chord & Structure \\ \hline \hline SpecTNT-TCN [26] & 0.756 & - & - & - \\ CNN [36] & - & 0.743 & - & - \\ JLCX1 [37] & - & - & 0.845 & - \\ MuSFA [35] & - & - & - & 0.598 \\ \hline MERT-P & 0.736 & 0.664 & 0.723 & 0.616 \\ MERT-F & 0.769 & 0.717 & 0.797 & — \\ \hline Supervised \(M\) & 0.753 & 0.731 & 0.802 & 0.594 \\ Proposed & 0.776 & 0.764 & 0.862 & 0.632 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for four MIR tasks. Fine-tuning MERT on structure could not fit into a V100 GPU.
Figure 2: Results of four MIR tasks with different sizes of training data and model parameters. Yellow dashed line represents the results of supervised teacher \(M\). Red dotted line represents the SOTA results of non-semi-supervised training. |
2310.08411 | Revealing the Microscopic Mechanism of Displacive Excitation of Coherent
Phonons in a Bulk Rashba Semiconductor | Changing the macroscopic properties of quantum materials by optically
activating collective lattice excitations has recently become a major trend in
solid state physics. One of the most commonly employed light-matter interaction
routes is the displacive mechanism. However, the fundamental contribution to
this process remains elusive, as the effects of free-carrier density
modification and raised effective electronic temperature have not been
disentangled yet. Here we use time-resolved pump-probe spectroscopy to address
this issue in the Rashba semiconductor BiTeI. Exploring the conventional regime
of electronic interband transitions for different excitation wavelengths as
well as the barely accessed regime of electronic intraband transitions, we
answer a long-standing open question regarding the displacive mechanism: the
lattice modes are predominantly driven by the rise of the effective electronic
temperature. In the intraband regime, which allows to increase the effective
carrier temperature while leaving their density unaffected, the phonon
coherence time does not display significant fluence-dependent variations. Our
results thus reveal a pathway to displacive excitation of coherent phonons,
free from additional scattering and dissipation mechanisms typically associated
with an increase of the free-carrier density. | Peter Fischer, Julian Baer, Moritz Cimander, Volker Wiechert, Oleg Tereshchenko, Davide Bossini | 2023-10-12T15:26:26Z | http://arxiv.org/abs/2310.08411v2 | # Time-resolved Broadband Spectroscopy of Coherent Lattice Dynamics in a Rashba Material
###### Abstract
Spintronics aims at establishing an information technology relying on the spins of electrons. Key to this vision is the spin-to-charge conversion (and vice versa) enabled by spin-orbit coupling, which represents the intimate bond between spins and charges. Here, we set the foundations to manipulate the spin-orbit coupling in Rashba systems in a coherent way at THz frequencies without parasitic electronic dissipation. Our approach relies on the generation of coherent lattice dynamics in BiTeI. Using ultrashort laser pulses ranging from the visible to the mid-infrared spectral range, we establish quantitatively the conditions for an effective generation of coherent THz phonons, with the required symmetry for a manipulation of the Rashba coupling. Furthermore, we conclude an open discussion on the possibility of photoinducing a spin current in BiTeI, proving that this is not the case. We envision a direct application of our pumping scheme in a photoemission experiment, which can directly disclose the THz dynamics of the Rashba splitting.
## I Introduction
The central role of spin-orbit coupling in spintronics arguably makes it the most relevant physical interaction and process in this research field. In particular, in extended systems the spin-orbit interaction can lift the spin-degeneracy of the electron dispersion [1]. This phenomenon is the Rashba effect and it stems from both the atomic spin-orbit coupling and an effective electric field \(E_{z}\) that breaks the crystal inversion symmetry. In the one-particle Hamiltonian describing the electron motion within the crystal potential, the Rashba effect is taken into account by the interaction term
\[H_{\rm R}=-\alpha_{\rm R}(\mathbf{k}\times\mathbf{\sigma})\cdot\mathbf{\hat{z}}, \tag{1}\]
where \(\alpha_{\rm R}\), \(\mathbf{k}\), \(\mathbf{\sigma}\) and \(\mathbf{\hat{z}}\) represent the Rashba constant, the electron wave vector, the Pauli matrices and the unit vector along the direction of the inversion symmetry breaking, respectively. The Rashba effect has been widely employed in the investigation of magneto-transport effects [2; 3]. Recently, even photoinduced modifications of the entire Rashba energy in Eq. (1) (i.e. dynamics of \(\mathbf{k}\), \(\mathbf{\sigma}\) and \(\alpha_{\rm R}\)) have been reported [4]. This impressive amount of experimental and theoretical work has so far addressed heterostructures, where the interface inherently breaks the inversion symmetry. However, heterostructures require non-trivial engineering and fabrication technology, as the quality of the interface determines the functionality of a device [2; 3]. Moreover, from the experimental point of view, heterostructures can be studied almost exclusively by surface-sensitive techniques. However, the Rashba effect was originally predicted for bulk materials. As a matter of fact, bulk semiconductors exhibiting a huge Rashba effect have been discovered, such as BiTeI [5]. The research field of spintronics would benefit greatly from a coherent control of the Rashba coupling in bulk systems on the ultrashort timescale. This scenario would allow arbitrary suppression and enhancement of the Rashba coupling and thus of the spin-to-charge conversion capability. In fact, it would directly open the doors to bulk-compatible spintronic devices able to perform tasks at THz operational frequencies and thus faster than ever before. Furthermore, a fully coherent approach would also not be restricted by parasitic thermal energy dissipation. In relation to the expression of the Rashba Hamiltonian (Eq. (1)), this concept implies a pure modification of \(\alpha_{\rm R}\) whereby the electronic and magnetic degrees of freedom remain unperturbed. Hence, an urgent fundamental question arises: how can a fully coherent control of the Rashba coupling on the ultrashort timescale be realised?
We propose that an excitation of coherent longitudinal optical phonons with an ionic displacement along the symmetry-breaking axis can modulate the effective electric field generating the Rashba coupling. As a first step towards this unprecedented concept, here we experimentally and quantitatively establish the conditions that allow efficient excitation of coherent lattice dynamics with adequate symmetry in BiTeI. In addition, we investigate the optical generation of a spin polarisation in BiTeI, as this concept is also highly relevant for spintronics. The literature reports contrasting claims about this issue [6; 7; 8]. Our experimental results definitively settle the debate, demonstrating that no spin polarisation can be photoinduced in BiTeI on a viable timescale.
## II Methods and Materials
For our experiments we choose the nonmagnetic polar semiconductor BiTeI. Our specimen was grown by a modified Bridgman method using a rotating heat field.
Its layered structure is built from covalently bound trilayers stacked by van der Waals forces. The opposite partial charges of iodine and tellurium ion cores break the inversion symmetry along the stacking direction, generating the electric field \(E_{z}\) responsible for the Rashba effect (see Fig. 1). Hence, not only the surface states exhibit a giant Rashba splitting but also the bulk states [5]. The excitation of a specific A\({}_{\rm 1g}\) mode, i.e., a coherent displacement of iodine and tellurium ion cores in opposite directions but parallel to the stacking, as shown in Fig. 1 (a), is expected to modulate \(E_{z}\) and thus the Rashba splitting for the bulk states. The eigenfrequency of this specific phonon has already been identified via spontaneous Raman scattering [9] and it amounts to 2.7 THz. Since the A\({}_{\rm 1g}\) mode in BiTeI classifies as longitudinal optical phonon, a direct excitation with THz radiation is forbidden by selection rules. However, Raman-active modes can be coherently excited via laser pulses, provided that their duration is shorter than the period of the phonon [10]. Aiming at establishing the most favourable conditions to realise such an excitation, we set out to perform time-resolved pump-probe reflection spectroscopy on BiTeI with a broad spectral range of pump wavelengths. We thus employ two different light sources, both amplified laser systems emitting fs laser pulses at kHz repetition rates (details in the Supplementary Materials). Having access to both of these light sources grants us a broadband tunability of the central photon energy of the excitation beam, ranging from the visible to the mid-infrared. Thus, we are able to employ photon energies both above and below the fundamental band gap of BiTeI. In both setups, the central wavelength of the probe beam is 1.20 \(\rm\upmu m\). However, the duration of the probe pulses differs significantly between the two systems and is 20 fs for the mid-infrared measurements, while it is 90 fs for the other measurements. A quantitative analysis of our data must take into account this pivotal difference (see Supplementary Materials).
The pump-probe technique employed to investigate the coherent lattice dynamics in BiTeI is sketched in Fig. 1 (b). To track the dynamics of the normalised reflection coefficient \(\Delta R/R_{0}\), the probe pulses are reflected off the sample surface with a variable time delay with respect to the arrival of the pump pulses. The intensity of the reflected probe beam is measured with a photo diode in a lock-in scheme. The dynamics of \(\Delta R/R_{0}\) were measured both at room temperature and 10 K. We have also performed time-resolved magneto-optical experiments to investigate a possible photoinduction of a spin polarization in BiTeI. These methods are known to be sensitive to such excitations in semiconductors [11]. The rotation of the polarisation is measured by installing a quarter-wave plate and a Wollaston prism in the optical path of the probe pulse after the reflection.
## III Time-resolved broadband optical spectroscopy
We first measure the transmissivity of a thin film of BiTeI exfoliated from the bulk sample. As shown in Fig. 2 (a) the spectrum is dominated by two transitions. The transmissivity is smaller than 5% in the energy range below \(\beta=0.37\) eV (red) from which it sharply increases. However, near \(\gamma=0.52\) eV (blue) a sharp drop of the transmissivity is observed. The spectrum in Fig. 2 is consistent with the data reported in the literature [14]. Although the highest valence and the lowest conduction bands in BiTeI are all symmetrically of the same character, optical transitions are nevertheless allowed due to the presence of the Rashba effect [12], even between the Rashba-split conduction bands. In fact, BiTeI is a degenerate n-type semiconductor, which implies that the Fermi energy \(E_{\rm F}\) is above the conduction-band minimum [5]. Thus, \(\beta\) represents the highest energy at which such a transition is possible, while \(\gamma\) is the lowest energy at which electrons can be promoted from the highest valence band to the lowest conduction band. The latter is therefore the effective band-gap energy shifted from the fundamental band gap by \(\approx 0.17\) eV due to the Moss-Burstein effect [15]. Note that the energy corresponding to \(\gamma\) is strongly dependent on the density of carriers in the conduction band, which determines \(E_{\rm F}\). Comparing with the literature [14], we estimate the intrinsic conduction-band carrier density in our sample to be \(n_{\rm i}=6\cdot 10^{19}\) cm\({}^{-3}\). On the other hand, due to the linear dispersion of the Rashba-split conduction bands for \(|k|>0.1\) Å\({}^{-1}\), \(\beta\) is far less sensitive to \(E_{\rm F}\) and should only change significantly if \(E_{\rm F}\) is lowered near the conduction band minimum.
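For orientation, in the simplest free-electron picture the Rashba term of Eq. (1) splits a parabolic conduction band into the two branches

\[E_{\pm}(k)=\frac{\hbar^{2}k^{2}}{2m^{*}}\pm\alpha_{\rm R}|k|,\]

with a vertical splitting \(E_{+}(k)-E_{-}(k)=2\alpha_{\rm R}|k|\). This textbook expression is only a guide: in BiTeI the conduction bands deviate from the parabolic form and, as noted above, disperse almost linearly and nearly parallel for \(|k|>0.1\) Å\({}^{-1}\), which is precisely why \(\beta\) depends only weakly on \(E_{\rm F}\).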
Fig. 2 (b) shows the absorption coefficient \(\alpha_{\rm abs}\) of BiTeI digitized from data in the literature [13]. We note that the intrinsic carrier density corresponding to the data shown in Fig. 2 (b) was estimated to be 6 times
Figure 1: (a) Atomic displacement of the 2.7-THz A\({}_{\rm 1g}\) phonon mode in BiTeI. (b) Sketch of the pump-probe setup. The pump pulse (pink) excites coherent lattice dynamics in BiTeI. The probe pulse (cyan) is reflected off the surface of the sample with a variable time delay, allowing to track the dynamics of the reflection coefficient \(\Delta R/R_{0}\).
smaller than in our sample [13]. However, this difference affects only the exact value of the energy corresponding to \(\gamma\). The values of the wavelength chosen for the pump beam in our experiments are marked in Fig. 2 (b). For the interband transitions (0.52 um to 2.08 um) \(\alpha_{\text{abs}}\) is of the order of \(10^{5}\) cm\({}^{-1}\). These considerations about the absorption coefficient of BiTeI are relevant to our objective of impulsive excitation of coherent lattice dynamics. In particular, this process can be successfully induced in two different light-matter interaction regimes. If dissipation is negligible (i.e. negligible absorption of the pump beam), impulsive stimulated Raman scattering (ISRS) of elementary excitations can be realised [10; 16]. On the other hand, the absorption of light by electrons can also lead to an impulsive force triggering Raman-active phonons. In this case a displacive excitation of coherent phonons (DECP) is achieved [10]. We note that the ISRS scenario seems favourable in view of our goal, as it allows to exclusively excite the quasiparticles of interest, without parasitic electronic dissipation. As these two mechanisms were established in dielectrics and semiconductors with a much more simple band structure than BiTeI, a spectrally broad tunability of the pump-photon energy is key for our purposes. In particular, it can be expected that the interband transitions in BiTeI may allow a DECP generation [10; 17] of the A\({}_{\text{1g}}\) lattice mode. However, the role of the incoherent electronic background must be quantitatively assessed by an experimental procedure, to realise the conditions under which such background does not hinder the coherent lattice dynamics [18]. The situation for the intraband transitions is less obvious, as it has not yet been explored. The intraband absorption coefficient is an order of magnitude smaller than in the case of interband processes (33 times lower at 7.5 um than at 0.52 um). Importantly, it is not straightforward to predict how the vastly different natures of intraband and interband processes affect the photoinduced coherent lattice dynamics.
Fig. 3 (a) shows the dynamics of the normalised reflection coefficient \(\Delta R/R_{0}\) photoinduced by laser pulses with a wavelength of 0.52 um. The pulse duration is 198 fs and the fluence is set to \(\Phi=0.77\) mJ/cm\({}^{2}\). The signal consists of two components, (i) coherent oscillations with a frequency of 2.7 THz superimposed to (ii) an exponentially decaying background with an amplitude of \(\approx 15\cdot 10^{-3}\). We ascribe the incoherent component of the observed signal to the relaxation dynamics and thermalisation processes of the photoinduced electrons reported in the literature [7]. As we are interested in the coherent response of the lattice, we subtract the incoherent background (details of data analysis in Supplementary Materials) and analyse the oscillatory pattern depicted in Fig. 3 (b). The peak-to-peak amplitude of the oscillations amounts to \(\approx 1\cdot 10^{-3}\) and decays in approximately 7 ps. A Fourier transform of the time trace in Fig. 3 (b) reveals the spectrum of the oscillations displayed in Fig. 3 (c), which we fit with a Lorentzian function (see Supplementary Materials). As a result we estimate the center frequency to be \(f_{0}=2.76\) THz and the lifetime to be \(\tau=1.82\) ps. These values are consistent with the particular A\({}_{\text{1g}}\) mode, in which the atomic displacement has the correct symmetry to modulate \(E_{z}\), as previously determined by spontaneous Raman scattering [9].
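A minimal sketch of this analysis chain (background subtraction, Fourier transform, Lorentzian fit) is given below for illustration; the synthetic trace and the fit starting values only mimic the measured signal and are not our actual analysis code.

```python
# Sketch: isolate the coherent oscillations, Fourier transform them and extract
# f0 and the lifetime from a Lorentzian fit to the power spectrum.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 10e-12, 20e-15)                 # delay axis, 20 fs steps
signal = 15e-3 * np.exp(-t / 3e-12) \
         + 0.5e-3 * np.exp(-t / 1.8e-12) * np.cos(2 * np.pi * 2.76e12 * t)

# 1) Fit and subtract the incoherent (exponential) electronic background.
bg = lambda t, a, tau, c: a * np.exp(-t / tau) + c
popt, _ = curve_fit(bg, t, signal, p0=(1e-2, 1e-12, 0))
osc = signal - bg(t, *popt)

# 2) Fourier transform of the coherent residual (power spectrum).
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spec = np.abs(np.fft.rfft(osc)) ** 2

# 3) Lorentzian fit: the FWHM Gamma (in Hz) relates to the lifetime via tau = 1/(pi*Gamma).
lor = lambda f, A, f0, gamma: A * (gamma / 2) ** 2 / ((f - f0) ** 2 + (gamma / 2) ** 2)
(p_A, p_f0, p_gamma), _ = curve_fit(lor, freqs, spec, p0=(spec.max(), 2.7e12, 0.2e12))
lifetime = 1 / (np.pi * p_gamma)
```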
We investigate the dependence of the electronic background and the oscillation amplitude on the pump wavelength. We perform measurements of \(\Delta R/R_{0}\) employing the pump wavelengths indicated in Fig. 2 (b). For each pump wavelength we analyze the dependence of electronic-background and oscillation amplitude, \(A_{\text{e}}\) and \(A_{\text{osci}}\), on the excitation fluence \(\Phi\). The results are reported in Fig. 4 (a) and (b), in which intraband and interband processes are color coded. We calculate the values of the absorbed fluence for every pump wave
Figure 2: (a) Transmissivity measurement performed with an FTIR on a μm-thin film exfoliated from a bulk specimen of BiTeI. The inset shows the band structure of BiTeI obtained by first-principles calculations [12]. The transition \(\beta\) (red dashed line at 0.37 eV) represents the highest energy for which a transition between the Rashba-split bands is possible and \(\alpha\) (green) the lowest. The transition \(\gamma\) (blue dashed line at 0.52 eV) represents the lowest energy for which electrons can be excited from the valence band to a free state above the Fermi energy \(E_{\text{F}}\) (dashed black line). In the narrow window between \(\beta\) and \(\gamma\) lies a range of increased transmission. (b) Spectral absorption coefficient of BiTeI digitized from [13]. The triangular markers indicate the values of the wavelength of the pump beam employed in our experiment.
length, relying on the literature [13] (see Supplementary Materials).
First, we focus on the interband excitation, highlighted in the figure by the blue background. For each pump wavelength we find a linear dependence of \(A_{\mathrm{e}}\) on \(\Phi\), indicating a strictly linear excitation of the electronic system. As a general observation, Fig. 4 (a) shows that the slope of the linear fluence dependence of \(A_{\mathrm{e}}\) decreases as the excitation wavelength increases. This observation is consistent with the spectral absorption coefficient shown in Fig. 2, with the longest wavelength employed in our experiment (2.08 um) approaching the transparency range of BiTeI (\(\gamma\) transition, Fig. 2). The data set obtained for 0.75 um deviates from this trend. We observe that the duration of the pump pulses for this wavelength is 4 times shorter than in all the other cases.
In Fig. 4 (b) the dependence of \(A_{\mathrm{osci}}\) on \(\Phi\) is depicted, in which \(A_{\mathrm{osci}}\) is obtained as in Fig. 3 (c) for each measurement. We find linear fluence dependencies of \(A_{\mathrm{osci}}\) with different slopes for different excitation wavelengths. These results are fully consistent with the DECP mechanism [10]. Also in the case of \(A_{\mathrm{osci}}\), the data obtained by pumping BiTeI with 0.75 um laser pulses reveal a stronger signal. The already mentioned shorter pulse duration for this excitation wavelength plays a major role in the discussion of \(A_{\mathrm{osci}}\). In fact both ISRS and DECP are impulsive processes and thus tend to be more efficient as the excitation pulses approach the ideal temporal profile of a delta function [10]. Additional to this consideration, the spectrum of the Raman cross section of the A\({}_{\mathrm{1g}}\) mode directly determines the spectral dependence of \(A_{\mathrm{osci}}\). Although we do not observe sharp resonances in Fig. 4 (b), we ascribe the weakest response observed at 1.04 um to a minimum of the cross section.
Next we turn to the light-matter interaction based on intraband processes highlighted in Fig. 4 by the red background. The laser source for the mid-infrared excitation (central wavelength 7.5 um) is completely self-built and has recently been characterised [19]. For the dependence of \(A_{\mathrm{e}}\) on \(\Phi\) we find a linear dependence. Considering the high value of the fluence employed (up to 10 mJ/cm\({}^{2}\)), it is rather surprising that no traces of saturation are observed. In fact, the absorbed fluence corresponds to an estimated excited carrier density of
Figure 4: Analysis of the dependence of the amplitudes of electronic background \(A_{\mathrm{e}}\) and 2.7-THz oscillations \(A_{\mathrm{osci}}\) on the fluence \(\Phi\) in the \(\Delta R/R_{0}\) measurement signal. Pump wavelengths (pulse durations) are indicated in the top panel. For each measurement series, \(A_{\mathrm{e}}\) (a) and \(A_{\mathrm{osci}}\) (b) show a linear dependence on the fluence. The dotted lines serve as guides to the eye.
Figure 3: (a) Dynamics of the normalised reflection coefficient of BiTeI photoinduced by 0.52 μm laser pulses. The fluence was set to 0.77 mJ/cm\({}^{2}\). (b) Coherent component of the transient reflection coefficient isolated from the incoherent background.(c) Spectrum of the time trace shown in panel (b), obtained via Fourier transform. The solid black line represents a fit with a Lorentzian function.
the order of \(\approx 10^{21}\) cm\({}^{-3}\) (see Supplementary Materials), which is two orders of magnitude larger than the estimated conduction-band charge-carrier density. However, in the literature [6; 7] it has been reported that photoexcited electrons scatter on a time scale shorter than 80 fs. As the pulse duration of the 7.5-\(\upmu\)m pump pulses is 180 fs, the conduction-band electrons are rapidly redistributed in momentum space within the duration of the optical excitation, thus preventing the absorption from saturating. In addition, we note that at the lowest fluence of 3.7 mJ/cm\({}^{2}\) the electronic background possesses an amplitude of \(5\cdot 10^{-3}\). Although this value is much smaller than for interband excitation at comparable fluences, it is not negligible. It follows that also in this case significant electronic dissipation takes place, implying that the light-matter interaction regime falls within the DECP description. We thus conclude that the pure ISRS scenario cannot be realised in BiTeI, in view of the photoinduced dynamics of the electrons already present in the conduction bands (quantified by \(A_{\text{e}}\)).
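For orientation, the order of magnitude of this estimate follows from counting absorbed photons within one absorption depth, \(n\approx\alpha_{\rm abs}\,\Phi_{\rm abs}/E_{\rm photon}\). The small sketch below uses an assumed intraband absorption coefficient and is only meant to reproduce the scale of the number quoted above.

```python
# Back-of-the-envelope excited-carrier density from the absorbed fluence.
# The alpha value is an assumed order of magnitude at 7.5 um, not a measured quantity.
alpha_abs = 3e3                 # cm^-1, assumed intraband absorption coefficient
fluence_abs = 10e-3             # J/cm^2, absorbed fluence (upper end of the range used)
e_photon = 0.165 * 1.602e-19    # J, photon energy at 7.5 um (~0.165 eV)

n_excited = alpha_abs * fluence_abs / e_photon   # cm^-3
print(f"n ~ {n_excited:.1e} cm^-3")              # of order 10^21 cm^-3
```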
It is now crucial to compare the coherent lattice dynamics observed when pumping BiTeI in the mid-infrared spectral range with the results obtained for interband transitions. Also in the case of intraband processes \(A_{\text{osci}}\) exhibits a linear dependence on \(\Phi\), consistent with the DECP picture [17]. The increase of \(A_{\text{osci}}\) with \(\Phi\) is significantly flatter than in the 2.08-\(\upmu\)m measurement. We observe that in the time traces of \(\Delta R/R_{0}\) obtained by pumping with 7.5 \(\upmu\)m laser pulses a second eigenmode is visible. The Fourier analysis of this additional feature allows its frequency to be estimated as 4.5 THz (see Supplementary Materials). The value matches another A\({}_{\text{1g}}\) lattice mode [9], whose atomic displacement cannot modulate \(E_{z}\). The frequency of this mode demands sub-100-fs resolution for its successful detection. This condition is fulfilled by our mid-infrared optical setup while it is not by the visible and near-infrared system. Comparing all data sets presented in Fig. 4 (a) and (b), we conclude that \(\Delta R/R_{0}\) shows a coherent 2.7-THz A\({}_{\text{1g}}\) mode induced via DECP, even for pump photon energies smaller than the band gap, with \(A_{\text{e}}\gg A_{\text{osci}}\). Although we have already ruled out the possibility of pure ISRS excitation, our broadband time-resolved spectroscopy and its quantitative analysis suggest that it is far more favorable and efficient to excite coherent phonons in BiTeI in the visible spectral region. This statement challenges the commonly accepted wisdom, based on the idea of minimizing the optical absorption as much as possible in order to induce only the elementary excitations of interest while leaving the other degrees of freedom unperturbed. Our conclusion, while counter-intuitive, is based on the quantitative analysis of our results: the same oscillatory amplitude \(A_{\text{osci}}\) induced by mid-infrared laser pulses can also be observed employing visible pump pulses (0.52 \(\upmu\)m) at a three times lower excitation fluence. Our experiments provide this conclusion despite the striking observation that in BiTeI \(\alpha_{\text{abs}}\) for 0.52 \(\upmu\)m is more than 33 times higher than for 7.5 \(\upmu\)m (Fig. 2 (b)).
Throughout all of our datasets the frequency of the A\({}_{\text{1g}}\) phonon is constant. The lifetime shows minor deviations for \(\Phi>1.26\) mJ/cm\({}^{2}\), ascribed to electron-phonon scattering (see Supplementary Materials). The photoinduced coherent lattice dynamics in BiTeI thus lies completely in a linear regime. In addition, we have also performed measurements for visible and near-infrared pump pulses at a sample temperature of 10 K. The analysis of these data, presented in the Supplementary Materials, does not reveal significant differences compared to the room-temperature measurements.
At last we investigate the possibility of generating a photoinduced spin polarization by exciting BiTeI with circularly polarized light. In the literature, this concept is controversial. In fact, on the one hand, time-resolved photo-emission spectroscopy (tr-ARPES) investigations [6; 7] have shown that no spin polarization could be detected. On the other hand, optical spectroscopy has revealed traces of a spin polarisation on a 100 fs timescale [8]. As different methods report different results, we set out to settle this debate by measuring time-resolved magneto-optical effects, which are sensitive to the spin polarisation [11]. The measurements of the rotation of the probe polarisation \(\Delta\Theta\) induced by different polarisation states of the pump beam are shown in Fig. 5. We compare the excitation with left (\(\sigma_{-}\), red) and right (\(\sigma_{+}\), red) circularly polarized light to that with linearly (\(\sigma_{0}\), black) polarized light and focus on the 100-fs time scale. No polarization-dependent excitation occurs. We note that the report of a photoinduced spin polarisation relied on purely optical spectroscopy [8]. However, because of the selection rules of optical transitions, this method is not able to detect a net accumulation of spins.
Figure 5: Probe-pulse polarization rotation upon the excitation with left (\(\sigma_{-}\), red) and right (\(\sigma_{+}\), red) circularly polarized light as well as linearly (\(\sigma_{0}\), black) polarized light. No asymmetry is observed that would indicate a photoinduced spin polarization.
## Conclusion
We have successfully demonstrated the coherent excitation of the 2.7-THz A\({}_{1\text{g}}\) mode in BiTeI in a wide spectral range reaching from the visible to the mid-infrared. The symmetry of this particular mode, i.e., the displacement of iodine and tellurium atoms parallel to the stacking but in different directions, has the potential to manipulate the Rashba spin splitting in the band structure. Thus, the main, albeit counter-intuitive, conclusion of our work is that excitation in the mid-infrared with photon energies smaller than the band gap cannot compete with excitation in the visible spectral range, although it is associated with the lowest values of the absorption coefficient. In addition, working with visible laser pulses is both technically much simpler and economically more favorable, since, in contrast to the still demanding mid-infrared range, standard components can be used. To observe the coherent modulation of the Rashba splitting in BiTeI, we propose a combination of our pumping scheme with tr-ARPES. The electronic background appearing in our data would not hinder the tr-ARPES detection of the dynamics of the Rashba splitting, owing to the momentum-selectivity of this technique.
|
2310.07722 | Realizing algebraic 2-complexes by cell complexes | The realization theorem asserts that for a finitely presented group G, the
D(2) property and the realization property are equivalent as long as G
satisfies a certain finiteness condition. We show that the two properties are
in fact equivalent for all finitely presented groups. | Wajid Mannan | 2023-08-24T16:13:07Z | http://arxiv.org/abs/2310.07722v1 | # Realizing algebraic 2-complexes by cell complexes
# Realizing algebraic 2-complexes by cell complexes
W. H. Mannan
**MSC**: 57M20, 20F05, 16E05, 16E10
**Keywords** 2-complex, Wall's D(2) problem, Realisation problem.
**Abstract** The realization theorem asserts that for a finitely presented group \(G\), the D(2) property and the realization property are equivalent as long as \(G\) satisfies a certain finiteness condition. We show that the two properties are in fact equivalent for all finitely presented groups.
## 1 Introduction
An open question in low dimensional topology is: Given a finite cell complex of cohomological dimension \(n\) (with respect to all coefficient bundles) must it be homotopy equivalent to a finite \(n\)-complex (that is a finite cell complex whose cells are all of dimension less than or equal to \(n\))? In 1965 Wall introduced the problem and showed that a counter example must have \(n\leq 2\) (see [4]). The Stallings-Swan theorem (proved a few years later - see [3]) implied that \(n>1\). So a counter example must be (up to homotopy equivalence) a finite 3-complex of cohomological dimension 2, which is not homotopy equivalent to any finite 2-complex. The question of whether or not such a complex exists is known as 'Wall's D(2) problem'.
This question may be parameterized by fundamental group. That is, we say a finitely presented group \(G\) satisfies the D(2) property if every connected finite 3-complex with fundamental group \(G\) is homotopy equivalent to a finite 2-complex. Wall's D(2) problem then becomes: "Does every finitely presented group \(G\) satisfy the D(2) property?".
We say a module over \(\mathbb{Z}[G]\) is f.g. free if it has a finite basis and we say a module \(S\) is f.g. stably free if \(S\oplus F_{1}\cong F_{2}\), where \(F_{1}\) and \(F_{2}\) are f.g. free.
Another property which a finitely presented group \(G\) may satisfy is the realization property.
**Definition 1**.: _An algebraic 2-complex over \(G\) is an exact sequence of f.g. stably free modules over \(\mathbb{Z}[G]\):_
\[S_{2}\stackrel{\partial_{2}}{\rightarrow}S_{1}\stackrel{\partial_{1}}{\rightarrow}S_{0}\]
_where coker(\(\partial_{1}\))\(\cong\mathbb{Z}\)._
We say a finitely presented group \(G\) satisfies the realization property if every algebraic 2-complex over \(G\) is 'geometrically realizable'. That is, it is chain homotopy equivalent to \(C_{*}(X)\), the chain complex (over \(\mathbb{Z}[G]\)) arising from the universal cover of a finite 2-complex \(X\), with \(\pi_{1}(X)=G\).
This property may be stated in purely algebraic terms. Given a connected finite 2-complex \(X\) we may quotient out a maximal tree in its 1-skeleton to attain a Cayley complex \(Y\) in the same homotopy type as \(X\). The process of going from a presentation of \(G\) to \(C_{*}(Y)\) (where \(Y\) is the Cayley complex of the presentation) is a formulaic application of Fox's free differential calculus (see [1], §48). Hence a finitely presented group \(G\) satisfies the realization property precisely when every algebraic 2-complex over \(G\) arises (up to chain homotopy equivalence) from a finite presentation of \(G\).
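As a standard illustration of this construction (recalled here for orientation; it plays no role in the argument below), consider the one-relator presentation \(\langle x\mid x^{n}\rangle\) of the cyclic group \(C_{n}\). The Fox derivative of the relator is \(\partial(x^{n})/\partial x=1+x+\cdots+x^{n-1}\), so the Cayley complex \(Y\) of this presentation has cellular chain complex

\[\mathbb{Z}[C_{n}]\stackrel{1+x+\cdots+x^{n-1}}{\longrightarrow}\mathbb{Z}[C_{n}]\stackrel{x-1}{\longrightarrow}\mathbb{Z}[C_{n}],\]

where the maps are multiplication by the indicated elements. Here coker\((x-1)\cong\mathbb{Z}\), and the image of multiplication by \(1+x+\cdots+x^{n-1}\) coincides with the kernel of multiplication by \(x-1\), so this is an algebraic 2-complex over \(C_{n}\) in the sense of Definition 1.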
Our main result is that the 'geometric' D(2) property is equivalent to the 'algebraic' realization property:
**Theorem 1.1**.: _A finitely presented group \(G\) satisfies the D(2) property if and only if it satisfies the realization property._
In one direction this is proved in [1], Appendix B:
**Weak Realization Theorem:** The realization property for \(G\) implies the D(2) property for \(G\).
In the other direction, it is sufficient to show that every algebraic 2-complex \(\mathcal{A}\) over \(G\) is chain homotopy equivalent to \(C_{*}(Y)\) for some finite cell complex \(Y\). Then the D(2) property for \(G\) would imply that \(Y\) was homotopy equivalent to a finite 2-complex \(X\), which would geometrically realize \(\mathcal{A}\).
Johnson does this in [1], Appendix B, subject to the existence of a truncated resolution:
\[F_{3}\to F_{2}\to F_{1}\to F_{0}\rightarrow\mathbb{Z}\to 0\]
where the \(F_{i}\) are f.g. free modules over \(\mathbb{Z}[G]\). In the next section we use a stronger "eventual stability" result ([2], theorem 1.1), to show it may be done for any finitely presented group \(G\).
## 2 Realizing algebraic 2-complexes by finite cell complexes
Let \(G\) be a finitely presented group and let \(\mathcal{A}\) denote the following algebraic 2-complex over \(G\):
\[S_{2}\stackrel{{ d_{2}}}{{\rightarrow}}S_{1}\stackrel{{ d_{1}}}{{ \rightarrow}}S_{0}\]
**Theorem 2.1**.: _We may realize \(\mathcal{A}\) up to chain homotopy equivalence by a finite geometric 3-complex._
Proof.: Let \(Y_{1}\) denote the Cayley complex of a finite presentation of \(G\). Then \(C_{*}(Y_{1})\) is an algebraic \(2\)-complex:
\[C_{2}\stackrel{{\partial_{2}}}{{\rightarrow}}C_{1}\stackrel{{ \partial_{1}}}{{\rightarrow}}C_{0}\]
and the \(C_{i}\) are f.g. free modules over \(\mathbb{Z}[G]\).
Let \(C\cong S_{2}\oplus C_{1}\oplus S_{0}\) and let \(S\cong C_{2}\oplus S_{1}\oplus C_{0}\). Also let \(Q\) be a f.g. free module such that \(C\oplus Q\) and \(S\oplus Q\) are f.g. free. Let \(\vec{e_{1}},\cdots,\vec{e_{n}}\) denote a basis for \(C\oplus Q\) and let \(Y_{2}\) denote the wedge of \(Y_{1}\) with \(n\)\(2\)-spheres. Hence \(C_{*}(Y_{2})\) is:
\[C_{2}\oplus(C\oplus Q)\stackrel{{\partial_{2}\oplus 0}}{{ \rightarrow}}C_{1}\stackrel{{\partial_{1}}}{{\rightarrow}}C_{0}\]
Let
\[\mathcal{A}^{\prime}=S\oplus Q\stackrel{{\iota}}{{\rightarrow}}S_ {2}\oplus(S\oplus Q)\stackrel{{ d_{2}\oplus 0}}{{\rightarrow}}S_{1} \stackrel{{ d_{1}}}{{\rightarrow}}S_{0}\]
where \(\iota\) denotes the natural inclusion into the second summand. The natural inclusion
\(\mathcal{A}\hookrightarrow\mathcal{A}^{\prime}\) is a simple homotopy equivalence.
By [2], theorem 1.1 we have a chain homotopy equivalence:
\[\left\{S_{2}\oplus S\stackrel{{ d_{2}\oplus 0}}{{\rightarrow}}S_ {1}\stackrel{{ d_{1}}}{{\rightarrow}}S_{0}\right\}\stackrel{{ \iota}}{{\rightarrow}}\left\{C_{2}\oplus C\stackrel{{ \partial_{2}\oplus 0}}{{\rightarrow}}C_{1}\stackrel{{\partial_{1}}}{{ \rightarrow}}C_{0}\right\}\]
By extending this to the identity on \(Q\) we get a chain homotopy equivalence:
\[\begin{array}{ccccc}S_{2}\oplus(S\oplus Q)&\stackrel{d_{2}\oplus 0}{\rightarrow}&S_{1}&\stackrel{d_{1}}{\rightarrow}&S_{0}\\ \downarrow\phi_{2}&&\downarrow\phi_{1}&&\downarrow\phi_{0}\\ C_{2}\oplus(C\oplus Q)&\stackrel{\partial_{2}\oplus 0}{\rightarrow}&C_{1}&\stackrel{\partial_{1}}{\rightarrow}&C_{0}\end{array}\]
We pick a basis \(\vec{f}_{1},\cdots,\vec{f}_{m}\) for \(S\oplus Q\). Then for each \(i\) we have \(\phi_{2}\iota\vec{f}_{i}\in H_{2}(\tilde{Y_{2}};\mathbb{Z})\). The Hurewicz isomorphism theorem implies an isomorphism \(h:H_{2}(\tilde{Y_{2}};\mathbb{Z})\tilde{\to}\pi_{2}(\tilde{Y_{2}})\) and the covering map induces an isomorphism \(p:\pi_{2}(\tilde{Y_{2}})\tilde{\to}\pi_{2}(Y_{2})\).
For each \(i\in\{1,\cdots,m\}\), let \(E_{i}\) be a 3-ball and let \(\psi_{i}:\partial E_{i}\to Y_{2}\) denote a map representing the homotopy element \(ph\phi_{2}\iota\vec{f}_{i}\), where we identify \(S^{2}\) with \(\partial E_{i}\).
Let \(Y\) denote the 3-complex constructed by attaching the 3-cells \(E_{i}\) to \(Y_{2}\) via the attaching maps \(\psi_{i}\). Then \(C_{*}(Y)\) is given by:
\[S\oplus Q\stackrel{\phi_{2}\iota}{\to}C_{2}\oplus(C\oplus Q)\stackrel{\partial_{2}\oplus 0}{\to}C_{1}\stackrel{\partial_{1}}{\to}C_{0}\]
We now have a chain homotopy equivalence \(\mathcal{A}^{\prime}\tilde{\to}C_{*}(Y)\):
\[\begin{array}{ccccccc}S\oplus Q&\stackrel{\iota}{\to}&S_{2}\oplus(S\oplus Q)&\stackrel{d_{2}\oplus 0}{\to}&S_{1}&\stackrel{d_{1}}{\to}&S_{0}\\ \downarrow 1&&\downarrow\phi_{2}&&\downarrow\phi_{1}&&\downarrow\phi_{0}\\ S\oplus Q&\stackrel{\phi_{2}\iota}{\to}&C_{2}\oplus(C\oplus Q)&\stackrel{\partial_{2}\oplus 0}{\to}&C_{1}&\stackrel{\partial_{1}}{\to}&C_{0}\end{array}\]
Thus \(\mathcal{A}\) is geometrically realized by \(Y\).
In particular, if the D(2) property holds for \(G\), then we may find a finite 2-complex \(X\) in the homotopy type of \(Y\), as \(Y\) is cohomologically 2-dimensional (the cohomology of \(Y\) is an invariant of the chain homotopy type of \(C_{*}(Y)\)). Thus \(\mathcal{A}\) is geometrically realized by the 2-complex \(X\).
Hence the D(2) property for \(G\) implies the realization property for \(G\). From the Weak Realization Theorem ([1], Appendix B) we know that the realization property for \(G\) implies the D(2) property for \(G\). Our proof of theorem 1.1 is therefore complete.
|
2308.06676 | Sum of two rare sets in a category base can be absolutely non-baire | In this paper, we give generalized version in category bases of a result of
Kharazishvili dealing with absolute nonmeasurability of the Minkowski sum of
certain universal measure zero sets which were based on an earlier result of
Erdos, Kunen and Mauldin in the real line. | Sanjib Basu, abhit Chandra Pramanik | 2023-08-13T03:45:35Z | http://arxiv.org/abs/2308.06676v1 | # Sum of two rare sets in a category base can be absolutely non-Baire
###### Abstract.
In this paper, we give a generalized version (in category bases) of a result of Kharazishvili dealing with absolute nonmeasurability of the Minkowski sum of certain universal measure zero sets, which was based on an earlier result of Erdos, Kunen and Mauldin in the real line.
Key words and phrases:Polish topological vector space, continuum hypothesis, Minkowski's sum, perfect base, perfect translation base, linearly invariant family, countable chain condition (c.c.c), complementary bases, equivalent bases, rare sets, Bernstein sets, meager sets 2020 Mathematics Subject Classification: 28A05, 54A05, 54E52 The second author thanks the CSIR, New Delhi - 110001, India, for financial support.
## 1. Introduction
Based on Theorem 1.2, Kharazishvili further proved [6] (see also [5]) that
**Theorem 1.4**.: _Under Martin's Axiom, in any uncountable Polish topological vector space there exist generalized Luzin sets \(L_{1}\) and \(L_{2}\) such that \(L_{1}+L_{2}\) is a Bernstein set._
The Luzin set [8] (see also [3]) in the real line was constructed by Luzin in 1914 using the continuum hypothesis. Although a similar construction was given earlier, in 1913, by Mahlo [8], in the literature this set is commonly known as the Luzin set, probably because Luzin investigated these sets more thoroughly and proved a number of their important properties. The construction of a generalized Luzin set (under Martin's Axiom) in \(\mathbb{R}\) imitates the classical construction of Luzin.
The dual of a Luzin set in \(\mathbb{R}\) is the Sierpinski set [8] (see also [3]) constructed by Sierpinski in 1924 using the same continuum hypothesis. It is an uncountable set of reals having countable intersection with every set of Lebesgue measure zero. The construction of a generalized Sierpinski set (under Martin's Axiom) [8] (see also [3]) imitates the classical construction of Sierpinski.
The concept of a Luzin set and its dual, the Sierpinski set, can be unified under the general concept of a 'rare set' [9] in category bases. In this article, we formulate, under the continuum hypothesis, generalized versions of Theorem 1.2 and Theorem 1.4 in category bases with the underlying set having a Polish topological vector space structure. Since under the continuum hypothesis a generalized rare set is a rare set, we state our results in terms of rare sets instead of generalized rare sets.
## 2. Preliminaries and Results
The concept of a category base is a generalization of both measure and topology. Its main objective is to present both measure and Baire category (topology), and also some other aspects of point set classification, within a common framework. It was introduced by J. C. Morgan II [9] in the seventies of the last century and has since been developed through a series of papers [10], [11], [12], [13], etc.
**Definition 2.1** ([9]).: A pair \((X,\mathcal{C})\) where \(X\) is a non-empty set and \(\mathcal{C}\) is a family of subsets of \(X\) is called a category base if the non-empty members of \(\mathcal{C}\) called regions satisfy the following set of axioms :
1. Every point of \(X\) is contained in at least one region; i.e., \(X=\cup\mathcal{C}\).
2. Let \(A\) be a region and \(\mathcal{D}\) be any non-empty family of disjoint regions having cardinality less than the cardinality of \(\mathcal{C}\). (a) If \(A\cap(\cup\mathcal{D})\) contains a region, then there exists a region \(D\in\mathcal{D}\) such that \(A\cap D\) contains a region. (b) If \(A\cap(\cup\mathcal{D})\) contains no region, then there exists a region \(B\subseteq A\) which is disjoint from every region in \(\mathcal{D}\).
Several examples of category bases may be found in [9] and also in other references as stated in the above paragraph.
**Definition 2.2** ([9]).: In a category base \((X,\mathcal{C})\), a set is called 'singular' if every region contains a subregion which is disjoint from the set itself. A set which can be expressed as a countable union of singular sets is called 'meager'. Otherwise, it is called 'abundant'. A set is called a 'Baire set' if every region contains a subregion in which either the set or its complement is meager. The class of all meager (resp. Baire) sets in a category base \((X,\mathcal{C})\) is here denoted by the symbol \(\mathcal{M}(\mathcal{C})\) (resp. \(\mathcal{B}(\mathcal{C})\)).
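For orientation, two standard special cases of these notions may be kept in mind. If \(X\) is a topological space and \(\mathcal{C}\) is the family of its non-empty open sets, then the singular sets are the nowhere dense sets, the meager sets are the sets of the first category and the Baire sets are the sets having the Baire property. If \(X=\mathbb{R}\) and \(\mathcal{C}\) is the family of closed sets of positive Lebesgue measure, then the meager sets are the Lebesgue null sets and the Baire sets are the Lebesgue measurable sets.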
**Definition 2.3** ([9]).: A category base \((X,\mathcal{C})\) is called
1. 'point-meager' if every singleton set in it is meager.
2. a 'Baire base' if every region in it is abundant.
3. a 'perfect base' if \(X=\mathbb{R}^{n}\) and \(\mathcal{C}\) is a family of perfect subsets of \(\mathbb{R}^{n}\) such that for every region \(A\) in \(\mathcal{C}\) and for every point \(x\in A\), there is a descending sequence \(\{A_{n}\}_{n=1}^{\infty}\) of regions such that \(x\in A_{n}\) and \(\mathrm{diam}(A_{n})\leq\frac{1}{n}\), \(n=1,2,\ldots\); and
4. a 'translation base' if \(X=\mathbb{R}\) with (a) \(\mathcal{C}\) a translation invariant family, i.e., \(E\in\mathcal{C}\) and \(r\in\mathbb{R}\) implies \(E(r)\in\mathcal{C}\), where \(E(r)=\{x+r:x\in E\}\), and (b) \(\bigcup\limits_{r\in D}A(r)\) abundant everywhere for every region \(A\) and every countable everywhere dense set \(D\) in \(\mathbb{R}\).
To this, we also add that
**Definition 2.4** ([9]).: A category base \((X,\mathcal{C})\)
1. satisfies the countable chain condition (c.c.c) if every subfamily of mutually disjoint regions is at most countable.
2. is linearly invariant if \(X=\mathbb{R}\), \(\mathcal{C}\) is translation invariant and for every \(E\in\mathcal{C}\), \(\lambda(\neq 0)\in\mathbb{R}\), we have \(\lambda E\in\mathcal{C}\).
**Theorem 2.5**.: _Suppose the continuum hypothesis holds. Then in any point-meager base (\(\mathbb{R},\mathcal{C}\)) satisfying c.c.c, where \(\mathcal{C}\) is a linearly invariant family having cardinality \(\leq 2^{\aleph_{0}}\) and such that \(\mathbb{R}\) is abundant, we can construct sets \(E\) and \(F\) such that_
1. _both_ \(E\) _and_ \(F\) _are rare sets in this category base._
2. _both_ \(E\) _and_ \(F\) _are vector spaces over the field_ \(\mathbb{Q}\) _of rationals._
3. \(\mathbb{R}=E+F\) _and_ \(E\cap F=\{0\}\)_._
Proof.: Let \(\Omega\) denote the least ordinal number representing \(2^{\aleph_{0}}\), the cardinality of the continuum. Since \(\mathcal{C}\) satisfies the countable chain condition, every meager set in this category base is contained in a meager \(\mathcal{K}_{\delta\sigma}\) set, where \(\mathcal{K}\) denotes the family of all sets whose complements are members of \(\mathcal{C}\) (Th 5, II, Ch 1, [9]). Moreover, as the category base is point-meager and the cardinality of \(\mathcal{C}\) is at most \(2^{\aleph_{0}}\), by (Th 1, I, Ch 2, [9]) the cardinality of the family of all meager \(\mathcal{K}_{\delta\sigma}\) sets is \(2^{\aleph_{0}}\).
Let \(\{z_{\alpha}\}_{\alpha<\Omega}\) and \(\{K_{\alpha}\}_{\alpha<\Omega}\) be enumerations of the points and the meager \(\mathcal{K}_{\delta\sigma}\) sets in \(\mathbb{R}\). Using transfinite recursion, we construct two \(\Omega\)-sequences \(\{X_{\alpha}\}_{\alpha<\Omega}\), \(\{Y_{\alpha}\}_{\alpha<\Omega}\) of subsets of \(\mathbb{R}\) in the following manner:
Suppose, for any ordinal \(\alpha<\Omega\), the partial \(\alpha\)-sequences \(\{X_{\beta}:\beta<\alpha\}\) and \(\{Y_{\beta}:\beta<\alpha\}\) are already constructed. Since \((\mathbb{R},\mathcal{C})\) is point-meager and \(\mathcal{C}\) is linearly invariant, under the continuum hypothesis both \(X^{\prime}+\mathbb{Q}(\bigcup\limits_{\beta<\alpha}K_{\beta})\) and \(Y^{\prime}+\mathbb{Q}(\bigcup\limits_{\beta<\alpha}K_{\beta})\) are meager in \(\mathbb{R}\), where \(X^{\prime}=\bigcup\limits_{\beta<\alpha}X_{\beta}\) and \(Y^{\prime}=\bigcup\limits_{\beta<\alpha}Y_{\beta}\). Hence \((\mathbb{R}\backslash\{X^{\prime}+\mathbb{Q}(\bigcup\limits_{\beta<\alpha}K_{\beta})\})\cap(z_{\alpha}-(\mathbb{R}\backslash\{Y^{\prime}+\mathbb{Q}(\bigcup\limits_{\beta<\alpha}K_{\beta})\}))\neq\phi\), and therefore there exist \(x_{\alpha}\in\mathbb{R}\backslash\{X^{\prime}+\mathbb{Q}(\bigcup\limits_{\beta<\alpha}K_{\beta})\}\) and \(y_{\alpha}\in\mathbb{R}\backslash\{Y^{\prime}+\mathbb{Q}(\bigcup\limits_{\beta<\alpha}K_{\beta})\}\) such that \(z_{\alpha}=x_{\alpha}+y_{\alpha}\).
We set \(X_{\alpha}=\mathbb{Q}x_{\alpha}+X^{\prime}\) and \(Y_{\alpha}=\mathbb{Q}y_{\alpha}+Y^{\prime}\).
Proceeding in this manner, we get two \(\Omega\)-sequences \(\{X_{\alpha}\}_{\alpha<\Omega}\), \(\{Y_{\alpha}\}_{\alpha<\Omega}\) of sets in \(\mathbb{R}\) with the following properties :
1. for each \(\alpha<\Omega\), \(\operatorname{card}(X_{\alpha})=\operatorname{card}(Y_{\alpha})=\operatorname{ card}(\alpha)+\omega\).
2. if \(\beta<\alpha\), then \(K_{\beta}\cap X_{\beta}=K_{\beta}\cap X_{\alpha}\) and \(K_{\beta}\cap Y_{\beta}=K_{\beta}\cap Y_{\alpha}\).
Setting \(E=\bigcup\limits_{\alpha<\Omega}X_{\alpha}\) and \(F=\bigcup\limits_{\alpha<\Omega}Y_{\alpha}\), we find that \(E\) and \(F\) are rare sets which are also vector spaces over \(\mathbb{Q}\) such that \(\mathbb{R}=E+F\).
We know that in any vector space over a field, if \(U\) and \(V\) are subspaces, then there exists a subspace \(V^{\prime}\subseteq V\) such that \(U+V=U+V^{\prime}\) and \(U\cap V^{\prime}=\{0\}\). Since \(U\cap V\) is a subspace, \(V^{\prime}\) may be taken as a subspace of \(V\) complemented to \(U\cap V\). Thus finally in the above procedure, we may assume that \(E\cap F=\{0\}\). This proves the theorem.
It may be noted here that any perfect base is a point-meager Baire base (Th 3, I, Ch 5, [9]), so if it is linearly invariant, then Theorem 2.5 can be reproduced in an exactly similar manner in any Polish topological vector space.
**Theorem 2.6**.: _Let \(X\) be a Polish topological vector space. Then, under the continuum hypothesis, in any perfect base (\(X,\mathcal{C}\)) satisfying c.c.c, where \(\mathcal{C}\) is a linearly invariant family having cardinality \(\leq 2^{\aleph_{0}}\), there exist sets \(R_{1}\) and \(R_{2}\) such that_
1. _both_ \(R_{1}\) _and_ \(R_{2}\) _are rare sets._
2. _both_ \(R_{1}\) _and_ \(R_{2}\) _are vector spaces over the field_ \(\mathbb{Q}\) _of rationals._
3. \(X=R_{1}+R_{2}\) _and_ \(R_{1}\cap R_{2}=\{0\}\)_._
**Definition 2.7** ([9]).: Two category bases \((X,\mathcal{C})\) and \((X,\mathcal{D})\) are said to be equivalent if \(\mathcal{M}(\mathcal{C})=\mathcal{M}(\mathcal{D})\) and \(\mathcal{B}(\mathcal{C})=\mathcal{B}(\mathcal{D})\).
**Definition 2.8** ([9]).: Two category bases \((X,\mathcal{C})\) and \((X,\mathcal{D})\) are called complementary if \(X\) is representable as the union of two disjoint sets \(M\) and \(N\), where \(M\) is \(\mathcal{C}\)-meager and \(N\) is \(\mathcal{D}\)-meager.
In our next result, we use the following theorem by Sander:
**Theorem 2.9** ([12]).: _Two perfect translation bases (\(X,\mathcal{C}\)) and (\(X,\mathcal{D}\)) in any topological group which is a complete separable space and without isolated points are non-equivalent iff they are complementary._
The above theorem is also valid for any topological group \(X\) with the property that each topologically dense subset of \(X\) contains a countable dense subset [12].
**Theorem 2.10**.: _Let \(X\) be a Polish topological vector space. Then, under the continuum hypothesis, in any perfect translation base (\(X,\mathcal{C}\)) satisfying c.c.c, where \(\mathcal{C}\) is linearly invariant and has cardinality \(\leq 2^{\aleph_{0}}\), there exist two sets such that_
1. _both are rare sets._
2. _their sum is a Bernstein set in_ \(X\)_._
3. _they are meager in any perfect translation base which is non-equivalent to (_\(X,\mathcal{C}\)_)._
Proof.: According to Theorem 2.6, there exist rare sets \(R_{1}\) and \(R_{2}\) which are vector spaces over \(\mathbb{Q}\) and \(X=R_{1}+R_{2}\), \(R_{1}\cap R_{2}=\{0\}\). Let \(\{P_{\alpha}\}_{\alpha<\Omega}\) be an enumeration of all nonempty perfect sets in \(X\) (\(X\) being Polish, we know that the cardinality of all nonempty perfect sets in \(X\) is \(c\)). It may be assumed that each of the subfamilies \(\{P_{\alpha}:\alpha<\Omega,\)\(\alpha\) is an even ordinal\(\}\) and \(\{P_{\alpha}:\alpha<\Omega,\)\(\alpha\) is an odd ordinal\(\}\) also contains all nonempty perfect subsets of \(X\).
Using transfinite recursion, we now construct an injective \(\Omega\)-sequence of points in \(R_{2}\) in the following manner:
Suppose that, for an ordinal \(\alpha<\Omega\), the partial sequence \(\{y_{\beta}:\beta<\alpha\}\) is already constructed. As \(\underset{\beta<\alpha}{\bigcup}(R_{1}+y_{\beta})\) is a rare set (by the continuum hypothesis and the translation invariance of \(\mathcal{C}\)) and every rare set in a perfect base satisfying c.c.c is Marczewski singular (Th 8, II, Ch 5, [9]), we have \(P_{\alpha}-\underset{\beta<\alpha}{\bigcup}(R_{1}+y_{\beta})\neq\phi\). We choose a point \(z\) from the set \(P_{\alpha}-\underset{\beta<\alpha}{\bigcup}(R_{1}+y_{\beta})\). By virtue of the equality \(X=R_{1}+R_{2}\), there exists \(y\in R_{2}\) such that \(z\in R_{1}+y\). We put \(y_{\alpha}=y\).
We define \(T_{2}=\{y_{\alpha}:\alpha<\Omega\), \(\alpha\) is an even ordinal\(\}\) and \(T_{3}=\{y_{\alpha}:\alpha<\Omega\), \(\alpha\) is an odd ordinal\(\}\).
Then both \(T_{2}\) and \(T_{3}\) have cardinality \(2^{\aleph_{0}}\) and according to the construction, they are also rare sets. Moreover, for any even ordinal \(\alpha<\Omega\),
\[P_{\alpha}\cap(R_{1}+T_{2})\supseteq P_{\alpha}\cap(R_{1}+y_{\alpha})\neq\phi\]
and likewise for any odd ordinal \(\alpha<\Omega\),
\[P_{\alpha}\cap(R_{1}+T_{3})\supseteq P_{\alpha}\cap(R_{1}+y_{\alpha})\neq\phi.\]
As \(R_{1}\cap R_{2}=\{0\}\), so \((R_{1}+T_{2})\cap(R_{1}+T_{3})=\phi\) and both \(R_{1}+T_{2}\) and \(R_{1}+T_{3}\) are Bernstein sets in \(X\).
Let \((X,\mathcal{D})\) be any perfect translation base such that \((X,\mathcal{C})\) and \((X,\mathcal{D})\) are nonequivalent. Then they are complementary (by Th 2.9) and accordingly \(X\) can be expressed as the disjoint union of sets \(M\) and \(N\) where \(M\) is \(\mathcal{C}\)-meager and \(N\) is \(\mathcal{D}\)-meager. Obviously then the two rare sets constructed above are both \(\mathcal{D}\)-meager.
Hence the theorem.
Thus we find from the above theorem that, under the continuum hypothesis, in any Polish topological vector space \(X\) there exist, for any perfect base \((X,\mathcal{C})\) where \(\mathcal{C}\) is linearly invariant, has cardinality at most \(2^{\aleph_{0}}\) and satisfies c.c.c, two rare sets whose Minkowski sum is non-Baire in any perfect base. In addition, if \((X,\mathcal{C})\) is also a translation base, then each of these rare sets is also meager in any perfect translation base which is non-equivalent (or, complementary) to \((X,\mathcal{C})\).
|
2301.12308 | Correcting models with long-range electron interaction using generalized
cusp conditions | Sources of energy errors resulting from the replacement of the physical
Coulomb interaction by its long-range $\mathrm{erfc}(\mu r)/r$ approximation
are explored. It is demonstrated that the results can be dramatically improved
and the range of $\mu$ giving energies within chemical accuracy limits
significantly extended, if the generalized cusp conditions are used to
represent the wave function at small $r$. The numerical results for
two-electron harmonium are presented and discussed. | Andreas Savin, Jacek Karwowski | 2023-01-28T23:18:23Z | http://arxiv.org/abs/2301.12308v1 | # Correcting models with long-range electron interaction using generalized cusp conditions
###### Abstract
Sources of energy errors resulting from the replacement of the physical Coulomb interaction by its long-range \(\mathrm{erfc}(\mu\,r)/r\) approximation are explored. It is demonstrated that the results can be dramatically improved and the range of \(\mu\) giving energies within chemical accuracy limits significantly extended, if the generalized cusp conditions are used to represent the wave function at small \(r\). The numerical results for two-electron harmonium are presented and discussed.
range separation; long-range interaction; short-range interaction; cusp conditions; Schrodinger equation; harmonium; chemical accuracy
## The problem to be solved
We have a model system, \(H(\mathbf{R};\mu)\), and a corresponding Schrodinger equation,
\[H(\mathbf{R};\mu)\Psi(\mathbf{R},\boldsymbol{\sigma};\mu)=E(\mu)\Psi(\mathbf{R}, \boldsymbol{\sigma};\mu). \tag{1}\]
The system is composed of \(N\) electrons confined by an external potential; \(\mathbf{R}\) and \(\boldsymbol{\sigma}\) stand, respectively, for their orbital and spin coordinates. All quantities characterizing the system (e.g. energy or wave function) depend on the external potential, but we show this dependence explicitly only when the form of this potential is specified (e.g. the dependence on \(\omega\) in the section "The model system").
The interaction between electrons is described by a \(\mu\)-dependent model potential \(v_{\mathrm{int}}(r;\mu)\):
* \(\mu=0\): there is no interaction between electrons, so \(v_{\mathrm{int}}(r;0)=0\),
* \(\mu=\infty\): we have the physical, Coulomb interaction, so \(v_{\mathrm{int}}(r;\infty)=1/r\),
* \(\mu\in(0,\infty)\): we choose \[v_{\mathrm{int}}(r;\mu)=w(r;\mu)=\frac{\mathrm{erf}(\mu r)}{r},\] (2) where \(r=r_{12}=|\mathbf{r}_{1}-\mathbf{r}_{2}|\). Exploring other forms of interaction may be both interesting and useful as, for example, in ref 1.
To simplify the notation, we drop \(\mu\) when \(\mu=\infty\).
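As a quick numerical check of the limiting behaviour of \(w(r;\mu)\) in eq (2) (added for illustration; the values of \(\mu\) and \(r\) below are arbitrary): at small \(r\) the long-range interaction remains finite, \(w(r;\mu)\to 2\mu/\sqrt{\pi}\), while for \(\mu r\gg 1\) it approaches the Coulomb potential \(1/r\).

```python
import numpy as np
from scipy.special import erf

def w(r, mu):
    # long-range interaction erf(mu*r)/r of eq (2)
    return erf(mu * r) / r

mu = 0.5
print(w(1e-8, mu), 2 * mu / np.sqrt(np.pi))   # small-r limit: finite, no Coulomb singularity
print(w(10.0, mu), 1.0 / 10.0)                # mu*r >> 1: essentially the Coulomb 1/r
```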
We assume that the solutions of eq (1) are accessible for selected finite values of \(\mu\). However, we are not interested in the model system energy, \(E(\mu)\). We aim at determining \(E\) corresponding to the physical interaction. Stated differently, we are interested in
\[\overline{E}(\mu)=E-E(\mu)\,\equiv\,-\Delta_{0}E(\mu), \tag{3}\]
where \(\Delta_{0}E(\mu)\) is referred to as the _error of the energy of the model system_.
To get an idea about the change of \(\overline{E}(\mu)\) with respect to \(\mu\), we show in Figure 1 some situations where \(E\) is known to arbitrary precision. By construction, \(\overline{E}(\infty)=0\). As \(\mu\) decreases, the interaction weakens and disappears for \(\mu=0\). This effect, not compensated by any change in the external potential, leads to the absolute values of \(\overline{E}(\mu)\) increasing with decreasing \(\mu\) and becoming very large for sufficiently small \(\mu\).
In this paper we explore how much one can lower the values of \(\mu\) and, by correcting the model, still retain approximations of \(\overline{E}\) within the _chemical accuracy_ (\(\pm\,1\,\mathrm{kcal/mol}\)) error bars.
## II Correcting Models
### Energy extrapolation - a historic solution
A way proposed in ref [2] is to expand \(\overline{E}(\mu)\) in the following basis,
\[\overline{E}(\mu)\approx\sum_{k=1}^{M}\tilde{e}_{k}\chi_{k}(\mu). \tag{4}\]
Here \(\chi_{k}(\mu)\), \(k=1,\ldots,M\) are some basis functions, and \(\tilde{e}_{k}\) are coefficients to be determined. There are different ways to determine these coefficients, once the basis functions are given. One option is to obtain them from the derivatives of \(E(\mu)\) with respect to \(\mu\). This corresponds to (generalized) Taylor expansions, or to perturbation theory. Another possibility is calculating \(E(\mu)\) for several values of \(\mu\), and then matching the results to the expansion. This method is called _energy extrapolation_[2]: from model information, we aim to reach the physical result.
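A minimal numerical sketch of this matching is given below; the basis functions \(\chi_{k}(\mu)\) and the model energies used are illustrative assumptions only, not values taken from the references. With \(\overline{E}(\mu)\approx\sum_{k}\tilde{e}_{k}\chi_{k}(\mu)\) and data \(E(\mu_{i})\) at a few finite \(\mu_{i}\), the relations \(E-\sum_{k}\tilde{e}_{k}\chi_{k}(\mu_{i})=E(\mu_{i})\) are linear in the unknowns \(E,\tilde{e}_{1},\ldots,\tilde{e}_{M}\) and can be solved by least squares.

```python
import numpy as np

mu = np.array([0.5, 1.0, 2.0, 4.0])                # model interaction parameters
E_model = np.array([-2.10, -2.05, -2.02, -2.005])  # E(mu_i), hypothetical numbers

def chi(mu, k):
    # illustrative basis for E_bar(mu); the functional form is an assumption
    return mu ** (-(k + 1))

M = 2                                              # number of basis functions in eq (4)
# unknowns x = (E, e_1, ..., e_M); rows: E - sum_k e_k chi_k(mu_i) = E(mu_i)
A = np.column_stack([np.ones_like(mu)] + [-chi(mu, k) for k in range(M)])
x, *_ = np.linalg.lstsq(A, E_model, rcond=None)
E_extrapolated = x[0]                              # estimate of the physical energy E
```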
One way to achieve our aim is to introduce more parameters into the Hamiltonian [3], e.g.,
\[H({\bf R};\lambda,\mu)=H({\bf R};\mu)+\lambda\left[H({\bf R})-H({\bf R};\mu) \right]. \tag{5}\]
The eigenvalue and the corresponding eigenfunction of \(H({\bf R};\lambda,\mu)\) are, respectively, \(E(\lambda,\mu)\) and \(\Psi({\bf R},{\mathbf{\sigma}};\lambda,\mu)\) - notice that \(E(\mu)=\left.E(\lambda,\mu)\right|_{\lambda=0}\). In the Hamiltonian (5) the interaction
Figure 1: Error of the energy of the model system, \(\Delta_{0}E(\mu)\), for the lowest energy state of harmonium with \(\omega=1/2\) (red curves), and \(\omega=1\) (thin blue curve), for \(\ell=0\), full curves; \(\ell=1\), dot-dashed curve; \(\ell=2\), dashed curve.
potential is
\[v_{\rm int}(r;\lambda,\mu)=w(r;\mu)+\lambda\,\overline{w}(r;\mu),\]
where
\[\overline{w}(r;\mu)=\frac{1}{r}-w(r;\mu)=\frac{1-{\rm erf}(\mu r)}{r}=\frac{{ \rm erfc}(\mu r)}{r}. \tag{6}\]
Therefore,
\[v_{\rm int}(r;\lambda,\mu)=(1-\lambda)\frac{{\rm erf}(\mu r)}{r}+\frac{ \lambda}{r} \tag{7}\]
We see that we can reach the physical result either with \(\lambda=1\), or with \(\mu=\infty\). "Shooting" from different points to the same target may simplify our task. However, this is not further discussed in this paper.
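For reference, the following minimal Python sketch (our own, not code from the paper) collects the interaction potentials introduced above, namely \(w(r;\mu)\) of eq (2), its complement \(\overline{w}(r;\mu)\) of eq (6), and the two-parameter interpolation of eq (7), and checks that both limits recover the Coulomb interaction.

```python
# Minimal sketch of the model interactions used in the text.
import numpy as np
from scipy.special import erf, erfc

def w(r, mu):
    """Long-range model interaction, eq (2)."""
    return erf(mu * r) / r

def wbar(r, mu):
    """Complementary short-range part, eq (6)."""
    return erfc(mu * r) / r

def v_int(r, lam, mu):
    """Two-parameter interaction of eq (7)."""
    return (1.0 - lam) * w(r, mu) + lam / r

r = 0.8
print(np.isclose(v_int(r, 1.0, 0.5), 1.0 / r))        # lambda = 1 gives the Coulomb 1/r
print(np.isclose(v_int(r, 0.0, 1e6), 1.0 / r))        # mu -> infinity approaches Coulomb
print(np.isclose(w(r, 2.0) + wbar(r, 2.0), 1.0 / r))  # eq (6): w + wbar = 1/r
```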
Energy extrapolation has an important problem: we do not know how to choose \(\chi_{k}(\mu)\). What makes the problem worse is that we are not willing to use many basis functions. Ideally, we should use a single function, that is, perform a single model calculation, \(M=1\) in eq (4).
### The adiabatic connection
For \(\|\Psi({\bf R},\mathbf{\sigma};\lambda,\mu)\|=1\), the Hellmann-Feynman theorem yields
\[\partial_{\lambda}E(\lambda,\mu)=\langle\Psi({\bf R},\mathbf{\sigma} ;\lambda,\mu)|\partial_{\lambda}H({\bf R};\lambda,\mu)|\Psi({\bf R},\mathbf{\sigma};\lambda,\mu)\rangle=\langle\overline{w}(\lambda,\mu)\rangle, \tag{8}\]
where
\[\langle\overline{w}(\lambda,\mu)\rangle=\langle\Psi({\bf R},\mathbf{ \sigma};\lambda,\mu)|\overline{W}({\bf R};\mu)|\Psi({\bf R},\mathbf{ \sigma};\lambda,\mu)\rangle, \tag{9}\]
and
\[\overline{W}({\bf R};\mu)\equiv H({\bf R})-H({\bf R};\mu)=\sum_{1\leq i<j\leq N }\overline{w}(r_{ij};\mu), \tag{10}\]
By integrating eq (8) over \(\lambda\) one obtains that 1
Footnote 1: Notice that \(E\) could have also been obtained by integration over \(\mu\), as \(E=E(1,\mu)=E(\lambda,\infty)\).
\[\overline{E}(\mu)=\int_{0}^{1}\langle\overline{w}(\lambda,\mu)\rangle\,d\lambda. \tag{11}\]
The integrand in eq (11), an integral over \(3N\)-dimensional configuration space and \(2^{N}\)-dimensional spin space, can be reduced to a one-dimensional radial integral. Exploiting
the antisymmetry of the wave function and integrating over spin, and over coordinates of electrons \(3,4,\ldots,N\), yields [4]
\[\langle\overline{w}(\lambda,\mu)\rangle=\int_{\mathbb{R}^{6}}\overline{w}(r_{12}; \mu)\Gamma_{\lambda,\mu}(\mathbf{r}_{1},\mathbf{r}_{2})d\mathbf{r}_{1}d \mathbf{r}_{2}, \tag{12}\]
where
\[\Gamma_{\lambda,\mu}(\mathbf{r}_{1},\mathbf{r}_{2})=\binom{N}{2}\sum_{\sigma_{ 1},\ldots,\sigma_{N}}\int_{\mathbb{R}^{3N-6}}|\Psi(\mathbf{R},\boldsymbol{ \sigma};\lambda,\mu)|^{2}\,d\mathbf{r}_{3}d\mathbf{r}_{4}\cdots d\mathbf{r}_{N}\]
is the diagonal part of the second-order reduced density matrix, 2RDM, corresponding to \(\Psi(\mathbf{R},\boldsymbol{\sigma};\lambda,\mu)\); the sum is extended over spin coordinates of all electrons.
After introducing the relative-motion variables
\[\mathbf{r}=\mathbf{r}_{1}-\mathbf{r}_{2},\quad\mathbf{r}^{+}=\frac{\mathbf{r} _{1}+\mathbf{r}_{2}}{2}, \tag{13}\]
performing integration over \(\mathbf{r}^{+}\), and expressing \(\mathbf{r}\) in spherical coordinates, \(\mathbf{r}(r,\theta,\phi)\), eq (12) can be rewritten as
\[\langle\overline{w}(\lambda,\mu)\rangle=\int_{\mathbb{R}^{6}}\overline{w}(r; \mu)\Gamma_{\lambda,\mu}(\mathbf{r},\mathbf{r}^{+})d\mathbf{r}d\mathbf{r}^{+} =\int_{\mathbb{R}^{3}}\overline{w}(r;\mu)\gamma(\mathbf{r};\lambda,\mu)d \mathbf{r}, \tag{14}\]
where \(d\mathbf{r}=r^{2}\,dr\,\sin\theta\,d\theta\,d\phi\), \(r=r_{12}=|\mathbf{r}_{1}-\mathbf{r}_{2}|\), and \(\gamma(\mathbf{r};\lambda,\mu)\) is the diagonal part of the first-order reduced density matrix, 1RDM. Since in the coordinate space \(\overline{w}(r;\mu)\) depends on the radial coordinate \(r\) only, we can do the spherical averaging. In effect the integrand of eq (11) is simplified to
\[\langle\overline{w}(\lambda,\mu)\rangle=\int_{0}^{\infty}\,\overline{w}(r;\mu )\tilde{\gamma}(r;\lambda,\mu)r^{2}dr, \tag{15}\]
where
\[\tilde{\gamma}(r;\lambda,\mu)=\int_{0}^{2\pi}\int_{0}^{\pi}\gamma(\mathbf{r} ;\lambda,\mu)\sin\theta\,d\theta\,d\phi.\]
The adiabatic connection defined in eq (11) carries no practical information, as it requires the knowledge of \(\langle\overline{w}(\lambda,\mu)\rangle\) for all values of \(\lambda\) while in the present approach it is known only for \(\lambda=0\).
In the limit \(r\to 0\), \(\overline{w}(r;\mu)\sim 1/r\). Therefore, even for large \(\mu\), \(\overline{w}(r;\mu)\) is non-negligible if \(r\) is small enough. As is shown hereafter, the information necessary for correcting the model at small \(r\) can be derived from the generalized cusp conditions (GCC).
### Generalized cusp conditions
The information about the behavior of the wave function in the vicinity of the coalescence point, i.e., for \(r=r_{12}\ll 1\), can be derived from general properties of the Schrodinger equation at \(r\to 0\). In general, the approach is based on the expansion of the wave function and of the potential as power series in \(r\) and deriving conditions that have to be fulfilled by the expansion coefficients in order to retain the consistency of the Hamiltonian eigenvalue problem. The simplest and most commonly known is Kato's cusp condition [5], which can be derived from the requirement that in the case of the electrostatic interaction the local energy at \(r=0\) is nonsingular. Higher order (generalized) cusp conditions can be obtained from the demand that the local energies generated by powers of the Hamiltonian are nonsingular (energy-independent conditions) and ratios of the local energies generated by the consecutive powers are constant (energy-dependent conditions) [6]. Alternatively one can use the expansion of the wave function in powers of \(r\) and require that the Schrodinger equation is satisfied [7; 8; 9]. Both approaches are equivalent, but the conditions derived from the former one, though more complicated, have a more transparent physical meaning.
In this paper the GCCs are applied to describe the \(r\)-dependence of the wave function in the area of small \(r\), where the model interaction potential departs from the physical one.
## III The model system
The simplest nontrivial model system containing one pair of electrons is composed of three particles: two electrons interacting by a repulsive model potential, and a third particle, "nucleus", which interacts with the electrons by an attractive force. Commonly known examples of such systems are the helium atom, where the nucleus attracts the electrons by the Coulomb force, and harmonium (the Hooke atom), where the nucleus attracts the electrons by the Hooke force. After the separation of the center of mass, the system is reduced to two interacting particles in an external potential \(v_{\rm ext}\). In the case of harmonium,
\[v_{\rm ext}(r_{1},r_{2};\omega)=\frac{\omega^{2}}{2}\left(r_{1}^{2}+r_{2}^{2} \right). \tag{16}\]
The potential depends on a parameter, \(\omega\), which defines the strength of the confinement. In the case of quantities that depend on this potential, the dependence on \(\omega\) is explicitly
shown. For example, \(E(\omega,\mu)\) stands for the special case of \(E(\mu)\), corresponding to the external potential (16).
To our knowledge, harmonium is the only bound system containing a pair of interacting electrons for which the Schrodinger equation is known to be separable. Apart from the three-dimensional free-particle equation describing the motion of the center of mass, the two-particle Schrodinger equation for harmonium is separable into six one-dimensional equations - five are exactly solvable, and the sixth one can be solved numerically to an arbitrary precision (for some specific values of \(\omega\) it is also solvable analytically). It is important to note that the separability holds for all forms of the interaction potential including the form given by \(v_{\rm int}(r;\lambda,\mu)\). 2 Therefore, harmonium is particularly suitable for pilot studies of the consequences of using various non-Coulombic forms of the interaction potentials. Motivated by these observations, at this stage, we explore our problem using harmonium as the model system.
Footnote 2: In principle the term ‘harmonium’ refers to two confined Coulomb-interacting electrons. In this paper we extend this term to the case of model potentials \(v_{\rm int}\).
The Schrodinger equation for harmonium
\[\left[T({\bf r}_{1},{\bf r}_{2})+v_{\rm ext}(r_{1},r_{2};\omega)+v_{\rm int}( r;\lambda,\mu)-{\cal E}({\mathfrak{p}})\right]\Psi({\bf r}_{1},{\bf r}_{2};{ \mathfrak{p}})=0, \tag{17}\]
where \(T({\bf r}_{1},{\bf r}_{2})\) is two-particle kinetic energy operator, depends on three parameters, collectively denoted \({\mathfrak{p}}=\{\omega,\lambda,\mu\}\), and \(\Psi({\bf r}_{1},{\bf r}_{2};{\mathfrak{p}})\) is the orbital part of the two-electron wave function. After transformation (13), eq (17) can by split into two spherically-symmetric equations. The first one depends on the interaction potential and describes the relative motion of electrons:
\[\left[-\Delta_{\bf r}+v(r;{\mathfrak{p}})-E({\mathfrak{p}})\right]\psi_{\rm rel }({\bf r};{\mathfrak{p}})=0, \tag{18}\]
where
\[v(r;{\mathfrak{p}})=\frac{\omega^{2}\,r^{2}}{4}+v_{\rm int}(r;\lambda,\mu)= \frac{\omega^{2}\,r^{2}}{4}+(1-\lambda)\frac{{\rm erf}(\mu r)}{r}+\frac{ \lambda}{r}. \tag{19}\]
The second equation describes the motion of the center of mass of the electron pair in the external potential:
\[\left[-\frac{\Delta_{{\bf r}^{+}}}{4}+\omega^{2}(r^{+})^{2}-{\mathfrak{E}}( \omega)\right]\psi_{\rm cm}({\bf r}^{+};\omega)=0, \tag{20}\]
where \(r^{+}=|{\bf r}^{+}|\). By construction, we have
\[{\cal E}({\mathfrak{p}})=E({\mathfrak{p}})+{\mathfrak{E}}(\omega),\ \ \ \ \Psi({\bf r}_{1},{\bf r}_{2};{\mathfrak{p}})=\psi_{\rm rel}({\bf r};{ \mathfrak{p}})\,\psi_{\rm cm}({\bf r}^{+};\omega). \tag{21}\]
The interaction potential appears only in eq (18). So, for our study, we deal with this equation only. The potential is spherically symmetric. Therefore,
\[\psi_{\rm rel}({\bf r};{\mathfrak{p}})=\psi_{\ell}(r;{\mathfrak{p}})Y_{\ell m}( \theta,\phi), \tag{22}\]
where \(\psi_{\ell}(r;{\mathfrak{p}})\) is determined by the radial equation
\[\left[-\frac{d^{2}}{dr^{2}}+\frac{\ell(\ell+1)}{r^{2}}+v(r;{\mathfrak{p}})-E({ \mathfrak{p}})\right][r\,\psi_{\ell}(r;{\mathfrak{p}})]=0. \tag{23}\]
A two-electron wave function \(\Psi({\bf r}_{1},{\bf r}_{2};{\mathfrak{p}})\) that is symmetric/antisymmetric with respect to the transposition of \(({\bf r}_{1},{\bf r}_{2})\) corresponds to a singlet/triplet. As one can see, singlet states correspond to the even parity (even \(\ell\)) spherical harmonics in eq (22), and triplet states to the odd ones.
### Generalized cusp conditions for the model system
Using [10]
\[\frac{{\rm erf}(\mu r)}{r}=\frac{2\mu}{\sqrt{\pi}}\left(1-\frac{(\mu r)^{2}}{ 3\cdot 1!}+\frac{(\mu r)^{4}}{5\cdot 2!}-\frac{(\mu r)^{6}}{7\cdot 3!}+\cdots\right)\]
one can expand the potential (19) as
\[v(r;{\mathfrak{p}})=\sum_{i=-1}^{\infty}v_{i}({\mathfrak{p}})\,r^{i}, \tag{24}\]
with
\[v_{-1}=\lambda,\ \ v_{0}=(1-\lambda)\frac{2\mu}{\sqrt{\pi}},\ \ v_{1}=0,\ \ v_{2}=\frac{\omega^{2}}{4}-(1-\lambda)\frac{2\mu^{3}}{3\sqrt{\pi}},\ \ v_{3}=0,\ldots. \tag{25}\]
The wave function, for small \(r\), can be represented by the following power series
\[\psi_{\ell}(r;{\mathfrak{p}})\approx r^{\ell}\sum_{k=0}^{K}c_{k}({\mathfrak{p }})r^{k}=c_{0}({\mathfrak{p}})\,r^{\ell}\sum_{k=0}^{K}\widetilde{c}_{k}({ \mathfrak{p}})\,r^{k}, \tag{26}\]
where \(\widetilde{c}_{k}({\mathfrak{p}})=c_{k}({\mathfrak{p}})/c_{0}({\mathfrak{p}})\). General formulas for GCC are given in refs [6, 7, 8, 9]. Here we give equations defining \(\widetilde{c}_{k}\) for \(k\leq 4\):
\[A_{1}\widetilde{c}_{1}+v_{-1}=0,\] \[A_{2}\widetilde{c}_{2}+v_{-1}\widetilde{c}_{1}-\epsilon=0,\] \[A_{1}A_{3}\widetilde{c}_{3}+(A_{1}+A_{2})v_{-1}\widetilde{c}_{2}+v_{-1}^{2}\widetilde{c}_{1}+A_{1}v_{1}=0, \tag{27}\] \[A_{2}A_{4}\widetilde{c}_{4}+(A_{2}+A_{3})v_{-1}\widetilde{c}_{3}+v_{-1}^{2}\widetilde{c}_{2}+A_{2}v_{1}\widetilde{c}_{1}+(v_{-1}v_{1}+A_{2}v_{2}-\epsilon^{2})=0,\] \[\cdots\ \cdots\]
where \(A_{i}=-i(2\ell+i+1)\), and \(\epsilon=E-v_{0}\). Coefficients \(\widetilde{c}_{k}\), energy parameter \(\epsilon\), and coefficients \(v_{i}\) depend on the parameters \(\omega\), \(\lambda\) and \(\mu\). For simplicity, in eqs (27) this dependence has not been shown explicitly. All coefficients \(\widetilde{c}_{k}\) depend on \(\ell\) and on \(v_{-1}\). The coefficients \(\widetilde{c}_{k}\) with \(k\geq 3\) depend on \(v_{1}\). In general, \(v_{i}\) shows up in \(\widetilde{c}_{k}\) with \(k\geq i+2\) [6]. The external potential is proportional to \(r^{2}\) and vanishes at \(r=0\). Therefore, in expansion (26) the \(\omega\) dependence begins at \(\widetilde{c}_{4}\).
For the construction of \(\widetilde{c}_{k}\) with even values of \(k\), the state energy is needed. In these cases we use the expectation value of the Hamiltonian defined in eq (5):
\[E(\omega,\lambda,\mu)\,\approx\,E(\omega,\mu)+\lambda\,\langle\overline{w}(\omega,0,\mu)\rangle,\]
where \(E(\omega,\mu)=\left.E(\omega,\lambda,\mu)\right|_{\lambda=0}\). For the definition of \(\langle\overline{w}(\omega,\lambda,\mu)\rangle\) see eq (28). Numerical tests using the exact \(E(\omega,\lambda,\mu)\) have shown that the error introduced by this approximation is negligible in comparison to the other approximations made in the present paper.
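As a concrete illustration of how eqs (25) and (27) are used in practice, the sketch below (our own, not code from the paper) returns the first four ratios \(\widetilde{c}_{k}\) for given \(\omega\), \(\lambda\), \(\mu\), \(\ell\) and an energy estimate. The energy value in the example is the exact relative-motion energy of 1.25 hartree for \(\omega=1/2\) quoted later in the text.

```python
# Sketch of the low-order generalized cusp conditions of eq (27), with the
# potential coefficients of eq (25).
import numpy as np

def gcc_coefficients(omega, lam, mu, ell, energy):
    """Return (c1, c2, c3, c4) relative to c0, from eq (27)."""
    A = {i: -i * (2 * ell + i + 1) for i in (1, 2, 3, 4)}
    v_m1 = lam
    v0 = (1 - lam) * 2 * mu / np.sqrt(np.pi)
    v1 = 0.0
    v2 = omega**2 / 4 - (1 - lam) * 2 * mu**3 / (3 * np.sqrt(np.pi))
    eps = energy - v0
    c1 = -v_m1 / A[1]
    c2 = (eps - v_m1 * c1) / A[2]
    c3 = -((A[1] + A[2]) * v_m1 * c2 + v_m1**2 * c1 + A[1] * v1) / (A[1] * A[3])
    c4 = -((A[2] + A[3]) * v_m1 * c3 + v_m1**2 * c2 + A[2] * v1 * c1
           + (v_m1 * v1 + A[2] * v2 - eps**2)) / (A[2] * A[4])
    return c1, c2, c3, c4

# Coulomb case (lambda = 1): c1 = 1/(2(ell+1)), i.e. Kato's value 1/2 for ell = 0.
print(gcc_coefficients(omega=0.5, lam=1.0, mu=1.0, ell=0, energy=1.25)[0])
```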
### Dependence on \(\lambda\) and normalization
The GCC provide only the ratios \(c_{k}(\mathfrak{p})/c_{0}(\mathfrak{p})\equiv\widetilde{c}_{k}(\mathfrak{p})\). If \(\psi_{\ell}(r;\mathfrak{p})\) is known in the whole range of \(r\) then \(c_{0}(\mathfrak{p})\) can be determined by the normalization condition. In our case this approach is not applicable since only the small-\(r\) part of \(\psi_{\ell}(r;\mathfrak{p})\) is defined by the cusp conditions. But \(c_{0}(\mathfrak{p})\) appears as a prefactor in the approximation (26) for \(\psi_{\ell}(r;\mathfrak{p})\), and its value is necessary for any practical use of this approximation. Therefore, \(c_{0}(\mathfrak{p})\) has to be estimated in a different way, using only the information about the short-range behavior of the wave function.
We have to introduce additional information to deal with this issue. Let us first consider the dependence of \(c_{0}\) on \(\lambda\). It is needed for the adiabatic connection expression [eqs (11) and (15)]. We select \(\left\|\psi_{\mathrm{cm}}(\mathbf{r}^{+};\omega)\right\|=1\). Then the substitution of the wave function defined in eqs (21) and (22), and of its expansion (26), to eq (14) yields
\[\begin{split}\langle\overline{w}(\mathfrak{p})\rangle& =\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{\infty}\,\left|\psi_{ \mathrm{rel}}(\mathbf{r};\mathfrak{p})\right|^{2}\,\overline{w}(r;\mu)\,r^{2} dr\sin\theta\,d\theta\,d\phi\\ &=\int_{0}^{\infty}\,\left|\psi_{\ell}(r;\mathfrak{p})\right|^{2} \,\overline{w}(r;\mu)\,r^{2}dr,\\ &\approx\left|c_{0}(\mathfrak{p})\right|^{2}\int_{0}^{\infty} \left|r^{\ell+1}\sum_{k=0}^{K}\,\widetilde{c}_{k}(\mathfrak{p})r^{k}\right|^{ 2}\,\overline{w}(r;\mu)\,dr,\end{split} \tag{28}\]
and, according to eq (11),
\[\overline{E}(\omega,\mu)=\int_{0}^{1}\langle\overline{w}(\omega,\lambda,\mu) \rangle\,d\lambda. \tag{29}\]
For models close to the exact interaction (for large \(\mu\)) one can derive the following relationship [3; 11] (a derivation is given in the Appendix)
\[c_{0}(\omega,\lambda,\mu)\underset{\mu\rightarrow\infty}{\sim}\mathcal{N} \left(1+\frac{1-\lambda}{\sqrt{\pi}\mu}+O\left(\mu^{-2}\right)\right), \tag{30}\]
where \(\mathcal{N}\) is a still unknown normalization constant. As one can see,
\[\mathcal{N}=c_{0}(\omega,1,\mu)=c_{0}(\omega,\lambda,\infty)\equiv c_{0}( \omega).\]
Notice that \(c_{0}(\omega,1,\mu)\) and \(c_{0}(\omega,\lambda,\infty)\) do not depend, respectively, on \(\mu\) and on \(\lambda\) and are equal to \(c_{0}(\omega)\) corresponding to the physical (Coulomb) interaction potential.
We introduce the notation
\[\mathcal{I}_{K}(\omega,\lambda,\mu)=\left(1+\frac{1-\lambda}{\sqrt{\pi}\mu} \right)^{2}\int_{0}^{\infty}\left[r^{\ell+1}\sum_{k=0}^{K}\widetilde{c}_{k}( \omega,\lambda,\mu)r^{k}\right]^{2}\overline{w}(r,\mu)dr. \tag{31}\]
As stated at the beginning of this article, we know the model wave function at \(\lambda=0\). Therefore, we can calculate
\[\langle\overline{w}(\omega,0,\mu)\rangle=\int_{0}^{\infty}|\psi(r,\omega,0, \mu)|^{2}\overline{w}(r,\mu)r^{2}dr. \tag{32}\]
Since \(\mathcal{N}\) does not depend on \(\lambda\), we can assume that
\[\mathcal{N}^{2}\approx\,\frac{\langle\overline{w}(\omega,0,\mu)\rangle}{ \mathcal{I}_{K}(\omega,0,\mu)}. \tag{33}\]
Combining eqs (29), (28), and (33) we get
\[\overline{E}(\omega,\mu)\,\approx\,\frac{\langle\overline{w}(\omega,0,\mu)\rangle}{\mathcal{I}_{K}(\omega,0,\mu)}\,\int_{0}^{1}\mathcal{I}_{K}(\omega,\lambda,\mu)\,d\lambda. \tag{34}\]
### Corrections to the model
Assume that \(\omega\) is fixed and \(\lambda=0\). For simplicity, in the next equations, we skip these parameters. Also the \(r\)-dependence, in most cases obvious, is not shown explicitly. We designate \(\mu\)-dependence only, retaining the convention that we drop \(\mu\), if \(\mu=\infty\). Our model is defined by the analog of eq (1):
\[H(\mu)\psi(\mu)=E(\mu)\psi(\mu),\quad\text{ i.e. }\quad E(\mu)=\langle\psi(\mu)|H( \mu)|\psi(\mu)\rangle. \tag{35}\]
For the physical (Coulombic) interaction, \(\mu=\infty\) and we have \(E=\langle\psi|H|\psi\rangle\).
As we already said, the solutions of the crude model, \(E(\mu)\) and \(\psi(\mu)\), are known. The error of this model, \(\Delta_{0}E(\mu)\), has been defined in eq (3). Our aim is to add corrections to the model. The first-order perturbation correction to \(E(\mu)\) is
\[\langle\psi(\mu)|H-H(\mu)|\psi(\mu)\rangle=\langle\psi(\mu)|\overline{w}(\mu) |\psi(\mu)\rangle. \tag{36}\]
From here we have the correction:
\[\Delta_{\rm H}E(\mu)=\langle\psi(\mu)|H|\psi(\mu)\rangle-E=E(\mu)-E+\langle \overline{w}(\mu)\rangle. \tag{37}\]
Plots of \(\Delta_{0}E(\mu)\) and \(\Delta_{\rm H}E(\mu)\) are given in Figure 2. At \(\mu=0\), that is for the non-interacting system, \(\Delta_{0}E(\mu)\) is huge. For the ground state of harmonium with \(\omega=1/2\), \(E(\mu=0)=0.75\) instead of \(E=1.25\) hartree, giving \(\Delta_{0}E(\mu)=0.5\) hartree. But, \(\Delta_{0}E(\mu)\)
Figure 2: Errors of the model energy, \(\Delta_{0}E(\mu)\), eq (3), gray curve, and of the expectation value of the physical Hamiltonian, \(H\), with the model wave function, \(\psi(\mu)\), \(\Delta_{\rm H}E(\mu)\), eq (37), black curve, for the ground state of harmonium with \(\omega=0.5\). The inset zooms on the same curves (by two orders of magnitude), the horizontal dashed lines indicating the region of ”chemical accuracy” (\(\pm 1\) kcal/mol).
does not fall within the chemical accuracy error bars in the entire range of \(\mu\) shown - up to \(3\) bohr\({}^{-1}\). The improvement due to the first-order perturbation (without any reference to a specific structure of the wave function) is impressive. For \(\Delta_{\rm H}E(\mu)\) the chemical accuracy is reached if \(\mu\;>\;1.5\,{\rm bohr}^{-1}\). The error at \(\mu=0\) is reduced from \(0.5\) hartree to \(\sim 0.06\) hartree, that is, of the same order of magnitude as obtained with mean-field approximations (\(\sim 0.04\) hartree for Hartree-Fock or Kohn-Sham). The sign of the errors in Figure 2 is (i) negative for \(E(\mu)-E\) (as \(w(r,\mu)\leq 1/r\)), and (ii) positive for \(\langle\psi(\mu)|H|\psi(\mu)\rangle-E\) (by the variational principle).
Eq (34) may be rewritten as
\[\Delta_{\rm K}E(\mu)=E(\mu)-E+C_{\mathcal{I}}^{K}\,\langle\overline{w}(\mu)\rangle, \tag{38}\]
where
\[C_{\mathcal{I}}^{K}\,=\,\frac{1}{\mathcal{I}_{K}(\omega,0,\mu)}\,\int_{0}^{1}\mathcal{I}_{K}(\omega,\lambda,\mu)\,d\lambda. \tag{39}\]
Equation (38) can be interpreted as a generalization of eqs (3) and (37). The prefactor of \(\langle\overline{w}(\mu)\rangle\) in these two equations is, respectively, \(0\) and \(1\). Prefactor \(C_{\mathcal{I}}^{K}\) in eq (38) has been derived using some information about the structure of the wave function, specifically, GCC and the adiabatic connection. Therefore, one can expect that
\[|\Delta_{0}E(\mu)|\,\gg\,|\Delta_{\rm H}E(\mu)|\,\gg\,|\Delta_{\rm K}E(\mu)|\,. \tag{40}\]
The correctness of this expectation is demonstrated in the next section.
The integrals over \(r\) in eq (31) can be computed analytically [10]
\[\int_{0}^{\infty}\,{\rm erfc}(\mu\,r)r^{k}\,dr=\frac{k\,\Gamma(k/2)}{2\sqrt{ \pi}(k+1)\mu^{k+1}}. \tag{41}\]
Therefore \(\mathcal{I}_{K}(\omega,\lambda,\mu)\) can be expressed as an algebraic function of \(\omega\), \(\lambda\), and \(\mu\). Explicit formulas for \(\widetilde{c}_{k}(\mathfrak{p})\) can be easily deduced from eq (27). Consequently, the effort for computing the prefactor \(C_{\mathcal{I}}^{K}\) in eq (38) is negligible. Notice that in eq (41) \(\Gamma(k/2)\) introduces a fast increase of the absolute value of \(\mathcal{I}_{K}(\omega,\lambda,\mu)\) with \(K\) (the maximum power of \(r\) in the expansion).
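The sketch below cross-checks the moments of eq (41) against numerical quadrature and shows how \(\mathcal{I}_{K}\) of eq (31) can be assembled from them term by term. The expansion coefficients passed to `I_K` are placeholder values; in practice they come from the cusp conditions of eq (27).

```python
# Check of the erfc moments of eq (41), plus a term-by-term assembly of I_K, eq (31).
import numpy as np
from math import gamma, sqrt, pi
from scipy.integrate import quad
from scipy.special import erfc

def erfc_moment(k, mu):
    """Analytic value of int_0^infinity erfc(mu*r) r^k dr, eq (41)."""
    return k * gamma(k / 2) / (2 * sqrt(pi) * (k + 1) * mu ** (k + 1))

mu = 1.3
for k in (1, 2, 3):
    numeric, _ = quad(lambda r: erfc(mu * r) * r ** k, 0, np.inf)
    print(k, np.isclose(numeric, erfc_moment(k, mu)))

def I_K(ctilde, lam, mu, ell):
    """Eq (31), with the radial integral done analytically term by term via eq (41)."""
    pref = (1 + (1 - lam) / (sqrt(pi) * mu)) ** 2
    total = 0.0
    for i, ci in enumerate(ctilde):
        for j, cj in enumerate(ctilde):
            # wbar(r; mu) = erfc(mu*r)/r, so each term integrates r^(2(ell+1)+i+j-1)
            total += ci * cj * erfc_moment(2 * (ell + 1) + i + j - 1, mu)
    return pref * total

print(I_K(ctilde=[1.0, 0.5], lam=0.0, mu=1.3, ell=0))   # placeholder coefficients
```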
The approximation is valid only asymptotically, for sufficiently large \(\mu\). Notice that expressions for \(c_{0}(\mathfrak{p})\) and \(\mathcal{I}(\mathfrak{p})\) given in eqs (30), and (31), are divergent at \(\mu=0\).
## IV Results
### General considerations
In the following we will consider only harmonium systems. Of course, this can be seen as futile, because solving the one-dimensional differential Schrodinger equation, eq (23), is trivial (to obtain accurate results we did it on a grid of the order of \(10^{5}\) points). The choice of this simple system is motivated by the desire to study only the effect of the approximations introduced, without adding other effects such as the dependence on the one-particle basis set, or the expansion in terms of Slater determinants. To avoid introducing new effects, the one-particle (external) potential is independent of \(\mu\). In view of future practical applications, we are interested in seeing how weak the model interaction can be made (how small \(\mu\) can be chosen) while still obtaining an estimate of the energy within "chemical accuracy".
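The following finite-difference sketch is one simple way to solve eq (23) on a grid (our own illustration, on a much coarser grid than the \(10^{5}\)-point grids mentioned above). For \(\omega=1/2\) it reproduces the non-interacting and Coulomb relative-motion energies of 0.75 and 1.25 hartree quoted below.

```python
# Minimal finite-difference solver for the radial equation (23).
import numpy as np
from scipy.special import erf
from scipy.linalg import eigh_tridiagonal

def relative_energy(omega, lam, mu, ell=0, rmax=25.0, npts=4000):
    """Lowest eigenvalue of eq (23) for u(r) = r*psi_ell(r), with u(0) = u(rmax) = 0."""
    r = np.linspace(0.0, rmax, npts + 2)[1:-1]          # interior grid points
    h = r[1] - r[0]
    v_int = (1 - lam) * erf(mu * r) / r + lam / r       # interaction of eq (7)
    v = omega**2 * r**2 / 4 + v_int + ell * (ell + 1) / r**2
    diag = 2.0 / h**2 + v                                # -u'' by central differences
    offdiag = -np.ones(npts - 1) / h**2
    return eigh_tridiagonal(diag, offdiag, eigvals_only=True,
                            select='i', select_range=(0, 0))[0]

# omega = 1/2: E(mu=0) = 0.75 (non-interacting), E ~ 1.25 hartree (Coulomb).
print(relative_energy(0.5, lam=0.0, mu=0.0))   # ~ 0.75
print(relative_energy(0.5, lam=1.0, mu=1.0))   # ~ 1.25
```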
We now define the "smallest acceptable \(\mu\)", SA\(\mu\): for \(\mu\geq\) SA\(\mu\) the absolute errors are smaller than the chemical accuracy (1 kcal/mol). The range of errors between \(\pm 1\) kcal/mol is indicated in the plots by horizontal dashed lines. For the system considered in Figure 2, \(E(\mu)\) has a SA\(\mu\) slightly above 3 bohr\({}^{-1}\). So, it is not interesting to discuss corrections if the interaction already reaches this strength. Weaker interactions can be considered if we simply correct the energy to first order - with \(\langle\psi(\mu)|H|\psi(\mu)\rangle\), SA\(\mu\approx 1.5\).
In the following, the presented plots show the errors of the approximations of \(E\) as functions of \(\mu\) using the same scale as the inset in Figure 2.
### How weak can the interaction be?
Let us consider the ground state of harmonium with \(\omega=1/2\). Now we use the GCC to correct the model energy, \(E(\mu)\), as in eqs (34) and (38), and cut off the expansion of the wave function \(\psi(r,\mathfrak{p})\), limiting the expansion in eq (26) to a maximal power of \(r\), \(K=1,2,3,4\) - see Figure 3a. When \(K=1\) we satisfy only Kato's cusp condition. This improves over \(E(\mu)\), and also over first-order perturbation theory, bringing the SA\(\mu\) to 1.3 bohr\({}^{-1}\). Increasing \(K\) further reduces the SA\(\mu\), until a "wall" at around 0.5 bohr\({}^{-1}\) is reached. Notice that increasing \(K\) not only reduces the error, but also increases the stability of the results with respect to changes of \(\mu\) when going beyond satisfying only the Kato cusp
Figure 3: Errors of the different approximations for the energy of harmonium systems. The errors of the model energy, \(\Delta_{0}E(\mu)\), eq (3), are represented as gray curves, that of the expectation value of the physical Hamiltonian with the model wave function, \(\Delta_{\rm H}E(\mu)\), eq (37), by black curves. Panels a – g show the errors for different orders \(K\) in the GCC expansion, eq (26), the value of \(K\) being indicated next to the curve. Panel h shows the error of the approximation of order \(\mu^{-4}\), as given in eq (42). Chemical accuracy (\(\pm 1\) kcal/mol) is indicated by horizontal dashed lines. a: \(\omega=1/2,n=1,\ell=0\); b: \(\omega=1/2,n=1,\ell=1\); c: \(\omega=1/2,n=1,\ell=2\); d: \(\omega=1/2,n=2,\ell=0\); e: \(\omega=1/2,n=3,\ell=0\); f: \(\omega=1,n=1,\ell=0\); g: \(\omega=1/4,n=1,\ell=0\); h: \(\omega=1/4,1/2\), or \(1\), and \(n=1,\ell=0\).
condition. This is important because (i) in practice, there is an arbitrariness in the choice of \(\mu\), and (ii) the SA\(\mu\) is system-dependent, as will be illustrated further down. The latter is of importance for size-consistency.
The derivations presented above never supposed that the state considered corresponds to the ground state. So, let us now consider some excited states. We consider first the lowest energy states with \(\ell=1\) and \(\ell=2\). The first corresponds to a triplet state, (Figure 3b), the second to a non-natural singlet state [12] (Figure 3c); for \(\ell=2\) the singlet cusp condition does not give a prefactor \((1+r/2)\) but \((1+r/6)\). King [13] remarks that, in an orbital picture, the \(\ell=2\) state corresponds to a strong mixture of sd and p\({}^{2}\) configurations. Notice that both the values provided by the model, and the expectation value of the physical Hamiltonian, are now in much better agreement with the exact value than for \(\ell=0\): the prefactor \(r^{\ell}\) appearing in eq (26) already keeps the electrons apart. Even for \(K=4\), for this system and these states, there is no significant gain over using just \(\langle\psi(\mu)|H|\psi(\mu)\rangle\). For \(\ell=2\), the SA\(\mu\)
Figure 4: Errors in the ground state energy estimate for harmonium (\(\omega=1/2\)) using density functional approximations (\(\mu\)-LDA and \(\mu\)-PBE). Also shown are the errors of the model energy, \(\Delta_{0}E(\mu)\), gray curve, and of the expectation value of the physical Hamiltonian, \(\Delta_{\rm H}E(\mu)\), black curve.
obtained for \(K\leq 4\) is not improved over that of the expectation value of the Hamiltonian.
We also present the first two excited states with \(\ell=0\), Figure 3d,e. The model system has much larger errors, which fall outside the domain of the plots, and we notice an overall worsening of the quality of the approximations. The SA\(\mu\) for the expectation value of the Hamiltonian is around 2 bohr\({}^{-1}\). Using the Kato cusp condition reduces it to \(\approx 1.5\) bohr\({}^{-1}\). Increasing the order of the expansion, \(K\), moves the SA\(\mu\) down to somewhere between 1 and 0.5 bohr\({}^{-1}\).
We have seen above that the SA\(\mu\)s can be quite sensitive to the state described. They can also be sensitive to the system. Modifying \(\omega\) affects the SA\(\mu\)s. Figure 3f,g show the effect of making the system more compact (\(\omega=1\) au), or diffuse (\(\omega=1/4\) au), for the respective ground states.
### Comparisons
Using the GCC is not the only way to describe the short-range behavior. One can replace the expansion in powers of \(r\), eq (26) by a simple form that does not diverge as \(r\to\infty\). For example, one can ignore all terms arising from the external potential and the energy in the Schrodinger equation, in the limit \(\mu\to\infty\), and we get, to order \(\mu^{-4}\) (see ref [11], and eq (21) in ref [3]),
\[\frac{\int_{0}^{1}d\lambda\,\mathcal{I}(\lambda,\mu)}{\mathcal{I}(\lambda=0, \mu)}=\frac{\mu^{2}+1.06385\mu+0.31982}{\mu^{2}+1.37544\mu+0.487806} \tag{42}\]
A comparison of Figures 3a, 3f, or 3g to 3h shows that this approximation performs very well compared to those obtained from the GCC: the "walls" are at comparable values of \(\mu\). This is encouraging because the GCC require, in general, taking into account the external potential and the energy.
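For completeness, evaluating the rational function of eq (42) is immediate; the short snippet below (our own illustration) shows the prefactor approaching 1, i.e. the plain expectation-value correction, as \(\mu\) grows.

```python
# Evaluation of the closed-form prefactor of eq (42).
def prefactor_mu4(mu):
    return (mu**2 + 1.06385 * mu + 0.31982) / (mu**2 + 1.37544 * mu + 0.487806)

for mu in (0.5, 1.0, 2.0, 4.0):
    print(mu, prefactor_mu4(mu))   # tends to 1 as mu increases
```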
Density functional approximations have been used for many years to correct models for the missing short-range interaction [14; 15]. Figure 4 shows the corrections provided by two approximations, the local density approximation, LDA, and that of Perdew, Burke, and Ernzerhof, PBE, modified to depend on \(\mu\) [15; 16; 17]. The SA\(\mu\)s are around 1 bohr\({}^{-1}\), thus improving over using only Kato's cusp condition (cf. Figure 3a). It may be at first surprising that in Figure 4 LDA is slightly better than PBE. However, PBE becomes better than LDA for small \(\mu\). This cannot be seen in Figure 4 because it shows only the region of "chemical accuracy"; the errors for small \(\mu\) are much larger.
## V Conclusions
### Summary
We have considered model systems, where electrons interact only via a long-range potential, eq (2). In order to obtain the physical energy we explored corrections based upon the short-range behavior of the wave function, both for the ground and excited states. The numerical results were all obtained for harmonium, eqs (16) - (22) where accurate results are easy to reach due to separability.
We are interested in having corrections to models where the interaction is very weak. However, the approximations proposed can be systematically improved in an asymptotic sense: as the order of the approximation increases, there is a domain of models close enough to the exact solution that gets systematically better. Unfortunately, at present, the approximations fail when the range of the interaction between particles becomes too large (our parameter \(\mu\) becomes too small). We attribute this to imposing only the short-range behavior of the wave function. For short-range models, the results turned out to be more reliable than those obtained by correcting the models with density functional approximations. Unfortunately, the range of validity is system- and state-dependent.
### Perspectives
In quantum chemistry there are different approaches to tackle the problem raised by the singularity of the Coulomb potential. A "brute force" pathway, is to use a large expansion in Slater determinants. The burden can be reduced significantly by using "selected configuration interaction" techniques (see, e.g., ref [20] and references therein). Another way is to improve the description of the wave function by the use of correlation factors, like Jastrow factors in Quantum Monte Carlo (see, e.g., refs [21, 22, 23, 24]), or F12 methods (see, e.g., refs [25, 26]). Still another way to approach this problem is to use density functionals that transfer the short-range behavior from other systems (as a rule, from the uniform electron gas).
In this paper, we combine the spirit of the last two approaches. As in density functional calculations, we compute a model system for which the interaction has no singularity (and thus is expected to converge faster, in general). However, in contrast to density functional calculations, no density functional approximations are present, and the Hohenberg-Kohn
theorem (formulated for ground states) is not used. The approach is applicable to ground and excited states.
As in methods using correlation factors, we use the exact short-range behavior of the wave function. The trick that allows using it comes from having a correction that depends only on the missing, short-ranged part of the interaction.
The present paper only showed exploratory calculations. However, it is possible to extend the considerations presented in this paper to other systems. Already Kurokawa _et al._[8] have shown that the GCC can be applied when the Schrodinger equation for the relative coordinate cannot be separated from that of the center of mass (the He atom).
Before extending our approach there are several issues to be explored. Probably the first one is its formulation in terms of reduced density matrices. In this paper we use GCC for the wave function. To generalize our approach, it might be useful to express the GCC in terms of the 2RDM [18; 19].
One may ask whether it is not more convenient to re-express our formulation in terms of the 1RDM. In 1975 Kimball found a relationship between the 2RDM at coalescence and the distribution of momentum, in fact, the behavior of the 1RDM [27]. At the same time Yasuhara demonstrated that the energy of the uniform electron gas can be expressed in terms of the kinetic energy (one-particle operators) instead of the two-body interactions [28]. This gave rise to studies on using the adiabatic connection on the kinetic energy (i.e. the 1RDM) rather than on two-body interactions [29; 30; 31]. Recently, the effects of the electron-electron coalescence on the properties of the natural orbitals and structure of the 1RDM have been revealed [32; 33].
Another issue is the dependence of the GCC on the coalescence point [6; 7; 8; 9]. For this, we can get inspiration from density functional approximations: at each point of space one has a different approximation. Even Kato's cusp condition (which seemingly is universal) contains a state dependence (through \(\ell\)). In density functional calculations it is treated by using the spin polarization [34], but it can also be treated in the context of a pair density (see, e.g., ref [35]).
We illustrate the problem with two two-electron harmonium systems (A and B) that, instead of being treated separately, are treated together (for example, two quantum dots). The system may be described by a double-well potential with two wells sufficiently separated and deep, so that each of them can be approximated by a harmonic potential. We assume that this separation is large enough to neglect the exchange effects between electrons
in A and B. Consequently, the spin part may be separated and, in effect, we can consider the total wave function, which depends on the orbital variables only. It can be written as a product of two harmonium wave functions:
\[\Psi_{\text{total}}(\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3},\mathbf{r}_{4}) =\Psi_{\text{A}}(\mathbf{r}_{1},\mathbf{r}_{2})\Psi_{\text{B}}(\mathbf{r}_{3},\mathbf{r}_{4}) \tag{43}\]
Let us first consider that subsystem A is in a state with \(\ell=0\), while subsystem B is in a state with \(\ell=1\). In region A the coefficient \(\tilde{c}_{1}=1/2\), while in region B, \(\tilde{c}_{1}=1/4\). Let us now consider another example. In both subsystems we have \(\ell=0\). However, the values of \(\omega\) differ: \(\omega_{\text{A}}\neq\omega_{\text{B}}\). We can see from Figures 3a, 3f-h that the range where the results are reliable depends on the model chosen. For example, we see in Figure 3h that if we choose \(\mu=0.5\) we will have chemical accuracy for a subsystem with \(\omega=1/4\), but that the error is significantly larger for a subsystem with \(\omega=1\), where chemical accuracy is reached only for \(\mu=1\). Do we choose to make the more expensive global calculation, say with \(\mu=1\), or do we decide to lose quality and make the cheaper calculation at \(\mu=1/2\)? Alternatively, we might consider making calculations with models that change from one region of space to another.
To what extent do we need to go beyond the Kato cusp condition? Using eq (42), Figure 3h shows that this might not be needed. Furthermore, eq (50) of the Appendix (see also the discussion in ref [11]) shows that there are terms that are important when \(r>1/\mu\) even when \(r\) is small, and these may need a more careful treatment.
Another question is raised by analyzing the dependence on the external potential. It was shown by Kurokawa _et al._[8] that for small interelectronic distance \(r\), the leading term in the expansion of the Coulomb potential of the nuclei is quadratic. So, some of the conclusions drawn from the present treatment of harmonium might be applied in systems where the external potential is Coulombic.
## VI Appendix: A derivation of Eq (30)
We repeat here the argumentation given in ref [11], section III, where the result was obtained for \(\lambda=0\).
We consider the behavior of the Schrodinger equation
\[\left(-\frac{d^{2}}{dr^{2}}-\frac{2}{r}\,\frac{d}{dr}+\frac{(1-\lambda)\, \text{erf}(\mu\,r)+\lambda}{r}+\frac{\omega^{2}r^{2}}{2}-E\right)\psi(r)=0 \tag{44}\]
in the limit of large \(\mu\). After a change of the variable \(r\) to \(x=\mu r\), and defining \(u(x)=\psi(x/\mu)\), one obtains
\[\left(-\frac{d^{2}}{dx^{2}}-\frac{2}{x}\,\frac{d}{dx}+\frac{(1-\lambda)\,{\rm erf }(x)+\lambda}{\mu\,x}+\frac{\omega^{2}x^{2}}{2\,\mu^{4}}-\frac{E}{\mu^{2}} \right)u(x)=0. \tag{45}\]
For large \(\mu\), we neglect terms proportional to \((1/\mu)^{n}\) with \(n>1\) and apply perturbation theory to the resulting equation up to first order in \(1/\mu\). We set:
\[u(x)=u^{(0)}(x)+\frac{1}{\mu}u^{(1)}(x). \tag{46}\]
The \(0\)th order in \(1/\mu\) gives
\[\left(-\frac{d^{2}}{dx^{2}}-\frac{2}{x}\,\frac{d}{dx}\right)u^{(0)}(x)=0. \tag{47}\]
From here we get
\[u^{(0)}(x)={\cal A}_{0}+\frac{{\cal A}_{1}}{x} \tag{48}\]
where \({\cal A}_{0}\) and \({\cal A}_{1}\) are integration constants. The wave function has to be finite at \(x=0\). Therefore, \({\cal A}_{1}=0\). Furthermore, as \(u^{(0)}\) corresponds to the solution at \(\mu=\infty\), namely to the exact solution independent of either \(\lambda\) or \(\mu\), we set \({\cal A}_{0}={\cal N}\). Consequently, the equation for \(u^{(1)}(x)\) is
\[\left(-\frac{d^{2}}{dx^{2}}-\frac{2}{x}\,\frac{d}{dx}\right)u^{(1)}(x)+\frac{( 1-\lambda)\,{\rm erf}(x)+\lambda}{x}\,{\cal N}=0. \tag{49}\]
After solving this equation and changing variable \(x\) to \(r\), we get
\[\psi(r)={\cal N}\left(1+\frac{\lambda}{2}r+\frac{1-\lambda}{2\sqrt{\pi}\mu}e^ {-\mu^{2}r^{2}}+(1-\lambda)\left(\frac{r}{2}+\frac{1}{4\,\mu^{2}\,r}\right){ \rm erf}(\mu r)\right)+{\cal B}_{0}+\frac{{\cal B}_{1}}{\mu r}, \tag{50}\]
where \({\cal B}_{0}\) and \({\cal B}_{1}\) are integration constants. To avoid a singularity of the wave function at \(r=0\), we set \({\cal B}_{1}=0\). In order to recover the results for \(\lambda=0\)[11] we have to set \({\cal B}_{0}=0\). Finally, in the limit \(r\,\rightarrow\,0\), up to first order in \(1/\mu\), we have
\[\left.\psi(r)\right|_{r=0}={\cal N}\left(1+\frac{1-\lambda}{\sqrt{\pi}\mu} \right), \tag{51}\]
where \({\cal N}\) is a normalization constant independent of either \(\lambda\) or \(\mu\).
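The claimed solution can also be checked symbolically. The sketch below (our own, using sympy) verifies that the first-order term read off from eq (50), written in the scaled variable \(x=\mu r\) and with \(\mathcal{N}=1\), satisfies eq (49).

```python
# Symbolic check that the first-order correction implied by eq (50) solves eq (49).
import sympy as sp

x, lam = sp.symbols('x lam', positive=True)
u1 = lam * x / 2 + (1 - lam) * (sp.exp(-x**2) / (2 * sp.sqrt(sp.pi))
                                + (x / 2 + 1 / (4 * x)) * sp.erf(x))

residual = (-sp.diff(u1, x, 2) - (2 / x) * sp.diff(u1, x)
            + ((1 - lam) * sp.erf(x) + lam) / x)
print(sp.simplify(residual))                       # simplifies to 0
print(sp.N(residual.subs({x: 0.7, lam: 0.3})))     # numerically ~ 0
```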
## VII Acknowledgement
This work was done without specific financial support. It was presented in part at the MQM 2022 honoring Profs. Gustavo E. Scuseria (Rice University, Houston, Texas, U.S.A.) and Martin Head-Gordon (University of California, Berkeley, California, U.S.A.), held at Virginia Tech (Blacksburg, Virginia, U.S.A.). |
2310.13757 | Nearly-optimal state preparation for quantum simulations of lattice
gauge theories | We present several improvements to the recently developed ground state
preparation algorithm based on the Quantum Eigenvalue Transformation for
Unitary Matrices (QETU), apply this algorithm to a lattice formulation of U(1)
gauge theory in 2+1D, as well as propose a novel application of QETU, a highly
efficient preparation of Gaussian distributions.
The QETU technique has been originally proposed as an algorithm for
nearly-optimal ground state preparation and ground state energy estimation on
early fault-tolerant devices. It uses the time-evolution input model, which can
potentially overcome the large overall prefactor in the asymptotic gate cost
arising in similar algorithms based on the Hamiltonian input model. We present
modifications to the original QETU algorithm that significantly reduce the cost
for the cases of both exact and Trotterized implementation of the time
evolution circuit. We use QETU to prepare the ground state of a U(1) lattice
gauge theory in 2 spatial dimensions, explore the dependence of computational
resources on the desired precision and system parameters, and discuss the
applicability of our results to general lattice gauge theories. We also
demonstrate how the QETU technique can be utilized for preparing Gaussian
distributions and wave packets in a way which outperforms existing algorithms
for as little as $n_q \gtrsim 2-5$ qubits. | Christopher F. Kane, Niladri Gomes, Michael Kreshchuk | 2023-10-20T18:35:06Z | http://arxiv.org/abs/2310.13757v2 | # Nearly-optimal state preparation for quantum simulations of lattice gauge theories
###### Abstract
We present several improvements to the recently developed ground state preparation algorithm based on the Quantum Eigenvalue Transformation for Unitary Matrices (QETU), apply this algorithm to a lattice formulation of U(1) gauge theory in 2+1D, as well as propose a novel application of QETU, a highly efficient preparation of Gaussian distributions.
The QETU technique has been originally proposed as an algorithm for nearly-optimal ground state preparation and ground state energy estimation on early fault-tolerant devices. It uses the time-evolution input model, which can potentially overcome the large overall prefactor in the asymptotic gate cost arising in similar algorithms based on the Hamiltonian input model. We present modifications to the original QETU algorithm that significantly reduce the cost for the cases of both exact and Trotterized implementation of the time evolution circuit. We use QETU to prepare the ground state of a U(1) lattice gauge theory in 2 spatial dimensions, explore the dependence of computational resources on the desired precision and system parameters, and discuss the applicability of our results to general lattice gauge theories. We also demonstrate how the QETU technique can be utilized for preparing Gaussian distributions and wave packets in a way which outperforms existing algorithms for as little as \(n_{q}>2-3\) qubits.
###### Contents
* I Introduction
* II QETU
* II.1 Algorithm review
* II.2 Ground state preparation via QETU
* II.2.1 Original algorithm
* II.2.2 Modified ground state preparation
* II.3 Wavepacket construction via QETU
* III Lattice formulation of U(1) gauge theory
* IV Numerical Results
* IV.1 U(1) gauge theory
* IV.2 Wavepacket construction
* V Ground state preparation cost for a general gauge theory
* VI Discussion and Conclusion
## I Introduction
Simulating many-particle quantum systems has always been seen as one of the most exciting potential applications of quantum computers [1]. While simulation of non-relativistic many-body physics is already by itself complex enough to be considered as a candidate for quantum advantage [2; 3; 4; 5; 6], quantum field theory (QFT) adds to the picture such ingredients as particle number non-conservation, a multitude of particle types and their interactions, gauge symmetry, etc. Quantum simulation of realistic QFTs such as Quantum Chromodynamics (QCD) will likely require access to fault-tolerant quantum computers. In order to extract as much physics as possible from such devices, it is imperative to investigate all theoretical aspects of the quantum simulation. The preliminary steps include constructing a model suitable for quantum computation by discretizing the original continuous theory [7; 8], mapping the degrees of freedom in the discretized model onto logical qubits [9; 10; 11; ...], as well as developing quantum algorithms suited to
the capabilities of either near-term or fault-tolerant devices. The former direction has been revolutionized by the invention of variational [71; 72; 73; 74] and subspace [75; 76; 77; 78; 79] methods, while the exploration of the latter one has led to the development of quantum simulation algorithms optimal and nearly-optimal in problem parameters [80; 81; 82; 83]. The asymptotic cost of these nearly-optimal algorithms is typically given in terms of the number of calls to subroutines that provide information about the physical Hamiltonian. Most such algorithms rely on the usage of _block encoding_ subroutines whose construction comes at a great expense and leads to large prefactors in the final gate cost. A number of algorithms, however, use instead the Hamiltonian time evolution input model [84; 85; 86; 87], in which case it is an approximate time evolution circuit that serves as an elementary block for the circuit construction. The development of this idea has ultimately led to the QETU algorithm [88], which enables one to assemble circuits applying a large class of polynomial transformations of a unitary operator to a given state.
In this work, we present the first study using QETU for preparing the ground state of a lattice gauge theory. The test theory we consider is a particular formulation of a U(1) lattice gauge theory in two spatial dimensions [89; 42; 90]. Our study includes an analysis of how the parameters of the QETU algorithm, as well as the Trotter error from approximating the time evolution circuit, scale with the system size, number of qubits per site, and gauge coupling. Armed with the intuition from this numerical study, we provide a general discussion about how the cost of QETU for ground state preparation of a QCD-like lattice gauge theory will scale with the system parameters.
We also present a novel application of QETU for preparing Gaussian states in quantum mechanical systems. We show that, while a naive application of QETU to this problem results in the error of the approximation decreasing polynomially with the number of calls to the time-evolution circuit, this scaling can be made exponential with simple modifications to the QETU procedure. Using our improved methods, we show that the cost of preparing Gaussian states with QETU outperforms existing state preparation methods for states represented using \(n_{q}>2-3\) qubits.
In addition to these more specialized studies, we developed modifications to the original QETU algorithm that can significantly reduce the cost for arbitrary Hamiltonians. These modifications can be applied to both the scenario when the time evolution circuit is prepared exactly, and when prepared approximately using Trotter methods.
The rest of this work is organized as follows. In Sec. II.1 we review the general QETU algorithm. From there, in Sec. II.2 we review how QETU can be used for ground state preparation. Next, in Sec. II.2.2 we discuss our modifications to the original QETU algorithm, and provide numerical demonstrations of the achievable cost reduction. In Sec. II.3 we discuss how to use QETU to prepare Gaussian states. Section III reviews the details of the test theory we consider, which is a particular formulation of a U(1) lattice gauge theory in two spatial dimensions. Numerical results for ground state preparation of this U(1) lattice gauge theory using QETU are presented in Sec. IV.1. From there, we present the results of our study using QETU to prepare Gaussian states in Sec. IV.2. In Sec. V, we provide a general discussion of the scaling expected when extending QETU for ground state preparation of a general lattice gauge theory with the same qualitative properties as QCD. Our conclusions, as well as a discussion of future applications, are presented in Sec. VI. The main notations used throughout the paper are listed in Tab. 1.
## II QETU
We begin this section by reviewing the QETU algorithm in its most general form, as well as its application to ground state preparation. Next, we present modifications to the original algorithm that reduce the cost of ground state preparation. Lastly, we consider a novel application of QETU to the preparation of Gaussian distributions and wave packets.
### Algorithm review
Similarly to the Quantum Eigenvalue Transformation [91; 92] algorithm, the QETU circuit realizes a polynomial transformation of \(\mathrm{e}^{-i\tau H}\), which, in turn, can be used for implementing a wide class of functions of \(H\). While for a given Hermitian operator \(H\) constructing the exact circuit for \(U=\mathrm{e}^{-i\tau H}\) is, generally, a non-trivial problem -- even for short times \(\tau\) -- QETU can potentially render useful results for approximate implementations of \(U\). The impact of such approximations on the method's performance will be explored in future sections.
Preparing the ground state of a Hamiltonian with the aid of the QETU algorithm is based on the concept of _filtering_[87; 88; 93; 94; 95], which implies constructing a circuit approximately implementing the action of a projector \(P_{<\mu}=|\psi_{0}\rangle\langle\psi_{0}|\) onto the ground state of \(H=\sum_{n}E_{n}|\psi_{n}\rangle\langle\psi_{n}|\) and applying this circuit to a state \(|\psi_{\mathrm{init}}\rangle\) having significant overlap \(|\langle\psi_{0}|\psi_{\mathrm{init}}\rangle|=\gamma\geq 0\) with the ground state:
\[P_{<\mu}|\psi_{\mathrm{init}}\rangle\propto|\psi_{0}\rangle\,. \tag{1}\]
In Ref. [94] it was observed that \(\cos^{M}H\) for large values of \(M\) is approximately proportional to the projector onto the ground state of \(H\) for a properly normalized \(H\). The circuit for \(\cos^{M}H\) in this approach is expressed as \(\cos^{M}\!\!H=(\mathrm{e}^{iH}+\mathrm{e}^{-iH})^{M}/2^{M}\), where the exponents are implemented using any available time evolution algorithm, while the linear combination of terms is obtained
\begin{table}
\begin{tabular}{l c} \hline \hline & QETU algorithm \\ \hline \hline \(f(x)\) & Function to be approximated by QETU \\ \hline \(\mathsf{F}(x)\) & \(\mathsf{f}(2\arccos(x))\) \\ \hline \(F(x)\) & Polynomial approximation to \(\mathsf{F}(x)\) \\ \hline \(f(x)\) & \(F(\cos(x/2))\) \\ \hline \(U\) & QETU input operator \\ \hline \(H^{\text{phys}}=\sum_{n}E^{\text{phys}}_{n}|\psi_{n}\rangle\langle\psi_{n}|\) & Physical Hamiltonian \\ \hline \(H=\sum_{n}E_{n}|\psi_{n}\rangle\langle\psi_{n}|\) & Rescaled Hamiltonian \\ \hline \(c_{1,2}\) & Constants in operator rescaling, Eqs. (3) and (27) \\ \hline \(P_{<\mu}=|\psi_{0}\rangle\langle\psi_{0}|\) & Projector onto ground state of \(H\) \\ \hline \(|\psi_{\text{init}}\rangle\) & Initial state \\ \hline \(\sigma_{\pm}\), \(\sigma_{\text{min}}\), \(\sigma_{\text{max}}\) & Parameters of shifted sign function, Eq. (7) \\ \hline \(\Delta^{\text{phys}}\) & Initial (approximate) knowledge of \((E_{1}^{\text{phys}}-E_{0}^{\text{phys}})/2\) \\ \hline \(\Delta\) & Initial (approximate) knowledge of \((E_{1}-E_{0})/2\) \\ \hline \(\mu\) & Initial (approximate) knowledge of \((E_{1}+E_{0})/2\) \\ \hline \(\tau\) & Evolution parameter of \\ \hline \(N_{\text{steps}}\) & Number of steps in Trotter implementation of \(U\) \\ \hline \(\delta\tau\) & \(\tau/N_{\text{steps}}\), Trotter step size \\ \hline \(\tau_{\text{max}}\) & Largest value of \(\tau\) guaranteeing isolation of the ground state \\ \hline \(\eta\) & Parameter defining \(H\) spectrum bounds, Eq. (4) \\ \hline \(\eta_{P_{<\mu}}\) & Parameter of the shifted sign function, Eqs. (14) and (15) \\ \hline \(\gamma=|\langle\psi_{\text{init}}|\psi_{0}\rangle|\geq 0\) & Overlap between initial guess and true ground state \\ \hline \(d\) & Degree of (even) \(F(x)\) polynomial \\ \hline \(n_{\text{Ch}}\) & Number of Chebyshev polynomials used to represent (even) \(F(x)\) \\ \hline \(N_{\text{steps}}\) & Number of Trotter steps in single call to \(U\) \\ \hline \(N_{\text{tot}}\) & \(d\times N_{\text{steps}}\), total number of Trotter steps in QETU circuit \\ \hline FT & Discrete Fourier transformation matrix \\ \hline \hline \multicolumn{3}{c}{U(1) gauge theory} \\ \hline \(g\) & Coupling constant \\ \hline \(a\) & Lattice spacing \\ \hline \(V\) & Physical volume \\ \hline \(N_{s}\) & Number of sites in each dimension of a lattice \\ \hline \(N_{p}\) & Number of independent plaquettes \\ \hline \(b_{\text{max},p}\), \(\delta b_{p}\) & Maximum value of magnetic field operator and discretization step for \(p^{\text{th}}\) plaquette, Eq. (37) \\ \hline \(r_{\text{max},p}\), \(\delta r_{p}\) & Maximum value of rotor operator and discretization step for \(p^{\text{th}}\) plaquette, Eq. (38) \\ \hline \(n_{q}\) & Number of qubits per lattice site \\ \hline \hline \multicolumn{3}{c}{Wavepacket preparation} \\ \hline \(x_{0}\), \(p_{0}\) & Expectation values of position and momentum operators \\ \hline \(\sigma_{x}\) & Wavepacket width \\ \hline \(\widehat{x}_{\text{sh}}\) & Shifted position operator \\ \hline \(x_{0}^{\text{QETU}}\) & \(c_{1}x_{0}+c_{2}\) \\ \hline \(\sigma_{\text{QETU}}\) & \(c_{1}\sigma_{x}\) \\ \hline \(\mathsf{F}_{+}(x),\mathsf{F}_{-}(x)\) & Even and odd parts of \(\mathsf{F}(x)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Main notations used in the paper.
via the Linear Combination of Unitaries (LCU) algorithm [92; 96; 97; 98; 99; 100; 101; 102]. While Ref. [94] combined the ideas of the Hamiltonian time evolution input model [84; 85; 86; 87] (i.e., utilizing \(\mathrm{e}^{-iH}\) as a building block for the algorithm), this approach did not have optimal performance. In Ref. [95] algorithms based on the filtering concept have been developed based on the Hamiltonian oracle access model and QET technique [81]. Despite the asymptotically nearly-optimal performance of algorithms in Ref. [95], they suffer from a high cost of the oracle subroutines.
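As a purely classical illustration of this filtering idea (our own toy example, not taken from the references), the snippet below builds \(\cos^{M}(H)\) from the forward and backward time-evolution operators of a small Hermitian matrix whose spectrum lies in \((0,\pi/2)\), and shows that it isolates the lowest eigenvector.

```python
# Toy demonstration that powers of cos(H), built from e^{+iH} and e^{-iH},
# act approximately as a projector onto the ground state.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
energies = np.linspace(0.2, 1.4, 8)                      # spectrum kept inside (0, pi/2)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8)))
H = (Q * energies) @ Q.conj().T                          # Hermitian test "Hamiltonian"

U = expm(-1j * H)
cosH = (U + U.conj().T) / 2                              # (e^{iH} + e^{-iH})/2 = cos(H)
filt = np.linalg.matrix_power(cosH, 100)                 # cos^M(H) with M = 100

psi_init = rng.normal(size=8) + 1j * rng.normal(size=8)  # generic initial state
psi = filt @ psi_init
psi /= np.linalg.norm(psi)
ground = Q[:, 0]                                         # eigenvector of the lowest energy
print(abs(ground.conj() @ psi))                          # close to 1
```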
Finally, nearly-optimal algorithms for ground state energy estimation [87] and ground state preparation [88] based on the Hamiltonian time evolution input model have been proposed. While efficient ground state energy estimation (without ground state preparation) could be achieved with as little as a single implementation of \(\mathrm{e}^{-i\tau H}\) for several values of \(\tau\)[87], preparing the ground state in QETU amounts to using a circuit similar to the circuit used in Quantum Eigenvalue Transformation (QET) [92; 88; 91], see Fig 1.
The QETU Theorem [88] assumes access to a circuit implementing \(U=\mathrm{e}^{-iH}\) for an \(n\)-qubit Hermitian operator \(H\). It states that for any even real polynomial \(F(x)\) of degree \(d\) satisfying \(|F(x)|\leq 1,\,\forall x\in[-1,1]\), one can find a sequence of symmetric phase factors \(\Phi=(\phi_{0},\phi_{1},\ldots,\phi_{1},\phi_{0})\in\mathds{R}^{d+1}\), such that the circuit in Fig. 1 denoted by \(\mathcal{U}\) satisfies \((\langle 0|\otimes\mathds{1}_{n})\,\mathcal{U}\,(|0\rangle\otimes\mathds{1}_{n})=F(\cos(H/2))\)[88]. In practice, the QETU circuit is used for approximately implementing \(\mathsf{f}(H)\) by realizing a transformation \(F(\cos(H/2))\), where \(F(x)\) is a polynomial approximation to \(\mathsf{F}(x)\equiv\mathsf{f}(2\arccos(x))\).
Note that there also exists a control-free version of the QETU circuit that avoids having to implement controlled calls to the \(\mathrm{e}^{-iH}\) circuit [88]. The procedure for using the control-free implementation involves grouping the terms of \(H\) into groups \(H=\sum_{j}H^{(j)}\) such that each term in \(H^{(j)}\) anticommutes with the Pauli operator \(K_{j}\). It can be shown that, once this grouping is found, only the \(K_{j}\) operators must be controlled. If one finds \(l\) such groups, instead of having to control each term in \(\mathrm{e}^{-iH}\), the control-free version of QETU requires only an additional \(l\) controlled operations. Using this control-free circuit implements instead the mapping \(F(\cos(H))\). More details about the control-free implementation, including the quantum circuit, can be found in App. B.
The original algorithm in Ref. [88] proposed the use of \(\mathrm{e}^{-iH}\) in the QETU circuit. It is possible, however, to use \(\mathrm{e}^{-i\tau H}\) instead. Doing so leads to a modification of the theorem above, where the mapping is now \(F(\cos(\tau H/2))\). Because \(\cos(\tau x/2)\) is periodic, one must be careful when choosing \(\tau\) to ensure construction of the desired function \(\mathsf{f}(x)\). In the following section, we discuss how using \(\tau\neq 1\) can reduce the cost of state preparation. In Sec. IV.2 we discuss how \(\tau\neq 1\) can reduce the cost of using QETU for constructing Gaussian states.
### Ground state preparation via QETU
In this section, we review the algorithm proposed in Ref. [88] for ground state preparation using QETU. We first state the necessary assumptions and scaling of the algorithm. After reviewing the original algorithm in more detail, we discuss how using instead \(\mathrm{e}^{-i\tau H}\) as a building block in the QETU circuit can lead to significant cost reductions, both when implementing \(\mathrm{e}^{-i\tau H}\) exactly as well as using product formulas. We conclude by discussing how QETU can be used to prepare \(n_{q}\)-qubit Gaussian states with a cost that is linear in \(n_{q}\).
#### ii.2.1 Original algorithm
Suppose one has access to a Hamiltonian \(H\) via \(U=\mathrm{e}^{-iH}\), and that the spectrum of \(H\) is in the range \([\eta,\pi-\eta]\) for some \(\eta>0\). Furthermore, assume one has knowledge of parameters \(\mu\) and \(\Delta\) such that
\[E_{0}\leq\mu-\Delta/2\leq\mu+\Delta/2\leq E_{1}\,, \tag{2}\]
where \(E_{i}\) is the \(i^{\text{th}}\) lowest eigenvalue of \(H\), with \(E_{0}\) the ground state energy and \(E_{1}\) the first excited state energy. Here \(\mu\) encodes the (generally imprecise) knowledge of where these two energies lie, and \(\Delta\) is a lower bound on the energy gap \(E_{1}-E_{0}\). Figure 2 shows an example of the exact projector onto the ground state, given by the step function \(1-\theta(x-\mu)\), that isolates the ground state even with only partial knowledge of \(E_{0}\) and \(E_{1}\). Lastly, assume one has an initial guess \(|\psi_{\text{init}}\rangle\) for the ground state with overlap satisfying \(|\,\langle\psi_{0}|\psi_{\text{init}}\rangle|\geq\gamma\). Under these assumptions, one can prepare a state \(|\tilde{\psi}_{0}\rangle\) such that \(|\langle\tilde{\psi}_{0}|\psi_{0}\rangle|\geq 1-\epsilon\) using \(\tilde{\mathcal{O}}(\gamma^{-2}\Delta^{-1}\log(1/\epsilon))\) controlled calls to the time evolution circuit \(U\). We will spend the rest of this section describing the details of the algorithm, including how the scalings with \(\gamma,\Delta\) and \(\epsilon\) arise.
We begin by addressing the fact that the spectrum of our target Hamiltonian must be in the range \([\eta,\pi-\eta]\) for some \(\eta>0\). Because QETU returns a function with a periodic argument, i.e., \(F(\cos(H/2))\), any projector constructed with QETU will repeat itself for large enough energies, as illustrated in Fig. 3. From this we see that unless the spectrum of \(H\) is in a limited range, we cannot guarantee that higher excited states will be filtered out. To avoid this problem, in Ref. [88] the Hamiltonian was first scaled so that its spectrum was in the range \([\eta,\pi-\eta]\) for some \(\eta>0\). We find that this constraint can be somewhat relaxed, which is described in detail in Sec. II.2.2. If the physical, unshifted Hamiltonian is given by \(H^{\text{phys}}\) with energies \(E_{i}^{\text{phys}}\), the Hamiltonian with the shifted spectrum is given by
\[H=c_{1}H^{\text{phys}}+c_{2}\mathds{1}\,, \tag{3}\]
where
\[c_{1}=\frac{\pi-2\eta}{E_{\text{max}}^{\text{phys}}-E_{0}^{\text{phys}}}\,, \quad c_{2}=\eta-c_{1}E_{0}^{\text{phys}}. \tag{4}\]
Alternatively, one could replace \(E_{\rm max}^{\rm phys}\) with an upper bound on the maximum eigenvalue. One important consequence of shifting the spectrum is that the energy gap of the shifted Hamiltonian shrinks as well. If \(\Delta^{\rm phys}\) is the energy gap of the physical Hamiltonian \(H^{\rm phys}\), the shifted gap is given by \(\Delta=c_{1}\Delta^{\rm phys}\). This tells us that \(\Delta\approx\Delta^{\rm phys}/E_{\rm max}^{\rm phys}\). Because the maximum eigenvalue of a Hamiltonian generally grows with the number of terms, the gap \(\Delta\) used in the QETU algorithm will generally shrink with the number of lattice sites. Therefore, one generally expects \(\Delta\sim 1/N_{\rm sites}\), where \(N_{\rm sites}\) denotes the number of lattice sites used in a simulation of some lattice gauge theory. We will discuss this scaling in more detail in Secs. IV.1 and V.
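As a concrete illustration of this rescaling step, the following minimal Python sketch applies Eqs. (3) and (4) to a small random Hermitian matrix and reads off \(\mu\) and \(\Delta\); the test matrix and the choice \(\eta=0.05\) are arbitrary illustrative values rather than parameters taken from the references.

```python
import numpy as np

# Toy "physical" Hamiltonian: a small random Hermitian matrix (illustrative only).
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
H_phys = (A + A.T) / 2

E_phys = np.linalg.eigvalsh(H_phys)
E0_phys, E1_phys, Emax_phys = E_phys[0], E_phys[1], E_phys[-1]

# Rescale so that the spectrum lies in [eta, pi - eta], Eqs. (3)-(4).
eta = 0.05
c1 = (np.pi - 2 * eta) / (Emax_phys - E0_phys)
c2 = eta - c1 * E0_phys
H = c1 * H_phys + c2 * np.eye(8)

E = np.linalg.eigvalsh(H)
mu = (E[0] + E[1]) / 2       # midpoint between the two lowest energies
Delta = E[1] - E[0]          # rescaled gap, Delta = c1 * Delta_phys
print(E[0], E[-1])           # equal to eta and pi - eta by construction
print(mu, Delta)
```

In a realistic setting \(E_{0}^{\rm phys}\), \(E_{1}^{\rm phys}\) and \(E_{\rm max}^{\rm phys}\) are of course not known exactly, and the estimates or bounds discussed later in the text would be used instead.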
Given some Hamiltonian \(H\) and the associated parameters \(\mu\) and \(\Delta\), one can construct an approximate projector onto the ground state by constructing an approximation \(f(x)\) to \({\sf f}(x)=1-\theta(x-\mu)\) satisfying
\[\begin{split}|f(x)-c|&\leq\epsilon\,,\quad\forall x \in[\eta,\mu-\Delta/2]\,,\\ |f(x)|&\leq\epsilon\,,\quad\forall x\in[\mu+\Delta/2,\pi-\eta]\,.\end{split} \tag{5}\]
Furthermore, because a unitary matrix must have entries with magnitude less than or equal to one, we require \(|f(x)|\leq 1,\forall x\). The parameter \(c\) is chosen to be slightly smaller than \(1\) to avoid numerical overshooting when finding a Chebyshev polynomial satisfying the above constraints, and is discussed in more detail later in the section.
In order to approximately implement the action of \({\sf f}(H)\) with QETU, we need the polynomial \(F(x)\) to approximate the function \({\sf f}(2\arccos(x))\). The presence of the \(\arccos(x)\) implies that \(F(x)\) can only be defined for \(x\in[-1,1]\). Taking the cosine transform into account, we wish to construct \(F(x)\) such that
\[\begin{split}|F(x)-c|&\leq\epsilon\,,\quad\forall x \in[\sigma_{+},\sigma_{\rm max}]\,,\\ |F(x)|&\leq\epsilon\,,\quad\forall x\in[\sigma_{\rm min },\sigma_{-}]\,,\\ |F(x)|&\leq c\,,\quad\forall x\in[-1,1]\,.\end{split} \tag{6}\]
Figure 1: QETU circuit diagram. The top qubit is the control qubit, and the bottom register is the state that the matrix function is applied to. Here \(U={\rm e}^{-iH}\) is the time evolution circuit. Upon measuring the ancillary qubit to be in the zero state, one prepares the normalized quantum state \(F(\cos(H/2))\,|\psi\rangle\,/||F(\cos(H/2))\,|\psi\rangle\,||\) for some polynomial \(F(x)\). For symmetric phase factors \(\Phi=(\phi_{0},\phi_{1},\ldots,\phi_{1},\phi_{0})\in{\mathds{R}}^{d+1}\), \(F(\cos(H/2))\) is a real even polynomial of degree \(d\). The probability of measuring the control qubit in the zero state is \(p=||F(\cos(H/2))\,|\psi\rangle\,||^{2}\).
Figure 3: In order to construct the approximate projector \(P_{<\mu}\) onto the ground state, QETU implements an approximation of the function \({\sf F}(\cos(H/2))\), where \({\sf F}(x)={\sf f}(2\arccos(x))\) and \({\sf f}(x)\) is a unit step function \({\sf f}(x)=1-\theta(x-\mu)\). The x-axis variable \(x\) corresponds to the energy \(E\). Because \(\cos(x/2)\) is periodic, the step function repeats itself with period \(4\pi\). Unless the spectrum of the Hamiltonian is in a limited range, one cannot guarantee that all the excited states will be filtered out.
Figure 2: Plot of the step function \({\sf f}(x)=1-\theta(x-\mu)\) encoding partial information known about \(E_{0}\) and \(E_{1}\). The x-axis variable \(x\) corresponds to the energy \(E\). The ground state energy \(E_{0}\) is known to be in the shaded orange region between the orange vertical lines. The first excited state \(E_{1}\) is known to be larger than the value indicated by the green vertical line. The gap \(\Delta\) used is the difference between the lower bound of \(E_{1}\) and the upper bound of \(E_{0}\). The central value \(\mu\) is chosen to be halfway between the upper bound of \(E_{0}\) and the lower bound of \(E_{1}\). Isolating the ground state is possible without exact knowledge of \(E_{0}\) and \(E_{1}\).
where
\[\sigma_{\pm} =\cos\frac{\mu\mp\Delta/2}{2}\,,\] \[\sigma_{\rm min} =\cos\frac{\pi-\eta}{2}\,, \tag{7}\] \[\sigma_{\rm max} =\cos\frac{\eta}{2}\,.\]
For a non-zero gap \(\Delta\) and a non-zero error \(\epsilon\), one candidate function for \(F(x)\) is the shifted error function, which has the important property that Chebyshev approximations of it converge exponentially with the degree of the polynomial [88]. In principle, one could solve for \(F(x)\) by choosing a specific error function such that the Chebyshev approximation to it has a constant error for all \(x\). While it was shown in Ref. [103] that this procedure prepares the ground state of a system with a gap \(\Delta\) to a precision \(\epsilon\) using a polynomial with degree scaling as \(\mathcal{O}(\Delta^{-1}\log\epsilon^{-1})\), this method can lead to numerical instabilities if not performed carefully [88]. Instead, we follow a different procedure, also described in Ref. [88], that avoids the need to first choose a shifted error function while still producing a near-optimal approximation. This procedure is discussed in detail later in this section.
For the sake of argument, if one did choose \(\mathsf{F}(x)\) to be a shifted error function, there would exist a Chebyshev polynomial approximation to it with fidelity \(1-\epsilon\), where the degree of the polynomial is \(\mathcal{O}(\Delta^{-1}\log\bigl{(}\epsilon^{-1}\bigr{)})\) [88]. While we do not in practice choose \(\mathsf{F}(x)\) to be a shifted error function manually, the convex-optimization method we use to determine the Chebyshev approximation of \(\mathsf{F}(x)\) has this same scaling. The scaling with \(\Delta\) can be understood by first recognizing that for a smaller \(\Delta\), the shifted error function rises more quickly. A steeper rising error function requires a higher degree polynomial to approximate the function to the same precision as a more slowly rising one. By inverting \(\mathcal{O}(\Delta^{-1}\log\bigl{(}\epsilon^{-1}\bigr{)})\), the error \(\epsilon\) scales as \(\epsilon=\mathcal{O}(\mathrm{e}^{-b\Delta d})\), where \(b\) is a constant. Figure 4 shows an example \(F(x)\) for parameter values \(\eta=0.3\), \(c=0.999\), \(E_{0}=1\), \(E_{1}=1.6\), \(\mu=(E_{0}+E_{1})/2\), \(\Delta=(E_{1}-E_{0})\), corresponding to \(\sigma_{\rm min}=0.15\), \(\sigma_{-}=0.70\), \(\sigma_{+}=0.88\), and \(\sigma_{\rm max}=0.99\). The function \(F(x)\) is represented using a \(d=22\) degree polynomial, resulting in error \(\epsilon\approx 0.0096\).
The final piece of the scaling has to do with the overlap of the initial guess for the ground state. Because the probability of measuring zero in the ancillary register of the QETU circuit is given by \(||F(\cos(H/2))\,|\psi_{\rm init}\rangle\,||^{2}\approx\gamma^{2}\), the number of times the circuit must be prepared in order to measure zero increases for a poor initial guess. It can therefore be beneficial to dedicate substantial resources to prepare a high quality initial guess \(|\psi_{\rm init}\rangle\).
We now review the procedure in Ref. [88] for finding a degree \(d\) even Chebyshev polynomial for \(F(x)\). The goal is to solve for \(n_{\rm Ch}=d/2+1\) Chebyshev coefficients \(c_{k}\) such that
\[F(x)=\sum_{k=0}^{n_{\rm Ch}-1}c_{2k}T_{2k}(x)\,. \tag{8}\]
To do so, one first samples \(F(x)\) at a set of \(M\) discrete points. It is known that polynomial interpolation sampling at equidistant points is exponentially ill-conditioned [104]. To avoid this problem, \(F(x)\) is instead sampled using the roots of Chebyshev polynomials \(x_{j}=-\cos\Bigl{(}\frac{j\pi}{M-1}\Bigr{)}\) for \(j=0,\ldots,M-1\) for some large value of \(M\). Because the region where \(F(x)=c\) shrinks with \(\Delta\), one has to be careful to choose a value of \(M\) large enough to resolve this region. Once we choose the values of \(x_{j}\) to sample \(F(x)\), we define a coefficient matrix \(A_{jk}=T_{2k}(x_{j})\) such that \(F(x_{j})=\sum_{k}A_{jk}c_{2k}\). The coefficients are then determined by solving the following optimization problem:
\[\min_{\{c_{k}\}}\max\left\{\max_{x_{j}\in[\sigma_{+},\sigma_{\rm max}]}|F(x_{j })-c|,\max_{x_{j}\in[\sigma_{\rm min},\sigma_{-}]}|F(x_{j})|\right\} \tag{9}\]
This is a convex optimization problem, and can be solved using, e.g., Matlab code CVX [105] or Python code CVXPY [106]. One chooses \(c\) smaller than one to allow the Chebyshev approximation to overshoot the step function, which, by the equioscillation theorem, is necessary to achieve an optimal Chebyshev approximation [104]. In practice, one can choose \(c\) to be sufficiently close to one such that the effect is negligible. Our numerical investigations showed that the cost of solving for the coefficients generally scales linearly or better in the degree of the polynomial.
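For concreteness, a minimal sketch of this convex optimization using NumPy and CVXPY is shown below; it uses the parameter values of the example discussed above (\(\eta=0.3\), \(c=0.999\), \(\mu=1.3\), \(\Delta=0.6\), \(d=22\)) and should yield a maximal deviation comparable to the \(\epsilon\approx 0.01\) quoted there. The sampling density \(M\) is an arbitrary choice, and dedicated packages such as QSPPACK [108] implement this step more carefully.

```python
import numpy as np
import cvxpy as cp

eta, c = 0.3, 0.999
mu, Delta = 1.3, 0.6
sig_p, sig_m = np.cos((mu - Delta / 2) / 2), np.cos((mu + Delta / 2) / 2)
sig_min, sig_max = np.cos((np.pi - eta) / 2), np.cos(eta / 2)

d, M = 22, 501                                  # polynomial degree, number of sample points
n_ch = d // 2 + 1
xs = -np.cos(np.arange(M) * np.pi / (M - 1))    # Chebyshev sampling points on [-1, 1]
A = np.polynomial.chebyshev.chebvander(xs, d)[:, ::2]   # even polynomials T_0, T_2, ..., T_d

up = (xs >= sig_p) & (xs <= sig_max)            # region where F(x) should equal c
lo = (xs >= sig_min) & (xs <= sig_m)            # region where F(x) should vanish

coef = cp.Variable(n_ch)
objective = cp.Minimize(cp.maximum(cp.max(cp.abs(A[up] @ coef - c)),
                                   cp.max(cp.abs(A[lo] @ coef))))
prob = cp.Problem(objective, [cp.abs(A @ coef) <= c])   # keep |F(x)| <= c everywhere
prob.solve()
print("degree", d, "max deviation", prob.value)
```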
Once the Chebyshev expansion for \(F(x)\) is known, the final step is to calculate the phases \(\Phi\) used in the QETU circuit. Note that multiple conventions for defining the phases \(\Phi\) exist, and throughout this work we use the so-called _W-convention_. Here we mostly follow the pedagogical discussion given in Ref. [92]. To calculate the
Figure 4: The blue curve shows \(F(x)\), where \(F(x)\) is a Chebyshev approximation of the shifted error function. The orange vertical dashed lines indicate the values \(\sigma_{\rm min},\sigma_{\rm max},\sigma_{\pm}\). The green shaded regions indicate \(x\) values that are included when solving for the Chebyshev expansion of the cosine-transformed shifted error function. For values of \(x\) outside the shaded green regions, the function \(F(x)\) can take on any value as long as \(|F(x)|\leq 1\).
phases \(\Phi\), one needs to combine a number of techniques, including quantum signal processing (QSP) [81], qubitization [101], and QETU [88]. QSP is the theory of the unitary representation of scalar polynomials \(P(x)\). Put more concretely, QSP tells us that, for some set of symmetric phase factors \(\Phi=(\phi_{0},\phi_{1},\ldots,\phi_{1},\phi_{0})\in\mathbb{R}^{d+1}\), one can construct an even Chebyshev polynomial \(g(x,\Phi)\) of degree \(d\) in the following way,
\[\begin{split} g(x,\Phi)=\text{Re}[\langle 0|\mathrm{e}^{i\phi_{0}Z}\mathrm{e}^{i\arccos(x)X}\mathrm{e}^{i\phi_{1}Z}\mathrm{e}^{i\arccos(x)X}\\ \ldots\mathrm{e}^{i\phi_{1}Z}\mathrm{e}^{i\arccos(x)X}\mathrm{e}^{i\phi_{0}Z}\,|0\rangle],\end{split} \tag{10}\]
where \(X\) and \(Z\) are the usual Pauli matrices. Qubitization can then be used to lift the above SU(2) representation to matrices of arbitrary dimensions. Finally, the quantum eigenvalue transformation for unitary matrices (QETU) tells us that, if we choose \(\Phi=\Phi^{*}\) such that \(g(x,\Phi^{*})=F(x)\), the circuit in Fig. 1 implements a block encoding of \(F(\cos(H/2))\). Combining these techniques leads to a procedure for calculating the phases \(\Phi^{*}\) completely in terms of SU(2) matrices, which we describe now.
Because we have \(n_{\text{Ch}}\) independent phases, to produce exactly the Chebyshev polynomial \(F(x)\) we must sample at \(n_{\text{Ch}}\) values, which are taken to be the positive roots of the Chebyshev polynomial \(T_{2n_{\text{Ch}}}(x)\), given by \(x_{j}=\cos\!\left(\pi\frac{2j-1}{4n_{\text{Ch}}}\right)\) for \(j=1,\ldots,n_{\text{Ch}}\). If we define the functional \(\mathcal{F}(\Phi)\) as
\[\mathcal{F}(\Phi)=\frac{1}{n_{\text{Ch}}}\sum_{j=1}^{n_{\text{Ch}}}|g(x_{j}, \Phi)-F(x_{j})|^{2}\,, \tag{11}\]
then the phases \(\Phi^{*}\) that produce \(F(x)\) can be found by solving \(\mathcal{F}(\Phi^{*})=0\). It has been found in Ref. [107] that using a quasi-Newton method with a particular initial guess of
\[\Phi^{0}=\left(\frac{\pi}{4},0,0,\ldots,0,0,\frac{\pi}{4}\right) \tag{12}\]
one can robustly find the symmetric phase factors for values of \(d\sim 10000\). An example code to solve for the Chebyshev coefficients \(c_{k}\) and the associated phase factors has been implemented in QSPPACK [108]. Through numerical studies, we find that the cost of finding the phase factors scales roughly quadratically with the number of phases.
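To make the phase-finding step explicit, the following self-contained sketch evaluates \(g(x,\Phi)\) of Eq. (10) with \(2\times 2\) matrices and minimizes the functional of Eq. (11) with a generic quasi-Newton routine. The degree-2 target \(F(x)=T_{2}(x)\) is a toy choice for illustration; in practice the target values would be those of the Chebyshev approximation found above, and robust convergence at large \(d\) requires the dedicated treatment of Ref. [107] as implemented in QSPPACK [108].

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)

def expi(M, a):
    # exp(i*a*M) for M = X, Z (both square to the identity)
    return np.cos(a) * np.eye(2) + 1j * np.sin(a) * M

def g(x, half_phases):
    # Eq. (10), with the symmetric phases (phi_0, ..., phi_1, phi_0) rebuilt
    # from their independent half (phi_0, ..., phi_{d/2})
    phases = np.concatenate([half_phases, half_phases[-2::-1]])
    W = expi(X, np.arccos(x))
    M = expi(Z, phases[0])
    for phi in phases[1:]:
        M = M @ W @ expi(Z, phi)
    return M[0, 0].real

def cost(half_phases, xs, target):
    # Eq. (11)
    return np.mean([(g(x, half_phases) - t) ** 2 for x, t in zip(xs, target)])

d = 2                                           # toy example: target F(x) = T_2(x)
n_ch = d // 2 + 1
xs = np.cos(np.pi * (2 * np.arange(1, n_ch + 1) - 1) / (4 * n_ch))
target = 2 * xs**2 - 1
phi0 = np.zeros(n_ch); phi0[0] = np.pi / 4      # symmetric half of the guess in Eq. (12)
res = minimize(cost, phi0, args=(xs, target), method="BFGS")
print(res.x, res.fun)                           # res.fun is ~0 when the optimization succeeds
```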
To summarize, using QETU to prepare the ground state of a target Hamiltonian \(H^{\text{phys}}\) can be done according to the following steps:
1. Construct \(H=c_{1}H^{\text{phys}}+c_{2}\) such that the spectrum of \(H\) is in \([\eta,\pi-\eta]\) for some \(\eta>0\),
2. Determine \(\mu\) and \(\Delta\),
3. Solve for the Chebyshev approximation \(F(x)\),
4. Solve for the phase factors \(\Phi^{*}\),
5. Implement the circuit in Fig. 1.
In the next section, we describe how the constraint that the spectrum of \(H\) should be in the range \([\eta,\pi-\eta]\) can be relaxed.
#### iii.1.2 Modified ground state preparation
In the previous section, we reviewed the original QETU algorithm for ground state preparation in which the Hamiltonian spectrum was assumed to be in the range \([\eta,\pi-\eta]\). This assumption was necessary because the function \(\cos(x)\) is monotonic in the range \(x\in[0,\pi]\). While this is true, QETU actually returns \(F(\cos(x/2))\), with \(\cos(x/2)\) being monotonic in the range \(x\in[0,2\pi]\). This observation can be leveraged to increase the allowed range of the spectrum of \(H\). This is useful because a larger range leads to a larger energy gap used in the QETU algorithm, which reduces the overall cost of the simulation. For the modified algorithm, we introduce the variable parameter \(\tau\) to characterize the increased spectrum range. This adjustment to the original QETU algorithm can be viewed from two perspectives: one can either continue to use \(\text{e}^{-iH}\) as a building block and change the spectrum of \(H\) to be in the range \([\eta,\tau(\pi-\eta)]\), or, equivalently, consider \(\text{e}^{-i\tau H}\) as a building block with the spectrum of \(H\) in the original range \([\eta/\tau,\pi-\eta]\). We choose the latter perspective, and in what follows derive the largest value of \(\tau\) one can use while still guaranteeing isolation of the ground state. We conclude with a discussion of how using \(\tau\neq 1\) can reduce the cost in the scenario when one implements \(\text{e}^{-i\tau H}\) using product formulas.
We now discuss the modifications required when using \(\tau\neq 1\). First, while the initial algorithm in Ref. [88] proposed using the same value of \(\eta\) when constructing the shifted sign function and when shifting the spectrum of the Hamiltonian, it is in principle possible to use different values. We continue to use \(\eta\) to denote the value used for shifting the spectrum of \(H\) to be in the range \([\eta,\pi-\eta]\), and introduce \(\eta_{P_{<\mu}}\) to denote the value used when constructing the shifted error function. Note that, in order to avoid the scenario where the ground state is filtered out and excited states are not, one must ensure \(\eta_{P_{<\mu}}\leq\eta\). The parameters of the shifted sign function for general \(\tau\) are then given by
\[\sigma_{\pm}(\tau) =\cos\left(\tau\frac{\mu\mp\Delta/2}{2}\right), \tag{13}\] \[\sigma_{\min}(\tau) =\cos\left(\tau\frac{\pi-\eta_{P_{<\mu}}}{2}\right), \tag{14}\] \[\sigma_{\max}(\tau) =\cos\left(\tau\frac{\eta_{P_{<\mu}}}{2}\right). \tag{15}\]
If we implement \(F(x)\) with error \(\epsilon\), then on the range \(|x|\leq 1\) we would have
\[F(x)=\begin{cases}\epsilon&\sigma_{\rm min}\leq x\leq\sigma_{-}\\ \epsilon&-\sigma_{-}\leq x\leq-\sigma_{\rm min}\\ c-\epsilon&\sigma_{+}\leq x\leq\sigma_{\rm max}\\ c-\epsilon&-\sigma_{\rm max}\leq x\leq-\sigma_{+}\,,\end{cases} \tag{16}\]
where we have used the fact that \(F(x)\) is constructed using even polynomials. Note that while there are no theoretical issues with using \(\tau<1\), doing so shrinks the energy gap and is not beneficial in the context of exact implementations of \(U\). We are now in a position to discuss the two possible pitfalls that can occur when using \(\tau\neq 1\) and how to overcome them.
The first caveat is that, for \(\tau>1\), it is possible that an excited state energy falls into the region \(-\sigma_{\rm min}\leq x\leq\sigma_{\rm min}\) where \(F(x)\) can take on values larger than \(\epsilon\). This can be avoided by implementing the step function with \(\eta_{P_{<\mu}}=0\), i.e., set \(\sigma_{\rm min}=0\). Our numerical studies indicate the quality of the approximation is largely independent of \(\eta_{P_{<\mu}}\), with the choice \(\eta_{P_{<\mu}}=0\) resulting in close to the smallest error in the range \(0\leq\eta_{P_{<\mu}}\leq\eta\).
The second caveat is that one must ensure that higher excited states fall in regions where they are filtered out by the approximate step function \(F(x)\). Consider first using \(\tau=1\). Taking the cosine transform into account, the locations of the shifted energies \(E_{i}\) are \(\cos(E_{i}/2)\), where \(E_{i}\in[\eta,\pi-\eta]\). Because \(\cos(x)\) is monotonically decreasing on \(x\in[0,\pi]\), the cosine transformed energies get successively closer to zero for larger energies, with \(\cos(E_{\rm max}/2)=\cos((\pi-\eta)/2)\) which is close to zero for small \(\eta\). If one instead uses \(\tau>1\), the transformed energies are \(\cos(\tau E_{i}/2)\). Taking advantage of the fact that the step function is even, we see that as long as \(\cos(\tau E_{i}/2)>-\sigma_{-}(\tau)\), one still guarantees isolation of the ground state. This leads to a maximum value
\[\tau_{\rm max}=\frac{2\pi}{\pi-\eta+(\mu+\Delta/2)}\,. \tag{17}\]
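Using the parameters of the harmonic-oscillator example discussed next (\(\eta=0.05\), \(\mu=0.233\), \(\Delta=0.244\)), Eq. (17) reproduces the value of \(\tau_{\rm max}\) quoted below:

```python
import numpy as np

eta, mu, Delta = 0.05, 0.233, 0.244
tau_max = 2 * np.pi / (np.pi - eta + (mu + Delta / 2))   # Eq. (17)
print(tau_max)                                           # ~1.823
```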
Figure 5 shows the fidelity of the prepared ground state of a simple harmonic oscillator as a function of \(\tau\). The Hamiltonian is given by
\[\widehat{H}=\frac{g^{2}}{2}\hat{p}^{2}+\frac{1}{2g^{2}}\hat{x}^{2}\,, \tag{18}\]
and is represented using \(n_{q}\) qubits. Working in the digitized eigenbasis of the position operator, we choose to sample its eigenvalues as
\[x_{j}=-x_{\rm max}+(\delta x)j,\quad j=0,1,\ldots,2^{n_{q}}-1. \tag{19}\]
where \(\delta x=2x_{\rm max}/(2^{n_{q}}-1)\). Using the fact that \([\hat{x},\hat{p}]=i\), the momentum operator is implemented as
\[p_{j}^{(\hat{p})} =-p_{\rm max}+(\delta p)j,\quad j=0,1,\ldots,2^{n_{q}}-1, \tag{20}\] \[\hat{p}^{(\hat{x})} =\text{FT}^{\dagger}\hat{p}^{(\hat{p})}\text{FT}, \tag{21}\]
where \(p_{\rm max}=\pi/(\delta x)\), \(\delta p=2\pi/(2^{n_{q}}\delta x)\), and FT is the discrete Fourier transformation matrix. The superscripts \((\hat{p})\) and \((\hat{x})\) indicate that the momentum operator is written in the momentum and position basis, respectively. It will be useful for the circuit construction discussion to review the cost of exponentiation of \(\hat{x}\) and \(\hat{p}\). Because \(\hat{x}\) and \(\hat{p}^{(\hat{p})}\) are diagonal matrices with evenly spaced eigenvalues, \(\mathrm{e}^{i\alpha_{x}\hat{x}}\) and \(\mathrm{e}^{i\alpha_{p}\hat{p}^{(\hat{p})}}\) (for arbitrary real \(\alpha_{x}\), \(\alpha_{p}\)) can be implemented using zero CNOT gates and \(n_{q}\) rotation gates [109]. One can then use the efficient quantum Fourier transform circuit to construct \(\mathrm{e}^{i\alpha_{p}\hat{p}^{(\hat{x})}}=\mathrm{FT}^{\dagger}\mathrm{e}^{i\alpha_{p}\hat{p}^{(\hat{p})}}\mathrm{FT}\), with each Fourier transform requiring \(\mathcal{O}(n_{q}^{2})\) gates [110].
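The digitized operators of Eqs. (19)-(21) are easily assembled classically, which is useful for cross-checking small systems. The sketch below is an illustration only: the value \(x_{\rm max}=4\) is an arbitrary choice (the grid extent is not specified in the text), and the FFT-frequency ordering simply rearranges the same momenta \(-p_{\rm max}+j\,\delta p\).

```python
import numpy as np

n_q, g = 3, 1.0
N = 2 ** n_q
x_max = 4.0                                   # arbitrary illustrative grid extent
dx = 2 * x_max / (N - 1)
x = -x_max + dx * np.arange(N)                # Eq. (19)

# Momentum operator via the discrete Fourier transform, Eqs. (20)-(21).
FT = np.fft.fft(np.eye(N), norm="ortho")
p_diag = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # same momenta as Eq. (20), FFT-ordered
p_op = FT.conj().T @ np.diag(p_diag) @ FT

H = 0.5 * g**2 * (p_op @ p_op) + 0.5 * np.diag(x**2) / g**2   # Eq. (18)
print(np.linalg.eigvalsh(H)[:3])   # approaches 0.5, 1.5, 2.5 as the digitization is refined
```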
The results in Fig. 5 are for a three qubit system with \(g=1\), \(\eta_{P_{<\mu}}=0\), \(\eta=0.05\), \(\mu=0.233\), and \(\Delta=0.244\). The maximum value of \(\tau\) for these parameters is \(\tau_{\rm max}=1.823\). For various degree polynomials, we see that the fidelity in general improves for increasing \(\tau\) up to \(\tau_{\rm max}\), where increasing further leads to an increase in error. Relative to using \(\tau=1\), using \(\tau=\tau_{\rm max}\) leads to a general improvement in precision by a factor of \(\mathcal{O}(\exp(d\Delta(\tau_{\rm max}-1)))\).
We now discuss how using \(\tau\neq 1\) can improve simulations when approximating \(\text{e}^{-i\tau H}\) using product formulas. We denote by \(N_{\rm steps}\) the number of Trotter steps per time evolution circuit, so that the building block of the QETU circuit is \(\left(\text{e}^{-i\tau H/N_{\rm steps}}\right)^{N_{\rm steps}}\). Each step is approximated using a first-order Trotter formula. If
Figure 5: Error of the state prepared using QETU as a function of \(\tau\) for a simple harmonic oscillator system. The parameters used are \(g=1\), \(\eta_{P_{<\mu}}=0\), \(\eta=0.05\), \(\mu=0.233\), and \(\Delta=0.244\). Different colored/shaped points indicated different degree polynomial approximations to the shifted error function. The vertical black line indicates the maximum value of \(\tau\) that still guarantees isolation of the ground state, which, for this choice of parameters, is \(\tau_{\rm max}=1.823\). Using \(\tau=\tau_{\rm max}\) leads to a significant reduction in the error compared to using \(\tau=1\), while using values of \(\tau>\tau_{\rm max}\) leads to significant excited state contamination in the prepared state.
\(H_{x}=\hat{x}^{2}/(2g^{2})\) and \(H_{p}=g^{2}\mathrm{FT}^{\dagger}(\hat{p}^{(\hat{p})})^{2}\mathrm{FT}/2\), then
\[\begin{split}\mathrm{e}^{-i\delta\tau H}&\approx \mathrm{e}^{-i\delta\tau H_{x}}\mathrm{e}^{-i\delta\tau H_{p}}\\ &=\mathrm{e}^{-i\delta\tau H_{x}}\mathrm{FT}^{\dagger}\mathrm{e}^ {-i\delta\tau H_{p}^{(\hat{p})}}\mathrm{FT},\end{split} \tag{22}\]
where \(\delta\tau=\tau/N_{\mathrm{steps}}\). Recall that \(d\) denotes the total number of calls to the time evolution circuit (see Fig. 1), which coincides with the degree \(d\) of the polynomial. The total number of calls to the elementary time evolution circuit is then \(N_{\mathrm{tot}}=N_{\mathrm{steps}}\times d\).
The error from approximating the step function and from a finite Trotter step size are denoted as \(\epsilon_{\mathrm{QETU}}\) and \(\epsilon_{\mathrm{Trotter}}\), respectively; the parameters are defined as before, such that the prepared ground state \(|\widetilde{\psi}_{0}\rangle\) has overlap with the exact ground state given by \(|\langle\widetilde{\psi}_{0}|\psi_{0}\rangle|=1-\epsilon_{\mathrm{QETU}}- \epsilon_{\mathrm{Trotter}}\). For concreteness, we assume the errors take the forms
\[\begin{split}\epsilon_{\mathrm{QETU}}&=a\mathrm{e}^ {-b(\tau\Delta)d}\\ \epsilon_{\mathrm{Trotter}}&=c(\tau/N_{\mathrm{steps} })^{p},\end{split} \tag{23}\]
where the first line is again obtained by inverting \(d=\widetilde{\mathcal{O}}(\Delta^{-1}\log(1/\epsilon_{\mathrm{QETU}}))\), and the second line is the standard form for the error when using a \(p^{\mathrm{th}}\) order Trotter formula. The parameters \(a,b\) and \(c\) are constants. These expressions are expected to be correct to leading order. Replacing \(d=N_{\mathrm{tot}}/N_{\mathrm{steps}}\), our total error is
\[\begin{split}\epsilon&=a\mathrm{e}^{-b(\tau\Delta)N _{\mathrm{tot}}/N_{\mathrm{steps}}}+c(\tau/N_{\mathrm{steps}})^{p},\\ &=a\mathrm{e}^{-b\Delta N_{\mathrm{tot}}\delta\tau}+c(\delta\tau) ^{p}.\end{split} \tag{24}\]
Notice that the total error \(\epsilon\) only depends on \(N_{\mathrm{tot}}\) and \(\delta\tau\). Solving for \(N_{\mathrm{tot}}\) gives
\[N_{\mathrm{tot}}=\frac{1}{b\Delta\delta\tau}\log\left(\frac{a}{\epsilon-c( \delta\tau)^{p}}\right). \tag{25}\]
With this, we can now ask the question of what value of \(\delta\tau\) minimizes the cost \(N_{\mathrm{tot}}\) for some fixed error threshold \(\epsilon\). Before doing so, we emphasize the fact that \(N_{\mathrm{tot}}\) depends only on \(\delta\tau\), and not the individual values for \(N_{\mathrm{steps}}\) and \(\tau\). One might expect that reducing \(\tau\) would decrease the overall cost by reducing the number of Trotter steps per time evolution circuit \(N_{\mathrm{steps}}\) for the same \(\delta\tau\). However, reducing \(\tau\) also shrinks the gap, and this increase in cost exactly cancels out the savings from decreasing \(N_{\mathrm{steps}}\).
To solve for \(\delta\tau^{*}\) that minimizes \(N_{\mathrm{tot}}\), we first study the constraints on our parameters from the form of Eq. (25). The parameter \(N_{\mathrm{tot}}\) is a positive real integer. Because the argument of the logarithm must be positive, we find \(\delta\tau<(\epsilon/c)^{(1/p)}\). In words, one must choose a value of \(\delta\tau\) such that the Trotter error is below the total target error \(\epsilon\). Furthermore, notice that the logarithm can return a negative result. A negative \(N_{\mathrm{tot}}\) implies that one can achieve a precision of \(\epsilon\) by setting \(N_{\mathrm{tot}}=1\) and decreasing \(\delta\tau\) until the target precision is achieved.
In App. A, a perturbative solution for \(\delta\tau^{*}\) is presented. Setting \(\frac{\mathrm{d}N_{\mathrm{tot}}}{\mathrm{d}\delta\tau}\) to zero and expanding the log to lowest order, we find \((\delta\tau^{*})^{p}\approx\frac{\epsilon}{c}(1-p/(\log\left(\frac{a}{\epsilon}\right)+p))\). Because \(a>\epsilon\) in general, the lowest order result for \(\delta\tau^{*}\) is slightly below the maximum value of \((\epsilon/c)^{1/p}\). From this we learn that, for a given choice of \(N_{\mathrm{tot}}\), one should choose a time-step \(\delta\tau\) such that most of the error comes from the Trotter error. This choice can be understood intuitively by comparing the rate of convergence of the two sources of error. Because the Trotter error converges as some power of \(\delta\tau\), while the QETU error converges exponentially in \(N_{\mathrm{tot}}\), it is generally more cost effective for QETU to produce a smaller error than the Trotter error. Therefore, having the Trotter error be the majority of the error results in the smallest value for \(N_{\mathrm{tot}}\).
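The trade-off can be visualized with a few lines of NumPy: the sketch below scans Eq. (25) over \(\delta\tau\) for illustrative error-model constants (here simply \(a=b=c=p=1\), which would have to be estimated for a given system) and a gap comparable to the U(1) example below, and confirms that the optimum sits just below the largest allowed step size.

```python
import numpy as np

a, b, c, p = 1.0, 1.0, 1.0, 1.0   # placeholder constants of Eq. (23)
Delta, eps = 0.133, 1e-3          # gap as in the U(1) example below, target error

dtau_max = (eps / c) ** (1 / p)   # the Trotter error alone must stay below eps
dtau = np.linspace(1e-4 * dtau_max, 0.999 * dtau_max, 2000)
N_tot = np.log(a / (eps - c * dtau**p)) / (b * Delta * dtau)   # Eq. (25)
i = np.argmin(N_tot)
print(dtau[i] / dtau_max, int(np.ceil(N_tot[i])))
# The minimizing step is ~0.87 * dtau_max here, i.e., most of the error budget
# is spent on the Trotter error, in line with the perturbative estimate.
```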
Once the value of \(\delta\tau^{*}\) has been chosen, one must choose values for \(N_{\mathrm{steps}}\) and \(\tau\). One choice would be to always set \(N_{\mathrm{steps}}=1\) and \(\tau=\delta\tau^{*}\). Another choice would be to choose \(\tau\) to be the largest natural number multiple of \(\delta\tau^{*}\) that is less than \(\tau_{\mathrm{max}}\). This choice leads to the smallest possible degree polynomial that achieves the target error \(\epsilon\), therefore minimizing the classical cost of determining the angles \(\Phi\).
Figure 6 shows plots demonstrating the improvements one can achieve by choosing an optimal value of \(\delta\tau\) from two perspectives, namely fixed computational cost and fixed target precision. The system studied is a compact U(1) lattice gauge theory in the weaved basis for a \(2\times 2\) lattice with \(n_{q}=2\) qubits per site using gauge coupling \(g=1\). The digitization scheme used is reviewed in Sec. III. The relevant parameters for state preparation are \(\eta=0.05\), \(\eta_{P_{<\mu}}=0\), \(\mu=0.1498\) and \(\Delta=0.1330\). The time evolution operator was implemented using a first order Trotter method with \(N_{\mathrm{steps}}=1\) Trotter step, and therefore \(\tau=\delta\tau\). The top plot in Fig. 6 shows \(\epsilon\) as a function of \(\delta\tau\) for different degree polynomial approximations. We see that for a fixed computational cost, significant precision improvements can be achieved by using an optimal choice for \(\delta\tau\). The bottom plot in Fig. 6 studies the total computational cost needed to achieve a fixed target precision as a function of \(\delta\tau\). The cost is given in terms of the total number of calls to the circuit that implements a single Trotter time-step. For all choices of fixed target precision, choosing an optimal value of \(\delta\tau\) results in significant cost reductions.
We conclude this section with a discussion of additional improvements which can be gained with the aid of _zero error extrapolation_ in cases when \(\mathrm{e}^{-i\tau H}\) is implemented approximately. For implementations based on product formulae, this would require running simulations at multiple values of \(\epsilon\) and \(\tau\) and extrapolating to \(\epsilon\to 0\) and \(\delta\tau\to 0\). Note that if the optimal value of \(\delta\tau^{*}\) is chosen, the errors from QETU and from Trotter are comparable in magnitude. Therefore, extrapolating to \(\epsilon\to 0\) and \(\delta\tau\to 0\) would involve a fitting function of the form similar to that given in Eq. (24). While this is in principle possible, such a functional form is relatively complicated and undesirable. To simplify the extrapolation, one could
instead work in a regime where the QETU error is negligible compared to the Trotter error. This would allow one to use a fitting function of the simple form \(c(\delta\tau)^{p}\). Because of QETU's fast convergence, it is possible that such a slight increase in computational cost could be worth the trade-off of better control over systematic errors.
### Wavepacket construction via QETU
While the original application of the QETU algorithm in Ref. [88] was ground state preparation, QETU is a flexible algorithm that can construct general matrix functions of any Hermitian operator. In this section, we provide a procedure for constructing Gaussian wavepackets in the position basis of a quantum mechanical system. We first describe the procedure at a high level. From there, we discuss the procedure for constructing an approximation to the Gaussian filter operator \(\mathrm{e}^{-\hat{x}^{2}/(2\sigma_{x}^{2})}\) using QETU. We first discuss how the naive application of QETU to this problem leads to the error decreasing only polynomially with the number of Chebyshev polynomials. We then discuss a number of modifications to the QETU procedure that achieve an exponential scaling in the error for any desired value of the width. Using the improved methodology, we present a method that allows one to prepare the Gaussian state to high precision while avoiding the costly implementation of LCU, and find that this method outperforms existing methods for preparing Gaussian states for values of \(n_{q}>2-3\). Throughout this section, we will only discuss qualitative results; detailed scaling of the precision, as well as gate cost comparisons to direct state preparation, are presented in Sec. IV.2.
The state we wish to prepare is given by
\[\ket{\psi}=\frac{1}{\sqrt{\mathcal{N}}}\sum_{j=0}^{2^{n_{q}}-1}\mathrm{e}^{-\frac{1}{2}\left(\frac{x_{j}-x_{0}}{\sigma_{x}}\right)^{2}}\mathrm{e}^{ip_{0}x_{j}}\ket{x_{j}}, \tag{26}\]
where \(n_{q}\) is the number of qubits used to represent the state, \(\hat{x}\ket{x_{j}}=x_{j}\ket{x_{j}}\), \(\mathcal{N}\) is the normalization factor, \(x_{0}\) is the central value of the Gaussian, \(p_{0}\) is the expectation value of the momentum, and \(\sigma_{x}\) is the width of the wavepacket in position space. To take advantage of the relative simplicity of the \(\hat{x}\) and \(\hat{p}^{(\hat{x})}\) operators, we break the construction of the wavepacket in Eq. (26) into three steps. The first, and the most costly step, is constructing a Gaussian with width \(\sigma_{x}\) centered at \(x=0\) using QETU. This will be done by applying an approximate implementation of the Gaussian filter operator \(\mathrm{e}^{-\frac{1}{2}\hat{x}^{2}/\sigma_{x}^{2}}\) to the state that is an equal superposition of all position eigenstates, i.e., \(\ket{\psi_{\mathrm{init}}}=\frac{1}{\sqrt{2^{n_{q}}}}\sum_{j=0}^{2^{n_{q}}-1}\ket{x_{j}}\). The wavepacket is then shifted in position space by \(x_{0}\) and in momentum space by \(p_{0}\) with the aid of the operators \(\mathrm{e}^{-ix_{0}\hat{p}^{(\hat{x})}}\) and \(\mathrm{e}^{ip_{0}\hat{x}}\), respectively.
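These three steps can be mimicked classically with exact operators, which provides a useful cross-check of the circuit output. The sketch below is illustrative only (arbitrary grid and wavepacket parameters), with the phases chosen to match Eq. (26).

```python
import numpy as np

n_q, x_max = 5, 6.0
sigma_x, x0, p0 = 1.0, 1.5, 0.8               # arbitrary illustrative wavepacket parameters
N = 2 ** n_q
dx = 2 * x_max / (N - 1)
x = -x_max + dx * np.arange(N)

# Step 1: Gaussian filter applied to the uniform superposition.
psi = np.exp(-x**2 / (2 * sigma_x**2)) * np.ones(N) / np.sqrt(N)

# Step 2: shift in position by x0, i.e., exp(-i x0 p) applied in the Fourier basis.
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi = np.fft.ifft(np.exp(-1j * x0 * k) * np.fft.fft(psi))

# Step 3: boost in momentum by p0 via the diagonal phase exp(i p0 x).
psi = np.exp(1j * p0 * x) * psi
psi /= np.linalg.norm(psi)
print(np.vdot(psi, x * psi).real)             # ~x0, up to periodic-wrapping effects
```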
To construct the approximate Gaussian filter operator, which is a matrix function of the \(\hat{x}\) operator, we will use QETU. The building block used in the QETU circuit is \(\mathrm{e}^{-i\tau\hat{x}_{\mathrm{sh}}}\), where \(\hat{x}_{\mathrm{sh}}\) is the shifted/scaled position whose spectrum is in the range \([\eta,\pi-\eta]\). Because \(\hat{x}_{\mathrm{sh}}\) is also a diagonal matrix with evenly spaced eigenvalues, as previously discussed, \(\mathrm{e}^{-i\tau\hat{x}_{\mathrm{sh}}}\) can be implemented exactly, using zero CNOT gates and \(n_{q}\) rotation gates. The controlled version can therefore be implemented using \(n_{q}\) CNOT and \(n_{q}\) rotation gates, leading to an asymptotic scaling linear in the number of qubits used to represent
Figure 6: Both plots show results for a compact U(1) lattice gauge theory in the weaved basis for \(N_{p}=3\) and \(n_{q}=2\) with \(g=1\). The time evolution circuit was implemented using a single Trotter step with \(\delta\tau=\tau\). The vertical black line indicates the value of \(\tau_{\mathrm{max}}\) for this system, with a value \(\tau_{\mathrm{max}}=1.90\). Top: Fidelity of prepared ground state as a function of \(\delta\tau\). Different colored lines indicate different degree polynomial approximations to the shifted error function. This figure demonstrates that choosing an optimal value of \(\delta\tau\) results in significant precision improvements for a given computational cost. Bottom: Number of total calls to the circuit for a single first order Trotter step as a function of \(\delta\tau\). Each point is the smallest value of \(N_{\mathrm{tot}}\) that achieves a given error \(\epsilon\), with different \(\epsilon\) indicated by different colors/markers. The figure demonstrates that choosing an optimal value of \(\delta\tau\) results in significant cost reductions for a fixed target precision.
the operator, albeit with a large overall prefactor.
The operator \(\hat{x}_{\rm sh}\) is given by
\[\hat{x}_{\rm sh}=c_{1}\hat{x}+c_{2}, \tag{27}\]
where \(c_{1}\) and \(c_{2}\) are the same as in Eq. (4), upon replacing \(E_{0}^{\rm phys}\to\min(\hat{x})\) and \(E_{\rm max}^{\rm phys}\to\max(\hat{x})\). Note that because the spectrum of \(\hat{x}\) is known, one does not have to use upper limits as is generally the case with state preparation where the spectrum of the Hamiltonian is not known _a priori_ (this fact will also allow for other improvements, and will be discussed in more detail later in this section).
Let \({\sf f}(\hat{x})=c\,{\rm e}^{-\frac{1}{2}\hat{x}^{2}/\sigma_{x}^{2}}\) denote the exact Gaussian filter operator we wish to approximate using QETU. Again, the parameter \(c\) is chosen to be slightly less than 1 to allow overshooting of the Chebyshev approximation. In order to produce a Gaussian centered at \(x_{0}=0\) with width \(\sigma_{x}\) using \({\rm e}^{-i\tau\hat{x}_{\rm sh}}\) as a building block, one must approximate the function
\[{\sf F}(x)=c\,\exp\!\left(-\frac{1}{2\sigma_{\rm QETU}^{2}}\left(\frac{2}{ \tau}\arccos(x)-x_{0}^{\rm QETU}\right)^{2}\right)\!, \tag{28}\]
where \(x_{0}^{\rm QETU}=c_{1}x_{0}+c_{2}\) and \(\sigma_{\rm QETU}=c_{1}\sigma_{x}\). Note that because the function in Eq. (28) does not have definite parity, one must use QETU to first prepare approximations to the even (\({\sf F}_{+}(x)\)) and odd (\({\sf F}_{-}(x)\)) parts separately, and then add them using, e.g., linear combination of unitaries (LCU) [111]. While performing LCU does not change the overall scaling with \(n_{q}\), it does lead to a larger constant factor in the asymptotic cost of preparing the Gaussian state.
We now describe the procedure to determine the Chebyshev approximation of \({\sf F}_{+}(x)\). The procedure for \({\sf F}_{-}(x)\) is analogous, except that one uses odd Chebyshev polynomials. Let \(F_{+}(x)=\sum_{k=0}^{d/2}c_{2k}T_{2k}(x)\) denote the Chebyshev approximation to \({\sf F}_{+}(x)\). As before, we define the coefficient matrix \(A_{jk}=T_{2k}(x_{j})\), such that \(F_{+}(x_{j})=\sum_{k}c_{2k}A_{jk}\). The coefficients \(c_{k}\) are then determined by solving the following convex optimization problem:
\[\min_{\{c_{k}\}}\,\max_{x_{j}\in[\sigma_{\rm min},\sigma_{\rm max}]}|F_{+}(x_ {j})-{\sf F}_{+}(x_{j})|\,, \tag{29}\]
where the functions are sampled using the roots of Chebyshev polynomials \(x_{j}=-\cos\left(\frac{j\pi}{M-1}\right)\) for some large value of \(M\). The main difference between this optimization problem and the one in Eq. (9) is that the error in Eq. (29) is minimized over the entire range of \(x\) values in \([\sigma_{\rm min},\sigma_{\rm max}]\).
We now discuss how the error is expected to scale as the degree of the Chebyshev approximation is increased. Unlike the case of constructing the shifted error function for ground state preparation, the functions \({\sf F}_{\pm}(x)\) are not infinitely differentiable on the interval \(x\in[-1,1]\) due to the presence of the \(\arccos(x)\). It is well known that Chebyshev approximations of functions that are not infinitely differentiable converge polynomially to the true function with the typical rate \((1/n_{\rm Ch})^{m}\), where \(m\) is the number of times the function is differentiable on \([-1,1]\), and \(n_{\rm Ch}\) is the number of Chebyshev polynomials used in the approximation [104]. We therefore expect the convergence of our Chebyshev approximations to be only polynomial.
This observation, however, leads to a natural method for avoiding the polynomial scaling. As one increases the value of \(\eta\), the interval \(x_{j}\in[\sigma_{\rm min},\sigma_{\rm max}]\) will move farther from the non-differentiable points at \(x=\pm 1\), and improve the precision of approximation for a given degree polynomial. However, as \(\eta\) is increased, the functions \({\sf F}_{\pm}(x)\) become more sharply peaked, and will eventually require more Chebyshev polynomials to achieve the same precision. We therefore expect that, for a given degree approximation, there will be a value of \(\eta\) that results in the smallest error. We find that this procedure of varying \(\eta\) results in the error decreasing more favorably with the degree of the polynomial. This scaling can be made exponential, however, by also varying the \(\tau\) parameter. By varying \(\tau\), the shape of the exact functions \({\sf F}_{\pm}\) changes, which can improve the quality of the approximation. Note that, unlike the case of constructing the ground state projector, there are no theoretical issues with using a value of \(\tau>\tau_{\rm max}\). This is due to the fact that we know exactly the eigenvalues of the operator we sample. The optimization problem that determines the Chebyshev coefficients using this modification is
\[\min_{\{c_{k},\eta,\tau\}}\,\max_{x_{j}\in[\sigma_{\rm min},\sigma_{\rm max}]} |F_{+}(x_{j})-{\sf F}_{+}(x_{j})|\,, \tag{30}\]
where the parameters \(\sigma_{\rm min}\) and \(\sigma_{\rm max}\) are functions of \(\eta\) and \(\tau\). Note that the optimization problem in this form is no longer a convex optimization problem. The parameters \(\eta\) and \(\tau\) are found by passing the convex optimization problem in Eq. (29) to a numerical minimization procedure that solves for \(\eta\) and \(\tau\).
While varying \(\eta\) and \(\tau\) already results in the error decreasing exponentially with the degree of the polynomial, we can further improve the rate of convergence, in particular for small values of \(n_{q}\). Because we know the spectrum of the operator \(\hat{x}_{\rm sh}\) exactly, we only need the Chebyshev expansion to approximate \({\sf F}(x)\) at those points, not the entire range \(x\in[\sigma_{\rm min},\sigma_{\rm max}]\). Taking into account the cosine transformation, the values of \(x\) we need to faithfully approximate \({\sf F}(x)\) are \(\tilde{x}_{j}=\cos(\tau x_{\rm sh,j}/2)\), where \(x_{\rm sh,j}\) is the \(j^{\rm th}\) eigenvalue of the \(\hat{x}_{\rm sh}\) operator. If we denote the set of all \(\tilde{x}_{j}\) values as \(\tilde{\chi}\), then the Chebyshev coefficients are found by solving the modified optimization problem
\[\min_{\{c_{k},\eta,\tau\}}\,\max_{\tilde{x}_{j}\in\tilde{\chi}}|F_{+}(\tilde{x}_{j})-{\sf F}_{+}(\tilde{x}_{j})|\,. \tag{31}\]
We find that, in general, this method has slightly better performance compared to sampling all \(x\) values, but for certain values of \(\sigma_{x}/x_{\rm max}\) it can improve the rate of
convergence significantly. Additionally, because we only include \(2^{n_{q}}\) of the \(\tilde{x}_{j}\) values in the optimization, the error will be zero when the degree of the polynomial is equal to the number of points. Furthermore, by increasing \(\tau\), one can sample \(\tilde{x}_{j}\) at negative values. This can be used to exploit the parity of the Chebyshev expansions, reducing the effective number of points we need to approximate \(\mathsf{F}(\tilde{x}_{j})\). One important consideration when using this method is that, as one varies \(\eta\) and \(\tau\), care must be taken to ensure that \(|F_{\pm}(x)|\leq 1\) for all \(x\in[-1,1]\), and not just for the \(\tilde{x}_{j}\). If this condition is not satisfied, there are no possible values of phases \(\Phi\) that implement the desired function \(F(x)\).
Although we have developed a procedure using QETU that can be used to implement Gaussian states with a cost \(\mathcal{O}(n_{q}\log(1/\epsilon))\), the cost of performing LCU to add the even and odd pieces introduces a large overall constant factor in the asymptotic cost. If one could modify the procedure to avoid using LCU, the gate count could be reduced by a factor of 10 or more. We now discuss how to prepare Gaussian states using only the even component, and eliminate the need for LCU altogether. The main idea is to choose \(\tau\) such that the function \(\mathsf{F}(x)\) becomes a purely even function. To start, note that our choice of digitizing the \(\hat{x}\) operator results in the parameter \(c_{2}=\pi/2\). This, combined with the fact that \(x_{0}=0\), leads to \(x_{0}^{\text{QETU}}=\pi/2\). With this, we see that setting \(\tau=2\) leads to \(\mathsf{F}(x)\) becoming
\[\mathsf{F}(x) =c\,\mathrm{e}^{-(\arccos(x)-\pi/2)^{2}/(2\sigma_{\text{QETU}}^{ 2})} \tag{32}\] \[=c\,\mathrm{e}^{-(\arcsin(x))^{2}/(2\sigma_{\text{QETU}}^{2})}\,,\]
where we have used the relation \(\arccos(x)-\pi/2=-\arcsin(x)\). Because \(\arcsin(x)\) has definite parity, \(\mathsf{F}(x)\) is an even function for \(\tau=2\). After setting \(\tau=2\), the parameter \(\eta\) and the Chebyshev coefficients are found by solving an optimization problem similar to that in Eq. (31), except that \(\tau\) is fixed to 2. We find that this method offers the best precision for a given fixed gate count cost, and outperforms existing methods for Gaussian state preparation for values of \(n_{q}>2-3\). More details of this comparison are given in Sec. IV.2.
We conclude by noting that, due to the simplicity of the building block \(\mathrm{e}^{-i\tau\hat{x}_{\text{sh}}}\), the number of controlled gates in the control-free version of QETU is the same as in the original version. We will therefore use the original version when showing numerical results in Sec. IV.2.
## III Lattice formulation of U(1) gauge theory
In this section, we review a formulation of a compact \(\mathrm{U}(1)\) gauge theory in 2 spatial dimensions with a Hilbert space that has been constrained to satisfy Gauss' law in the charge density \(\rho(x)=0\) sector. We also review the representation of the magnetic and electric operators introduced in Ref. [42], which can be used at all values of the gauge coupling. From there, we discuss the basis change introduced in Ref. [89], which is necessary to break the exponential volume scaling in the gate cost when performing time-evolution using Trotter methods. We conclude with a discussion of how the digitization of the theory in the weaved basis must be modified in order to maintain the efficiency of the representation [90]. Note that we provide only the details necessary to understand the numerical results presented in Sec. IV.1. Further details can be found in the original references.
We chose to apply the QETU algorithm to the compact \(\mathrm{U}(1)\) theory because it shares a number of important features with QCD, including the fact that, as the lattice spacing \(a\) goes to zero, the bare coupling \(g(a)\) also goes to zero. Additionally, the gauge field in non-Abelian gauge theories is necessarily compact, providing another motivation for detailed studies of the compact \(\mathrm{U}(1)\) theory.
The Hamiltonian considered in Ref. [42] is formulated in terms of electric rotor and magnetic plaquette operators, given by \(\hat{R}(x)\) and \(\hat{B}(x)\), respectively. These operators satisfy
\[\left[\hat{B}(x),\hat{R}(y)\right]=i\,\delta^{3}(x-y)\,. \tag{33}\]
In the charge density \(\rho(x)=0\) sector, the rotors are defined such that \(\vec{E}(x)=\vec{\nabla}\times R(x)\). In this way, electric Gauss's law is automatically satisfied.
The lattice version of the theory we consider introduces a periodic lattice of \(N_{x}\) and \(N_{y}\) evenly spaced lattice points in the \(\hat{x}\) and \(\hat{y}\) dimensions with a lattice spacing \(a\). The lattice version of the continuum Hamiltonian is defined in terms of operators, \(\hat{R}_{p}\) and \(\hat{B}_{p}\), with the index \(p\) denoting a specific plaquette in the lattice volume. The Hamiltonian can be written in terms of an electric and magnetic component as
\[\hat{H}=\hat{H}_{E}+\hat{H}_{B}\,. \tag{34}\]
After solving the constraint from magnetic Gauss' law, the fully gauge fixed Hamiltonian contains \(N_{p}\equiv N_{x}N_{y}-1\) independent plaquette operators. The fully gauge-fixed electric and compact magnetic Hamiltonians are given by
\[\hat{H}_{E}=\frac{g^{2}}{2a}\sum_{p=1}^{N_{p}}(\vec{\nabla}\times\hat{R}_{p})^ {2}\,, \tag{35}\]
and
\[\hat{H}_{B}=\frac{N_{p}+1}{a\,g^{2}}-\frac{1}{a\,g^{2}}\left[\sum_ {p=1}^{N_{p}}\cos\hat{B}_{p}+\cos\left(\sum_{p=1}^{N_{p}}\hat{B}_{p}\right) \right]\,. \tag{36}\]
The non-compact formulation of this theory can be found by making the replacements \(\cos\hat{B}_{p}\to 1-\frac{1}{2}\hat{B}_{p}^{2}\). The bases where the operators \(\hat{H}_{E}\) and \(\hat{H}_{B}\) are diagonal are referred to as the electric and magnetic basis. Furthermore, operators represented in the electric and magnetic basis will be denoted by superscripts \((e)\) and \((m)\), respectively.
We represent the operators \(\hat{R}_{p}\) and \(\hat{B}_{p}\) using the procedure given in Ref. [42], which involves carefully choosing the maximum value at which the \(\hat{B}_{p}\) operators are sampled, denoted by \(b_{\text{max}}\). The main idea for choosing \(b_{\text{max}}\) is that, in the magnetic basis, the low energy eigenstates of the theory have a typical width proportional to the gauge coupling \(g\). By choosing \(b_{\text{max}}\sim g\), one only samples the wavefunction where it has support, which results in an efficient representation for all values of \(g\). We first describe how the operators are sampled, and then explain how to choose the precise value of \(b_{\text{max}}\).
Each plaquette is represented using \(n_{q}\) qubits. Working in magnetic basis, the magnetic operators \(\hat{B}_{p}^{(m)}\) are diagonal and defined by
\[\hat{B}_{p}^{(m)}\big{|}b_{p,j}\big{>}=(-b_{\text{max},p}+j\,\delta b_{p})\, \big{|}b_{p,j}\big{>}\,, \tag{37}\]
where \(j=0,1,\ldots,2^{n_{q}}-1\) and \(\delta b_{p}=2b_{\text{max},p}/2^{n_{q}}\). Because \(\hat{B}_{p}\) and \(\hat{R}_{p}\) are conjugate operators, the rotor operator in the magnetic basis can be written as
\[\hat{R}_{p}^{(m)} =\text{FT}^{\dagger}\hat{R}_{p}^{(e)}\text{FT}, \tag{38}\] \[\hat{R}_{p}^{(e)}\big{|}r_{p,j}\big{>} =(-r_{\text{max},p}+j\,\delta r_{p})\,\big{|}r_{p,j}\big{>}\,,\]
where \(\text{FT}\) denotes the usual quantum Fourier transform and
\[r_{\text{max},p}=\frac{\pi 2^{n_{q}}}{2b_{\text{max},p}}\,,\qquad\delta r_{p}= \frac{\pi}{b_{\text{max},p}}\,. \tag{39}\]
With these definitions, one can implement Trotter time evolution of the Hamiltonian in Eq. (34) in a similar way as in Eq. (22).
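For a single plaquette, the digitized operators of Eqs. (37)-(39) can be written down directly; the sketch below uses an arbitrary illustrative \(b_{\text{max}}\) (in practice fixed by Eq. (40)) and checks that the rotor operator rotated to the magnetic basis is Hermitian with the evenly spaced eigenvalues of Eq. (38).

```python
import numpy as np

n_q = 3
N = 2 ** n_q
b_max = 1.5                                   # illustrative; Eq. (40) fixes it from g and the betas

db = 2 * b_max / N
b = -b_max + db * np.arange(N)                # eigenvalues of B_p in the magnetic basis, Eq. (37)
r_max = np.pi * N / (2 * b_max)               # Eq. (39)
dr = np.pi / b_max
r = -r_max + dr * np.arange(N)                # eigenvalues of R_p in the electric basis, Eq. (38)

FT = np.fft.fft(np.eye(N), norm="ortho")      # discrete Fourier transformation matrix
B_m = np.diag(b)                              # magnetic operator in the magnetic basis
R_m = FT.conj().T @ np.diag(r) @ FT           # rotor operator rotated to the magnetic basis

# Both Trotter factors can then be built from diagonal phases (conjugated by FT
# for the electric part), in the same spirit as Eq. (22).
print(np.allclose(R_m, R_m.conj().T))
print(np.allclose(np.linalg.eigvalsh(R_m), np.sort(r)))
```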
We now discuss the procedure for choosing \(b_{\text{max},p}\), which can in general be different for different plaquettes \(p\). In the compact formulation, it was shown that choosing \(b_{\text{max},p}\) according to
\[b_{\text{max},p}=\min\left(g\frac{2^{n_{q}}}{2}\sqrt{\frac{\beta_{R,p}}{\beta_ {B,p}}}\sqrt{\frac{\sqrt{8}\pi}{2^{n_{q}}}},\pi\right) \tag{40}\]
reproduces the low-lying spectrum of the theory to permille level precision while sampling the operators at only \(2^{n_{q}}=8\) points, i.e., with \(n_{q}=3\) qubits per plaquette. The variables \(\beta_{R,p}\) and \(\beta_{B,p}\) are found by matching the non-compact magnetic Hamiltonian to a Hamiltonian of the form
\[\widetilde{H}=\frac{g^{2}}{2}\beta_{R,p}^{2}\hat{R}_{p}^{2}+\frac{1}{2g^{2}} \beta_{B,p}^{2}\hat{B}_{p}^{2}\,, \tag{41}\]
and ignoring the cross-terms. Further details regarding the digitization of the \(\hat{R}_{p}\) and \(\hat{B}_{p}\) operators in this formulation can be found in Refs. [42; 90].
While this formulation is efficient in terms of the number of qubits required per site \(n_{q}\) to achieve a high precision in the low energy states, it was shown that performing time evolution using Trotter methods has a gate cost that scales exponentially with volume, i.e., the number of plaquettes \(N_{p}\)[89; 90]. Specifically, the exponential scaling was shown to be caused by the \(\cos\sum_{p}\hat{B}_{p}\) term in the magnetic Hamiltonian, which couples the entire lattice together. This exponential volume scaling can be broken, however, by performing a carefully chosen change of operator basis [89], which we now review.
The rotor and magnetic operators in this so-called weaved basis are given by
\[\hat{B}_{p}\rightarrow\mathcal{W}_{pp^{\prime}}\hat{B}_{p^{\prime}},\qquad\hat {R}_{p}\rightarrow\mathcal{W}_{pp^{\prime}}\hat{R}_{p^{\prime}}, \tag{42}\]
where \(\mathcal{W}\) is an orthogonal matrix of dimension \(N_{p}\times N_{p}\). For any value of \(N_{p}\), there exists an efficient classical algorithm for choosing \(\mathcal{W}\) that reduces the gate count scaling from exponential to polynomial in \(N_{p}\)[89]. Using this change of basis, the dominant contribution to the gate cost of a single Trotter step was shown to scale as \(\mathcal{O}(N_{p}^{n_{q}}+N_{p}(N_{p}/\log_{2}N_{p})^{n_{q}})\), which is polynomial in the volume, with the power of the polynomial determined by \(n_{q}\). This scaling arises because the number of terms appearing in a single cosine in the weaved basis scales as \(\mathcal{O}(\log_{2}N_{p})\). An example demonstrating the connectivity of the magnetic Hamiltonian operators for \(N_{p}=16\) is shown in Fig. 7.
One important assumption that went into choosing the large coupling limit of \(b_{\text{max},p}\) to be \(\pi\) in the original operator basis was that the coefficient of a magnetic field operator \(\hat{B}_{p}\) is equal to 1 anywhere it appears in the compact magnetic Hamiltonian. Because this is generally not true when working in the weaved basis, maintaining an efficient representation requires modifying the prescription for choosing \(b_{\text{max},p}\) in the large \(g\) limit [90]. To understand this, first notice that, in the weaved basis, some of
Figure 7: Visual representation of the connectivity of the cosine terms in the weaved magnetic Hamiltonian for \(N_{p}=16\). Operators appearing inside the same box also appear as a sum inside a single cosine term in \(\hat{H}_{B}\). Boxes with different line-styles or colors correspond to different cosine terms. The red solid square shows the reduced connectivity of the \(\cos\sum_{p}\hat{B}_{p}\) term. The blue dashed and dotted rectangles show the increased connectivity of the previously local \(\cos\hat{B}_{p}\) terms.
the \(\hat{B}_{p}\) operators will have coefficients smaller than one. Consequently, even if \(b_{\text{max},p}=\pi\), an operator \(\hat{B}_{p}\) inside a given cosine will not get sampled between the full range of \([-\pi,\pi]\). It was shown that this problem could be fixed by scaling the upper limit for each \(b_{\text{max},p}\) by the smallest coefficient of the operator \(\hat{B}_{p}\) anywhere it appears in the Hamiltonian. To demonstrate this procedure, we review the example for the \(N_{p}=3\) case given in Ref. [90]. The rotation matrix used is given by
\[\mathcal{W}=\frac{1}{\sqrt{6}}\begin{pmatrix}\sqrt{2}&-2&0\\ \sqrt{2}&1&-\sqrt{3}\\ \sqrt{2}&1&\sqrt{3}\end{pmatrix}\,, \tag{43}\]
which leads to the following weaved magnetic Hamiltonian
\[\hat{H}_{B}^{\text{w}} =\frac{N_{p}+1}{a\,g^{2}}-\frac{1}{a\,g^{2}}\Bigg{(}\cos\left[ \sqrt{3}\hat{B}_{1}\right]+\cos\left[\frac{\hat{B}_{1}-\sqrt{2}\hat{B}_{2}}{ \sqrt{3}}\right]\] \[\quad+\cos\left[\frac{\sqrt{2}\hat{B}_{1}+\hat{B}_{2}-\sqrt{3} \hat{B}_{3}}{\sqrt{6}}\right]\] \[\quad+\cos\left[\frac{\sqrt{2}\hat{B}_{1}+\hat{B}_{2}+\sqrt{3} \hat{B}_{3}}{\sqrt{6}}\right]\Bigg{)}\,. \tag{44}\]
In order to ensure that each \(\hat{B}_{p}\) operator gets sampled over the full range \([-\pi,\pi]\) in each of the cosine terms, one must enlarge the upper limit for each \(b_{\text{max},p}\); in this case, the upper limits for \(b_{\text{max},1},b_{\text{max},2}\), and \(b_{\text{max},3}\) are chosen as \(\sqrt{3}\pi,\sqrt{6}\pi\), and \(\sqrt{2}\pi\), respectively. It was shown in Ref. [90] that this procedure for scaling \(b_{\text{max},p}\) results in a precision of the low-lying spectrum similar to that of the original basis.
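To make this prescription concrete, the short NumPy sketch below (our own illustration, not code from Refs. [89; 90]) reconstructs the coefficient with which each weaved operator appears in the cosine arguments of Eq. (44) directly from the rotation matrix of Eq. (43), and rescales each \(b_{\text{max},p}\) accordingly:

```python
import numpy as np

# Orthogonal weaving matrix for N_p = 3, Eq. (43).
W = np.array([[np.sqrt(2), -2.0,          0.0],
              [np.sqrt(2),  1.0, -np.sqrt(3)],
              [np.sqrt(2),  1.0,  np.sqrt(3)]]) / np.sqrt(6)
assert np.allclose(W @ W.T, np.eye(3))  # W is orthogonal

# Coefficient vectors of the weaved operators inside each cosine argument:
# the three rows of W are the images of the single-plaquette cosines, and the
# column sums of W give the image of cos(B_1 + B_2 + B_3).
cosine_args = np.vstack([W, W.sum(axis=0)])

# b_max,p = pi divided by the smallest nonzero coefficient of the p-th weaved
# operator, so that every cosine argument still samples the full range [-pi, pi].
for p in range(3):
    coeffs = np.abs(cosine_args[:, p])
    c_min = coeffs[coeffs > 1e-12].min()
    print(f"b_max,{p + 1} = {np.pi / c_min:.4f}  (= pi / {c_min:.4f})")
```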
We conclude this section by discussing the expected behavior of \(\Delta\) with \(g\) for the compact U(1) gauge theory. As explained in Ref. [112], because the untruncated U(1) electric Hamiltonian is unbounded, as one approaches the continuum limit, the physical energy gap \(E_{1}^{\text{phys}}-E_{0}^{\text{phys}}\) diverges; the value \(a(E_{1}^{\text{phys}}-E_{0}^{\text{phys}})\) approaches a constant as \(a\to 0\). This, combined with the fact that the physical energy difference \(E_{\text{max}}^{\text{phys}}-E_{0}^{\text{phys}}\) scales as \(\sim 1/a\), implies that \(\Delta\) will approach a constant as \(a\to 0\). This scaling is qualitatively different from that of a gauge theory with a finite physical energy gap (such as QCD), which is discussed in detail in Sec. V.
## IV Numerical results
### U(1) gauge theory
In this section, we prepare the ground state of the previously described formulation of a compact U(1) lattice gauge theory using QETU. We study the algorithm cost by determining how its parameters \(\Delta\) and \(\gamma\) depend on the parameters of the Hamiltonian: the number of plaquettes \(N_{p}\), number of qubits per plaquette \(n_{q}\), and gauge coupling \(g\) (or lattice spacing \(a\)). We also study how the fidelity of the final state scales with the number of calls to \(\mathrm{e}^{-i\tau H}\), where the time evolution operator is implemented both exactly and using Trotter methods. The scaling results from our analysis are summarized in Table 2. A general discussion regarding the physical reasoning for the observed scaling of these parameters is given in Sec. V.
Preparing the ground state requires knowledge of \(aE_{0}^{\text{phys}},aE_{1}^{\text{phys}}\), and \(aE_{\text{max}}^{\text{phys}}\). In a realistic quantum simulation, one would likely be able to estimate \(aE_{0}^{\text{phys}}\) beforehand. Calculating \(aE_{1}^{\text{phys}}\) and \(aE_{\text{max}}^{\text{phys}}\) is generally more difficult; still, one can often use physical arguments to bound both \(a(E_{1}^{\text{phys}}-E_{0}^{\text{phys}})\) and \(E_{\text{max}}^{\text{phys}}\). For the purposes of the current study, we calculate \(aE_{0}^{\text{phys}}\) and \(aE_{1}^{\text{phys}}\) using exact diagonalization. Regarding \(aE_{\text{max}}^{\text{phys}}\), we provide arguments for placing upper bounds on its value, and compare our bounds to the exact result. One important consideration is that, because \(\Delta\) is a ratio of energies, the explicit dependence on the lattice spacing \(a\) cancels. The value of \(\Delta\) therefore only depends on the lattice spacing \(a\) through discretization effects.
We begin by placing an upper bound on \(aE_{\text{max}}^{\text{phys}}\). In the digitization scheme we use, one can write the U(1) Hamiltonian as \(H^{\text{phys},(b)}=\text{FT}^{\dagger}H_{E}^{(e)}\text{FT}+H_{B}^{(b)}\), where the superscript \(e\left(b\right)\) indicates that the matrix is represented in the electric (magnetic) basis, and it is implied that the Fourier transform is performed locally at each lattice site. In this section, it is understood that the symbols \(H_{E}\) and \(H_{B}\) denote the unscaled Hamiltonians. Using the fact that the Fourier transform is unitary and therefore does not change the eigenvalues of \(H_{E}^{(e)}\), we see
\[\begin{split}\text{max}(H^{\text{phys}})&\leq\text{ max}(\text{FT}^{\dagger}H_{E}^{(e)}\text{FT})+\text{max}(H_{B}^{(b)})\\ &=\text{max}(H_{E}^{(e)})+\text{max}(H_{B}^{(b)})\,.\end{split} \tag{45}\]
Because \(H_{E}^{(e)}\) and \(H_{B}^{(b)}\) are diagonal matrices, the maximum eigenvalue of each matrix is simply the largest entry on the diagonal. To proceed, we must look in more detail at the forms of the two Hamiltonians.
Because the magnetic Hamiltonian is a constant term \((N_{p}+1)/(ag^{2})\) minus a sum of \(N_{p}+1\) cosine terms, the maximum value any single diagonal entry can take is \(2(N_{p}+1)/(ag^{2})\). This upper bound, however, overestimates the actual value, especially at small values of \(g\). To understand this, recall that the magnetic operators are sampled from \(-b_{\text{max}}\) to \(b_{\text{max}}\), where \(b_{\text{max}}\sim g\). As \(g\to 0\), the compact magnetic Hamiltonian approaches the non-compact version, with each term of the form \(B^{2}/g^{2}\). If \(B\sim g\), then \(B^{2}/g^{2}\sim 1\) is roughly independent of \(g\), whereas our upper bound scales as \(1/g^{2}\). This issue can be avoided by taking advantage of the structure of the weaved magnetic Hamiltonian. It was shown in Ref. [89] that, after changing to the weaved basis, each cosine term in the magnetic Hamiltonian contains a sum of no more than \(\mathcal{O}(\log_{2}N_{p})\) magnetic field operators. Thus, the spectrum of each individual cosine term can
be found exactly using classical resources that scale only polynomially with \(N_{p}\). Once the maximum entry of each individual term is known, we can then place an upper bound on the maximum energy of \(H_{B}\) through
\[\begin{split}\max(H_{B}^{(b)})\leq\frac{1}{a\,g^{2}}\Bigg[&(N_{p}+1)-\sum_{j=0}^{N_{p}-1}\min\left(\cos\widetilde{B}_{j}\right)\\ &-\min\left(\cos\sum_{j=0}^{N_{p}-1}\widetilde{B}_{j}\right)\Bigg{]},\end{split} \tag{46}\]
where \(\widetilde{B}_{j}\) is the \(j^{\text{th}}\) magnetic operator in the weaved basis.
In a similar way, an upper bound on the maximum value of \(H_{E}\) can also be found. In both the original and weaved bases, we can write the electric Hamiltonian generally as \(g^{2}/(2a)\sum_{i,j=0}^{N_{p}-1}c_{ij}R_{i}R_{j}\) (note that many of the \(c_{ij}\)'s are zero). Because each \(R_{i}R_{j}\) term in this sum is a \(2^{2n_{q}}\times 2^{2n_{q}}\) diagonal matrix, we can explicitly evaluate the spectrum of each term classically at a cost quadratic in \(N_{p}\). The upper bound placed on the maximum energy of \(H_{E}\) is given by
\[\max(H_{E}^{(e)})\leq\frac{g^{2}}{2a}\sum_{i,j=0}^{N_{p}-1}\max(c_{ij}R_{i}R_{ j}). \tag{47}\]
Our final upper bound on the maximum energy of the full Hamiltonian is then found using Eq. (45). Using this upper bound, combined with the exact values for \(aE_{1}\) and \(aE_{0}\), we can place a lower bound on \(\Delta\). This lower bound is compared to the exact value in Fig. 8, as a function of \(N_{p}\), \(n_{q}\), and \(g\). We discuss each plot individually.
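Before turning to the individual plots, the following sketch illustrates the arithmetic of Eqs. (45)–(47) and the resulting lower bound on \(\Delta\). The toy diagonal arrays, the function names, and the omission of the \(\mathcal{O}(1)\) rescaling factor in the exact definition of \(\Delta\) are our own simplifications:

```python
import numpy as np

def hb_max_bound(cosine_diags, a, g):
    """Eq. (46): upper-bound max(H_B) from the classically computable diagonal of
    each individual cosine term (each couples only O(log2 N_p) plaquettes)."""
    return (len(cosine_diags) - sum(d.min() for d in cosine_diags)) / (a * g**2)

def he_max_bound(bilinear_diags, a, g):
    """Eq. (47): upper-bound max(H_E) by summing the maximum of each c_ij R_i R_j term."""
    return g**2 / (2 * a) * sum(d.max() for d in bilinear_diags)

def delta_lower_bound(E0, E1, Emax_bound):
    """Lower bound on Delta ~ (E_1 - E_0)/(E_max - E_0), with E_max replaced by the
    bound of Eq. (45); the exact definition carries an additional O(1) rescaling factor."""
    return (E1 - E0) / (Emax_bound - E0)

# Toy stand-ins for the term-by-term diagonals (here a = 1 and N_p + 1 = 4 cosines);
# in a real calculation these are built in the weaved basis described above.
rng = np.random.default_rng(0)
g = 1.4
cosines = [np.cos(rng.uniform(-np.pi, np.pi, 64)) for _ in range(4)]
bilinears = [rng.integers(-2, 3, 64) * rng.integers(-2, 3, 64) for _ in range(6)]
Emax_bound = hb_max_bound(cosines, 1.0, g) + he_max_bound(bilinears, 1.0, g)
print("Delta >=", delta_lower_bound(E0=0.5, E1=1.2, Emax_bound=Emax_bound))
```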
The top plot shows the exact value and lower bound of \(\Delta\) as a function of \(N_{p}\) for \(n_{q}=2\) and \(g=1.4\). Before discussing the results, we point out that \(N_{p}=3,5,7\) correspond to lattices with sites \(N_{x}\times N_{y}\) of \(2\times 2\), \(2\times 3\), and \(2\times 4\), respectively. Due to the inherent limitations of classical simulation, we only increase the number of sites in a single dimension, and so we expect the finite volume errors to remain roughly the same size for all values of \(N_{p}\). From the plot we see that the lower bound lies below the exact value, with the difference generally growing with \(N_{p}\). The overall scaling of \(\Delta\) is roughly as \(1/(N_{p}\log^{2}(N_{p}))\). To understand this behavior, we can study how the number of terms in the Hamiltonian grows with \(N_{p}\). For \(H_{B}\), the number of terms grows linearly with \(N_{p}\), implying that the maximum entry in \(H_{B}\) scales linearly with \(N_{p}\) as well. While the number of terms in \(H_{E}\) depends on the specific weaved matrix used, general statements can be made about the scaling. As demonstrated in Ref. [89], in order to break the exponential volume scaling in the gate count, a single rotor operator in the original basis is generally expressed as \(\mathcal{O}(\log N_{p})\) operators in the weaved basis. Because \(H_{E}\) is a sum of \(N_{p}\) terms that are squares of differences of rotors, this leads to the number of terms scaling as \(\mathcal{O}(N_{p}\log_{2}^{2}N_{p})\). For these reasons, we expect \(\Delta\) to scale roughly as \(1/(N_{p}\log_{2}^{2}N_{p})\). However, for smaller values of \(g\), we expect that \(H_{B}\) and \(H_{E}\) become of similar magnitude, and cancellations between \(H_{B}\) and \(H_{E}\) could lead to a milder scaling with \(N_{p}\).
The middle plot shows the exact value and lower bound of \(\Delta\) as a function of \(n_{q}\) for \(N_{p}=3\) and \(g=1.4\). These results show that the quality of the lower bound generally increases with \(n_{q}\). Furthermore, we see that the lower bound generally scales as \(\mathcal{O}(2^{-2n_{q}})\). To understand this scaling, we start with \(H_{B}\), which is a sum of the cosine of sums of magnetic operators. Even though the number of Pauli-Z operators in each magnetic operator grows exponentially with \(n_{q}\), the maximum value a given cosine can take is \(1\), regardless of \(n_{q}\). For \(H_{E}\) on the other hand, the rotor operators do not appear inside a cosine, and therefore the maximum eigenvalue grows with the number of terms. Additionally, the maximum eigenvalue of a single rotor operator scales as \(\mathcal{O}(2^{n_{q}})\), as seen from the relation \(r_{\max,p}=\pi 2^{n_{q}}/(2b_{\max,p})\). Because each term in \(H_{E}\) is bilinear, the maximum eigenvalue of each term grows as \(\mathcal{O}(2^{2n_{q}})\), leading to the observed scaling. For smaller values of \(g\), however, we expect that \(H_{B}\) and \(H_{E}\) become of similar magnitude, and the scaling with \(n_{q}\) will likely be more mild due to cancellations between \(H_{B}\) and \(H_{E}\).
Lastly, the bottom plot shows \(\Delta\) as a function of \(g\) (which is a function of the lattice spacing \(a\)) for \(N_{p}=3\) and \(n_{q}=3\). First, notice that the quality of the lower bound is higher for large \(g\). Second, except for the region of \(g\sim 1\), the value of \(\Delta\) is roughly independent of \(g\). The roughly constant behavior for small and large \(g\) can be understood by the weak and strong coupling limits of the Hamiltonian. For large \(g\), the electric Hamiltonian dominates. This, combined with the fact that \(b_{\max,p}\) approaches a constant for large \(g\) and \(H_{E}\sim g^{2}\), leads to the spectrum of \(H_{E}\) increasing as \(g^{2}\) for large \(g\). Because \(\Delta\) depends only on ratios of energy differences, the \(g\) dependence cancels, and we expect \(\Delta\) to approach a constant for large \(g\). Similarly, \(\Delta\) approaches a constant for small \(g\), which can be understood by recalling that as \(g\to 0\), the compact theory approaches the non-compact version. The non-compact theory is a free theory, i.e., a theory of non-interacting harmonic oscillators, with the gauge coupling \(g\) playing the role of the mass \(m\) of the canonical quantum harmonic oscillator. The spectrum of the compact U(1) Hamiltonian in the small \(g\) limit is therefore independent of \(g\), leading to the observed behavior. The large dip near \(g\sim 1\) is an artifact of using the weaved basis, and is not present in the original basis. It was found in Ref. [90] that controlling digitization errors near \(g=1\) required a tuning of the choices of \(b_{\max,p}\), which we did not perform here. It is therefore possible that after performing this tuning, the dip near \(g=1\) will disappear.
We conclude this discussion by pointing out that, even though one can argue how \(\Delta\) scales with various parameters, significant savings can still be achieved by either
improving the lower bound, or performing a dedicated calculation to determine the exact value. Our studies indicate that the cost reduction of such a study will be more significant as one increases \(N_{p}\) towards realistic values, and at small \(g\).
Next, we study how the parameter \(\gamma\) scales with \(N_{p},n_{q}\) and \(g\). As will be argued in Sec. V, using direct state preparation methods to implement the initial guess wavefunction \(|\psi_{\text{init}}\rangle\) results in an overlap \(\gamma\) that is exponentially suppressed in the number of sites. In this study, we instead consider implementing \(|\psi_{\text{init}}\rangle\) using adiabatic state preparation, with the objective of demonstrating that, even for a simple adiabatic state preparation procedure, \(\gamma\) can be made to decrease only polynomially with the number of sites. Note that because we use a simple adiabatic procedure, our results likely contain considerable Trotter and adiabatic violation errors. For this reason, the observed scaling of \(\gamma\) will not necessarily align with the expectation drawn from physical intuition. However, we stress once more that the purpose of this study is to demonstrate that \(\gamma\) can be made to decrease only polynomially with the number of sites. A dedicated study comparing more sophisticated adiabatic state preparation procedures is left for future work.
The general strategy is to start in the ground state of the large coupling limit of our theory. At large \(g\), the electric Hamiltonian dominates and the ground state approaches a state with constant entries; this state can be prepared by applying a Hadamard gate on each qubit. Once the large coupling ground state is prepared, the ground state of the target theory at some smaller coupling is prepared by adiabatically evolving between the two Hamiltonians.
We follow closely the procedure and notation given in Ref. [113]. The initial strong coupling Hamiltonian is denoted by \(H_{1}\) and the target Hamiltonian by \(H_{2}\), with gauge couplings \(g_{1}\) and \(g_{2}\), respectively. The adiabatic evolution is performed according to a time-dependent Hamiltonian \(H[u(t)]=(1-u(t))H_{1}+u(t)H_{2}\), where \(u(t)\in[0,1]\) is the ramping function satisfying \(u(0)=0\) and \(u(T)=1\). The parameter \(T\) is the total time the system is adiabatically evolved. The exact time-ordered time-evolution operator \(U(T)=\mathcal{T}\mathrm{e}^{-i\int_{0}^{T}H[u(t)]dt}\) is implemented using Trotter methods in two stages. First, the integral over \(t\) is split into \(M\) discrete steps of size \(\delta t=T/M\), such that
\[U(T)\approx\prod_{k=0}^{M-1}U_{1}(k,\delta t)U_{2}(k,\delta t)\,, \tag{48}\]
where \(U_{1}(k,\delta t)=\mathrm{e}^{-iH_{1}\delta t_{1}}\) and \(U_{2}(k,\delta t)=\mathrm{e}^{-iH_{2}\delta t_{2}}\). The values of \(\delta t_{1}\) and \(\delta t_{2}\) are given by
\[\delta t_{1} =\int_{k\delta t}^{(k+1)\delta t}dt(1-u(t))\,, \tag{49}\] \[\delta t_{2} =\int_{k\delta t}^{(k+1)\delta t}dt\,u(t)\,. \tag{50}\]
Second, for \(i=1,2\), each operator \(U_{i}(k,\delta t)\) is approximated using a single step of a first-order Trotter formula, written explicitly as \(U_{i}(k,\delta t)\approx\mathrm{e}^{-i\delta t_{i}H_{E}}\mathrm{e}^{-i\delta t _{i}H_{B}}\).
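The following dense-matrix sketch (our own classical emulation, suitable only for very small Hilbert spaces) spells out Eqs. (48)–(50) for the linear ramp \(u(t)=t/T\) used below; the toy two-level \(H_{E}\) and \(H_{B}\) matrices are placeholders for the plaquette Hamiltonians:

```python
import numpy as np
from scipy.linalg import expm

def adiabatic_evolve(HE1, HB1, HE2, HB2, psi0, T=1.0, M=2):
    """Trotterized adiabatic evolution from H1 = HE1 + HB1 to H2 = HE2 + HB2 with the
    linear ramp u(t) = t/T; each U_i uses one first-order Trotter step, Eqs. (48)-(50)."""
    dt = T / M
    psi = psi0.astype(complex)
    for k in range(M):
        dt2 = ((k + 1) ** 2 - k ** 2) * dt ** 2 / (2 * T)  # integral of u(t) over the step
        dt1 = dt - dt2                                     # integral of 1 - u(t) over the step
        for dti, HE, HB in ((dt1, HE1, HB1), (dt2, HE2, HB2)):
            psi = expm(-1j * dti * HE) @ (expm(-1j * dti * HB) @ psi)
    return psi

# Toy usage: the strong-coupling (g_1) ground state is evolved toward the target (g_2)
# Hamiltonian, and the overlap gamma with the exact target ground state is read off.
HE1, HB1 = np.diag([0.0, 4.0]), np.zeros((2, 2))
HE2, HB2 = np.diag([0.0, 1.0]), -0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
psi = adiabatic_evolve(HE1, HB1, HE2, HB2, np.array([1.0, 0.0]))
evals, evecs = np.linalg.eigh(HE2 + HB2)
print("gamma =", abs(evecs[:, 0].conj() @ psi))
```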
For our study, we choose the strong coupling Hamiltonian with \(g_{1}=10\) as \(H_{1}\). The ground state is prepared by applying a Hadamard gate on each qubit. We choose the simple linear ramp with \(u(t)=t/T\). Furthermore, we set \(T=1\) and perform the adiabatic evolution using \(M=2\) steps. While it is known that, in order for the adiabatic theorem to be satisfied, the parameter \(T\) needs to scale as one over the square of the smallest energy gap along the adiabatic trajectory of the unscaled/unshifted Hamiltonian, i.e., \(T=\mathcal{O}((aE_{1}-aE_{0})^{-2})\), see, e.g., Ref. [114], we find that our simple parameter choices lead to polynomial scaling with all parameters. To be conservative,
Figure 8: Comparison of the exact value of \(\Delta\) to the lower bound calculated using the procedure in Sec. IV.1. Top: \(\Delta\) as a function of \(N_{p}\) using \(n_{q}=2\) and \(g=1.4\). The gap scales asymptotically as \(1/(N_{p}\log^{2}(N_{p}))\). Middle: \(\Delta\) as a function of \(n_{q}\) using \(N_{p}=3\) and \(g=1.4\). The gap generally decreases as \(1/2^{2n_{q}}\). Bottom: \(\Delta\) as a function of \(g\) using \(N_{p}=3\) and \(n_{q}=3\). The gap is generally independent of \(g\), with a large dip near \(g\sim 1\), which is due to the gap \(a(E_{1}-E_{0})\) becoming small. This behavior near \(g=1\) is an artifact of using the weaved basis.
we choose a scaling that is equal to or worse than the observed scaling in our numerical results.
The top plot in Fig. 9 shows the value of \(\gamma\) as a function of \(N_{p}\) for multiple values of \(g_{2}\). For \(g_{2}=1.2\), \(\gamma\) is large and only has a mild dependence on \(N_{p}\). For \(g_{2}=0.2,0.7\) the value of \(\gamma\) decreases from \(N_{p}=3\) to \(N_{p}=5\), but is relatively constant from \(N_{p}=5\) to \(N_{p}=7\). We observe a scaling no worse than \(\gamma\sim\mathcal{O}(1/N_{p})\). The middle plot in Fig. 9 shows the value of \(\gamma\) as a function of \(n_{q}\) for multiple values of \(g_{2}\). For \(g_{2}=1.2\), we observe a mild dependence on \(n_{q}\). For \(g_{2}=0.2,0.7\) we observe a general \(1/n_{q}\) scaling. In this study we observe a scaling no worse than \(\gamma\sim\mathcal{O}(1/n_{q})\). Lastly, the bottom plot in Fig. 9 shows \(\gamma\) as a function of \(g_{2}\) for \(N_{p}=3\) and \(n_{q}=3\). For \(g_{2}\gtrsim 2\) we find \(\gamma\) to be close to \(1\). This is expected as the large coupling ground state with \(g_{1}=10\) and the ground state for \(g_{2}\gtrsim 2\) have reasonable overlap. As one decreases \(g_{2}\) further, we observe a steep decrease in \(\gamma\) near \(g_{2}=1\). The value of \(\gamma\) approaches a constant in the small \(g_{2}\) limit. This behavior is consistent with the decrease in overlap between the ground state of the strong coupling Hamiltonian \(H_{1}\) and the ground state of the target Hamiltonian \(H_{2}\).
For the purpose of quoting how \(\gamma\) scales with the parameters of our system, we interpret the numerical results in a conservative way; we choose a scaling that is equal to or worse than the observed scaling in our numerical results. Due to the simplicity of our adiabatic procedure combined with the small volumes that are accessible to classical simulation, predicting the scaling for realistic lattice sizes will require further dedicated studies.
We now study the fidelity of the prepared state as a function of the number of calls to the _exact_ time evolution operator \(\mathrm{e}^{-i\tau H}\). The spectral norm and energy gap were determined exactly by diagonalizing the Hamiltonian. The value of \(\Delta=(E_{1}-E_{0})/1.5\) was used, with \(\tau=1\). We denote the state prepared using QETU by \(\ket{\psi_{\mathrm{prepared}}}\), and the exact ground state by \(\ket{\psi_{0}}\). The error is defined to be \(1-|\langle\psi_{\mathrm{prepared}}|\psi_{0}\rangle|\). Figure 10 shows the error as a function of the degree of the polynomial \(d\) used to approximate the shifted sign function for \(N_{p}=3,5\), \(n_{q}=1\), and \(g=1.4\). The error decreases exponentially as we increase \(d\), with the rate of convergence being slower for \(N_{p}=5\) due to the smaller value of \(\Delta\).
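To make concrete what this study measures, the sketch below classically emulates the filtering step: it fits a Chebyshev series to a smoothed step in the variable \(x=\cos(E/2)\), applies \(F(\cos(H/2))\) to a uniform initial guess by exact diagonalization, and records \(1-|\langle\psi_{\mathrm{prepared}}|\psi_{0}\rangle|\) as the degree grows. The toy gapped spectrum, the erf-smoothed step, and its sharpness are our own choices, and on hardware QETU realizes the polynomial through repeated (controlled) calls to \(\mathrm{e}^{-i\tau H}\) rather than by diagonalization:

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.special import erf

rng = np.random.default_rng(1)

# Toy gapped "Hamiltonian" whose spectrum is assumed already rescaled into (0, pi).
dim = 16
spec = np.concatenate(([0.3], np.linspace(1.2, np.pi - 0.3, dim - 1)))
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
H = Q @ np.diag(spec) @ Q.T
evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 0]                          # exact ground state
psi_init = np.ones(dim) / np.sqrt(dim)      # crude initial guess with nonzero overlap

mu = 0.5 * (evals[0] + evals[1])            # step placed between E0 and E1
x_mu = np.cos(mu / 2)                       # step location in x = cos(E/2)
k = 20.0                                    # sharpness of the smoothed (erf) step

xs = np.linspace(0.0, 1.0, 2000)
target = 0.5 * (1 + erf(k * (xs - x_mu)))   # ~1 for E < E0, ~0 for E > E1

for d in (4, 8, 16, 32):
    F = C.Chebyshev.fit(xs, target, deg=d)
    # Apply F(cos(H/2)) to the initial guess via the eigendecomposition of H.
    filt = evecs @ np.diag(F(np.cos(evals / 2))) @ evecs.T
    psi = filt @ psi_init
    psi /= np.linalg.norm(psi)
    print(f"degree {d}: error = {1 - abs(psi0 @ psi):.2e}")
```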
We now move to the studies of _approximate_ implementations of the time-evolution operator with the aid of Trotter methods. The first study compares the overall Trotter error between the standard controlled version of QETU and the control-free version. As explained in Sec. II.1, the control-free version of QETU algorithm is designed to avoid using controlled calls to the time-evolution circuit. This method requires a Hamiltonian dependent procedure; the details for the U(1) case are given in App. B. By repeatedly calling \(\mathrm{e}^{-i\tau H}\), the standard version returns \(F(\cos(\tau H/2))\), while the control-free version of QETU returns \(F(\cos(\tau H))\). This implies that one can use instead \(\mathrm{e}^{-i\tau H/2}\) as a building block for control-free QETU. Because one only has to time evolve by half the total time \(\tau/2\), for the same number of Trotter steps, one can use a step size half as large, leading to a smaller overall error relative to the standard controlled version of QETU. Figure 11 shows the error as a function of \(d\) when using both the standard and control-free versions of QETU. The system parameters used are \(N_{p}=3,n_{q}=2\) and \(g=0.6\). We approximated
Figure 9: This figure shows how \(\gamma\) scales with the parameters of our system when the initial guess is prepared using the adiabatic state preparation procedure described in the main text. Top: \(\gamma\) as a function of \(N_{p}\) for \(n_{q}=2\). The different colored lines indicate different values of \(g_{2}\). The black dashed line shows a curve scaling as \(\sim 1/N_{p}\). Middle: \(\gamma\) as a function of \(n_{q}\) for \(N_{p}=3\). The different colored lines indicate different values of \(g_{2}\). The black dashed line shows a curve scaling as \(\sim 1/n_{q}\). Bottom: \(\gamma\) as a function of the gauge coupling \(g\) for \(N_{p}=3\) and \(n_{q}=3\). The value is near \(1\) for large \(g\), and approaches a constant value for small \(g\).
the time-evolution operator \(\mathrm{e}^{-i\tau H}\) with \(\tau=1.78\) using a single Trotter step of \(\delta\tau=\tau\) and \(\delta\tau=\tau/2\) for the standard and control-free versions of QETU, respectively. As \(d\) is increased, the error of both methods leveled out, with the error of the control-free version being an order of magnitude smaller due to the smaller time-step used. We see that, in addition to requiring fewer gates, the control-free version of QETU also results in a smaller Trotter error for the same number of calls to the time-evolution circuit. The remainder of the results in this section were obtained using the control-free version of QETU.
We now study how the maximum achievable precision depends on the Trotter step size \(\delta\tau\). Figure 12 shows the error as a function of calls to the time evolution operator \(\mathrm{e}^{-i\tau H}\) for \(\tau=1.5\), approximated using a first order Trotter formula with \(N_{\mathrm{steps}}=1,2,4\). The parameters of the system studied are \(N_{p}=3,n_{q}=2\) and \(g=0.6\). We see that the error decreases exponentially at first and then levels out for large number of calls. This occurs because the error is now dominated by the Trotter error, and improving the quality of the approximation of the projector is no longer beneficial. Even though we use the first order Trotter formula, the maximum precision achievable scales as \(\mathcal{O}(\delta\tau^{2})\). This is due to the fact that the leading order error term of the form \(\delta\tau[H_{E},H_{B}]\) is zero for this system.
The final study we perform regarding the Trotter error is how, for some fixed total precision, the Trotter step size \(\delta\tau\) must scale with the volume. Typically, as one increases the number of terms in a Hamiltonian, for fixed step size, the Trotter error increases due to the increased number of terms that do not commute. From this argument, as we increase \(N_{p}\) or \(n_{q}\), we expect that \(\delta\tau\) must be decreased accordingly if one wants to maintain a constant level of precision. However, scaling \(H_{E}\) and \(H_{B}\) by the parameter \(c_{1}\) is equivalent to scaling the Trotter step size by \(c_{1}\). Because \(c_{1}\) generally increases with \(N_{p}\) and \(n_{q}\), the effective Trotter step size decreases with \(N_{p}\) and \(n_{q}\). If the decrease in error from the smaller effective step size is more significant than the increase in error from the extra non-commuting terms, then the Trotter error will _decrease_ as we increase \(N_{p}\) or \(n_{q}\). We observe this to be
Figure 11: Error as a function of \(d\) using both the standard and control-free versions of QETU. For both cases, a single Trotter step was used to approximate \(\mathrm{e}^{-i\tau H}\) with \(\tau=1.78\). The step sizes used were \(\delta\tau=\tau\) and \(\delta\tau=\tau/2\) for the standard and control-free versions, respectively. The Trotter error in the control-free version is smaller than the standard version by an order of magnitude.
Figure 12: Error of the state prepared using the control-free version of QETU as a function of \(d\), where different colored and shaped data points correspond to different numbers of Trotter steps used to approximate the time-evolution circuit. The results are shown for \(N_{p}=3\), \(n_{q}=2\), \(g=0.6\), and \(\tau=1.5\). As the number of steps is increased, the error saturates at large \(d\) to smaller values due to the reduced Trotter errors.
Figure 10: Error of the ground state prepared using QETU as a function of the degree of the Chebyshev approximation \(d\), where the time evolution operator was implemented _exactly_. Different colored points correspond to different number of plaquettes \(N_{p}\). All results used \(n_{q}=1\) and \(g=1.4\). The error decreases exponentially with \(d\), with the rate being slower for \(N_{p}=5\) due to a smaller gap \(\Delta\).
the case, and show an example of this counter-intuitive behavior in Fig. 13. In this plot, we show the error as a function of \(d\) for three values of \(N_{p}=3,5,7\) using \(n_{q}=1\) and \(g=1.4\). For each value of \(N_{p}\), we use a single Trotter step with \(\delta\tau=1.5\). Notice that we can use a value of \(\delta\tau>1\) and still see convergence due to the fact that the effective Trotter step size is scaled by \(c_{1}\). Looking at Fig. 13, we see that as we increase \(N_{p}\), the maximum achievable precision increases. This behavior was also observed as \(n_{q}\) was increased. Because this behavior is expected to continue as one increases \(N_{p}\) and \(n_{q}\) towards realistic values, the Trotter error will eventually become negligible for any realistic precision requirements. Practically speaking, this implies that for a realistic calculation, one can approximate the time evolution operator using a single Trotter step of a first order Trotter formula with \(\delta\tau=\tau_{\text{max}}\), independent of \(N_{p}\) or \(n_{q}\). What at first seemed like a technical feature of QETU, turns out to offer a powerful protection against further \(N_{p}\) or \(n_{q}\) scaling. We conclude by stressing that, even though the effective step size is scaled by \(c_{1}\), the un-scaled \(\delta\tau\) must still satisfy \(\delta\tau\leq\tau_{\text{max}}\) in order to guarantee isolation of the ground state when applying the filter operator.
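The rescaling equivalence invoked in this discussion is easy to check numerically: multiplying both terms of the Hamiltonian by \(c_{1}\) produces exactly the same first-order Trotter error as shrinking the step size by \(c_{1}\). A minimal check with random Hermitian stand-ins for \(H_{E}\) and \(H_{B}\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)); A = (A + A.T) / 2   # stand-in for H_E
B = rng.normal(size=(4, 4)); B = (B + B.T) / 2   # stand-in for H_B

def trotter_error(HE, HB, dt):
    """Spectral-norm error of a single first-order Trotter step."""
    return np.linalg.norm(
        expm(-1j * dt * (HE + HB)) - expm(-1j * dt * HE) @ expm(-1j * dt * HB), 2)

c1, dt = 0.1, 1.5
print(trotter_error(c1 * A, c1 * B, dt))   # scaled Hamiltonian, step dt
print(trotter_error(A, B, c1 * dt))        # unscaled Hamiltonian, effective step c1*dt
```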
We conclude with the classical computational cost of calculating the angles \(\Phi\) needed in the QETU circuit. We found numerically that the cost scales quadratically with the number of angles. Because the number of angles is proportional to \(\Delta^{-1}\), the associated classical cost scales as \((\Delta^{-1})^{2}=\mathcal{O}(N_{p}^{2}2^{4n_{q}})\).
### Wavepacket construction
In this section, we use QETU to prepare a Gaussian state, defined in Eq. (26), centered at \(x_{0}=0\); the width \(\sigma_{x}\) is varied throughout the study. We first show an example of how naively applying QETU according to Eq. (9) leads to the error decreasing only polynomially with the number of Chebyshev polynomials. Next, we show how the modifications described in Sec. II.3 achieve an exponential scaling in the error for any desired value of the width, including a method that avoids the costly implementation of LCU to add the even and odd components of \(\mathsf{F}(x)\). From there, we compare the gate count cost of our method to that of exact state preparation methods, and find that our method requires fewer gates than exact state preparation methods for states represented by more than \(2-3\) qubits.
As explained in Sec. II.3, due to the presence of the \(\arccos(x)\) term, we expect the error of the approximation to decrease only polynomially with the number of Chebyshev polynomials. An example of the polynomial convergence is shown in Fig. 14, using parameters \(n_{q}=4\), \(\sigma_{x}/x_{\text{max}}=0.4\), \(\tau=1\) and \(\eta=0\). Looking at the data labeled Method I, we observe that the error converges quadratically as the number of Chebyshev polynomials is increased. We now discuss modifications that can improve the scaling.
The first modification we study is to determine the parameter \(\eta\) and the Chebyshev coefficients according to the optimization problem in Eq. (30), except with \(\tau\) fixed to \(\tau=1\). Looking at the data labeled Method II in Fig. 14, we see that while varying \(\eta\) improves the scaling to a point, the error eventually starts to decrease polynomially.
The final method we study determines the Chebyshev coefficients, together with the values of \(\eta\) and \(\tau\), by solving the optimiza
\begin{table}
\begin{tabular}{c c} Parameter & Scaling \\ \hline \hline \(\gamma\) & \(\mathcal{O}(N_{p}^{-1}n_{q}^{-1})\) \\ \(\Delta\) & \(\mathcal{O}(N_{p}^{-1}2^{-2n_{q}})\) \\ \(N_{\text{steps}}=\tau/\delta\tau\) & \(\mathcal{O}(1)\) \\ Gates(\(\mathrm{e}^{-i\delta\tau H}\)) & \(\mathcal{O}(N_{p}^{n_{q}})\) \\ \end{tabular}
\end{table}
Table 2: Scaling of parameters in the cost of state preparation using QETU in terms of \(N_{p}\), \(n_{q}\), and \(g\). The \(\gamma\) parameter defines the number of measurements needed to measure the ancillary qubit in the zero state, which scales as \(O(1/\gamma^{2})\). This scaling can be improved to \(O(1/\gamma)\) using amplitude amplification at the cost of increasing the circuit depth by a factor of \(\gamma^{-1}\). The \(\Delta\) parameter defines the query depth of the time evolution circuit \(\mathrm{e}^{-i\tau H}\), which scales as \(\mathcal{O}(\Delta^{-1})\). Note that while the gauge coupling \(g\) does not appear in the asymptotic scaling of the \(\gamma\) and \(\Delta\) parameters, their values still depend on \(g\). The \(\tau/\delta\tau\) parameter defines the number of Trotter steps used when approximating \(\mathrm{e}^{-i\tau H}\), which, surprisingly, does not scale with \(N_{p}\) or \(n_{q}\). The Gates(\(\mathrm{e}^{-i\delta\tau H}\)) parameter defines the number of gates required to implement a single Trotter step for the particular formulation of U(1) gauge theory we consider [89, 90]. This unusual scaling is due to the fact that the theory we consider is highly non-local.
Figure 13: Error of the state prepared with the control-free version of QETU as a function of \(d\), where different colored/shaped data points correspond to different values of \(N_{p}\). The results are shown for \(n_{q}=1\), \(g=1.4\), and \(\tau=1.5\). A single Trotter step was used to approximate the time-evolution circuit. As the number of plaquettes \(N_{p}\) increases, the Trotter error decreases. This behavior is due to the effective Trotter step size decreasing with \(N_{p}\).
tion problem in Eq. (31) where only the error at the sample points \(\tilde{x}_{j}\) is considered. During our numerical tests, we found that standard minimization techniques were highly sensitive to the initial values of \(\eta\) and \(\tau\). To avoid this problem, we determine the optimal values of \(\eta\) and \(\tau\) using a brute force approach. While brute force approaches are technically inefficient, for our two dimensional parameter space and cheap cost function, we find this method to be an appropriate choice. Note that negative values of \(\eta\) are allowed, and often result in the smallest error. Our numerical studies routinely found that using a value of \(\tau=4\) and varying \(\eta\) leads to the smallest errors. One possible explanation for this is that, for \(\tau=4\), the parameter \(\eta\) can be varied to change the shape of the function \(\mathsf{F}(\tilde{x}_{i})\) to be approximately linear and quadratic, for the odd and even components, respectively; functions resembling linear and quadratic dependence are approximated well using only a few Chebyshev polynomials. Figure 14 shows the error using this procedure as a function of the number of Chebyshev polynomials, labeled as Method III. The error is five orders of magnitude smaller than using the previous methods, and decreases exponentially. Note that this procedure of varying \(\eta\) and \(\tau\) when sampling the entire range of \(x\in[x_{\min},x_{\max}]\) has almost identical performance to only considering the values \(\tilde{x}_{j}\). However, we choose to follow the procedure in Eq. (31) where we only sample \(\tilde{x}_{j}\) because when the number of polynomials matches the number of sampled points, the error is zero.
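A sketch of this brute-force procedure is given below. Since Eq. (31) and the precise definition of \(\hat{x}_{\text{sh}}\) appear earlier in the paper, the sketch simply assumes the argument takes the form \(\cos(\tau(x+\eta)/2)\) with \(x_{\max}=1\); the grids, function name, and digitization convention are illustrative only:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def best_fit(n_q, sigma, deg, etas, taus):
    """Brute-force over (eta, tau): least-squares Chebyshev fit, at the digitized
    sample points only, of the target Gaussian against u_j = cos(tau (x_j + eta) / 2)."""
    x = np.linspace(-1.0, 1.0, 2 ** n_q, endpoint=False)   # sample points (x_max = 1)
    target = np.exp(-x ** 2 / (2 * sigma ** 2))
    best = (np.inf, None, None)
    for eta in etas:
        for tau in taus:
            u = np.cos(tau * (x + eta) / 2)
            coef = C.chebfit(u, target, deg)
            err = np.max(np.abs(C.chebval(u, coef) - target))
            if err < best[0]:
                best = (err, eta, tau)
    return best

err, eta, tau = best_fit(n_q=4, sigma=0.4, deg=4,
                         etas=np.linspace(-1.0, 1.0, 41), taus=np.linspace(1.0, 4.0, 31))
print(f"max error {err:.2e} at eta = {eta:.2f}, tau = {tau:.2f}")
```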
While we have demonstrated that QETU can be used to implement Gaussian states with a cost \(\mathcal{O}(n_{q}\log(1/\epsilon))\), the cost of performing LCU to add the even and odd pieces introduces a large overall coefficient. If one could modify the procedure to avoid using LCU, the gate count would be reduced by a factor of 10 or more. As discussed in Sec. II.3, because of our specific digitization of the \(\hat{x}\) operator, choosing a value of \(\tau=2\) results in the function \(\mathsf{F}(x)\) being purely even.
After setting \(\tau=2\), the parameter \(\eta\) and Chebyshev coefficients are determined by solving the optimization problem in Eq. (31) with \(\tau\) set to \(\tau=2\). Figure 15 shows the error as a function of the degree of the Chebyshev polynomial used. The error decreases exponentially at first, levels off, then reaches zero as the number of parameters equals the number of points. Importantly, the error levels off at a value of \(\sim 10^{-9}\), which will be completely negligible for a realistic simulation. While the error for the same number of Chebyshev polynomials is generally larger than if one implemented the even and odd pieces separately and then added them using LCU, the cost of performing LCU leads to over an order of magnitude larger gate count. For this reason, converting \(\mathsf{F}(x)\) into an even function to avoid the cost of LCU results in the best precision for a given gate cost. The rest of the results in this section were calculated using this method.
We now study how the precision varies with the number of qubits \(n_{q}\). Figure 16 shows the error of the prepared state as a function of \(n_{q}\) for different degree polynomial approximations, using \(\sigma_{x}/x_{\max}=0.2\). Our results indicate that the precision is independent of \(n_{q}\), except in the cases where the small number of sample points results in an exactly prepared Gaussian state. From this we learn that the number of Chebyshev polynomials required to achieve some desired precision is independent of \(n_{q}\). This result will be important when comparing the gate cost to exact state preparation methods.
The final component of our precision study is to understand how the precision scales as the width \(\sigma_{x}/x_{\max}\) of the wavepacket is varied. Figure 17 shows the error of the prepared state as a function of the number of Chebyshev polynomials used for different values of \(\sigma_{x}/x_{\max}\), using \(n_{q}=5\) qubits. We find that the error decreases exponentially for all values of the width, with more sharply peaked Gaussian states requiring more Chebyshev polynomials to achieve the same precision as more broadly peaked states. This intuitive result is similar to the fact that the cost of approximating the shifted error function increases as the energy gap \(\Delta\) is decreased.
We conclude this section with a comparison of the gate cost when preparing a Gaussian state using QETU to us
Figure 14: Error of the prepared Gaussian state using QETU as a function of the number of Chebyshev polynomials included in both the even and odd components. Different colored points correspond to different methods of construction. Method I determined the Chebyshev expansion by solving the optimization problem in Eq. (9), using \(\eta=0\) and \(\tau=1\). Method II shows results solving the optimization problem in Eq. (30) for \(\tau\) fixed to \(\tau=1\). Method III used values for \(\eta\), \(\tau\), and the Chebyshev coefficients by solving the optimization problem in Eq. (31), where one only samples the function at points \(\tilde{x}_{j}\). The black dashed line shows the curve \(\sim 1/n^{2}\). The results are shown for \(n_{q}=4\) and \(\sigma_{x}/x_{\max}=0.4\). The error using Method I decreases quadratically with the number of Chebyshev polynomials. Using Method II, the error appears to at first decrease exponentially, then decreases only polynomially. The error using Method III decreases exponentially, and reaches floating point precision with only five Chebyshev polynomials.
ing an exact state preparation procedure. For this comparison, we compile the resulting quantum circuits with QISKIT [115] into a universal gate set consisting of CNOT, \(R_{z}\), and \(R_{x}\) gates. Before discussing precise gate counts, we argue the expected scaling for both methods. Starting with QETU, one controlled call to the \(\mathrm{e}^{-i\tau\hat{x}_{\mathrm{sh}}}\) operator requires \(\mathcal{O}(n_{q})\) CNOT and \(R_{z}\) gates. The number of \(R_{x}\) gates is simply equal to the number of Chebyshev polynomials included in the approximation. Exact state preparation methods on the other hand require \(\mathcal{O}(2^{n_{q}})\) CNOT, \(R_{z}\) and \(R_{x}\) gates. Even though QETU has a better asymptotic scaling, because of the relatively large coefficient in the overall QETU cost, we expect that for small values of \(n_{q}\), exact state preparation will be less costly than using QETU. The question we now answer is at what value of \(n_{q}\) the cost of QETU becomes cheaper than that of exact state preparation methods.
For the gate count comparison, we compare the number of gates and neglect the scaling from the \(\gamma\) factor in the QETU cost. We do this because, on NISQ and early fault-tolerant devices, algorithms with shorter depths for a single run are preferred over those minimizing the total number of gates. Because the costs of \(R_{z}\) and \(R_{x}\) gates are similar, we compare the total number of rotation gates, as well as the number of CNOT gates. It is important to note that the gate count using QETU for a fixed number of calls to the \(\mathrm{e}^{-i\tau\hat{x}_{\mathrm{sh}}}\) operator is independent of the value of \(\sigma_{x}/x_{\mathrm{max}}\); what does depend on \(\sigma_{x}/x_{\mathrm{max}}\) is the precision one can achieve for that number of calls. The cost of exact state preparation methods is also independent of \(\sigma_{x}/x_{\mathrm{max}}\).
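As an indication of how the exact-preparation counts can be produced, the sketch below prepares a digitized Gaussian with Qiskit's `initialize` and transpiles it to a CNOT-plus-rotation basis. The use of \(R_{y}\) rather than \(R_{x}\), the transpiler settings, and the resulting counts are illustrative and version dependent; the QETU-side counts (dominated by the controlled calls to \(\mathrm{e}^{-i\tau\hat{x}_{\mathrm{sh}}}\)) are not reproduced here:

```python
import numpy as np
from qiskit import QuantumCircuit, transpile

def exact_prep_counts(n_q, sigma=0.2):
    """CNOT and single-qubit rotation counts for exact preparation of a digitized
    Gaussian state (x_max = 1), transpiled to a {CX, Rz, Ry} basis."""
    x = np.linspace(-1.0, 1.0, 2 ** n_q, endpoint=False)
    amps = np.exp(-x ** 2 / (4 * sigma ** 2))      # square root of the Gaussian profile
    amps = amps / np.linalg.norm(amps)
    qc = QuantumCircuit(n_q)
    qc.initialize(amps, range(n_q))
    tqc = transpile(qc, basis_gates=["cx", "rz", "ry"], optimization_level=1)
    ops = tqc.count_ops()
    return ops.get("cx", 0), ops.get("rz", 0) + ops.get("ry", 0)

for n in range(2, 7):
    cx, rot = exact_prep_counts(n)
    print(f"n_q = {n}: CNOTs = {cx}, rotations = {rot}")
```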
Figure 18 shows the gate counts required to prepare the Gaussian state, using both QETU and exact state preparation methods, as a function of \(n_{q}\). The top and bottom plots show the number of CNOT and rotation gates, respectively. For QETU, the gate count is shown for 2, 3, and 4 calls to the \(\mathrm{e}^{-i\tau\hat{x}_{\mathrm{sh}}}\) operator. As expected, the scaling for the number of CNOT and rotation gates is linear in \(n_{q}\) when using QETU, and exponential in \(n_{q}\) for exact preparation methods. Comparing first the CNOT count, looking at the top plot in Fig. 18, we notice that for \(n_{q}=5\), the CNOT count when using QETU starts to be cheaper than exact state preparation methods.
We now compare the number of rotation gates for both methods. Looking at the bottom plot in Fig. 18, we see that already for \(n_{q}=2,3\), QETU requires fewer rotation gates than exact state preparation methods. As already shown in Fig. 17, for certain values of \(\sigma_{x}/x_{\mathrm{max}}\), using only 2-3 Chebyshev polynomials can achieve a subpercent precision on the prepared state. Additionally, we showed in Fig. 16 that for fixed degree Chebyshev approximations, the precision is independent of the value of \(n_{q}\). Taken together, all of these results imply that, depending on the desired error tolerance and value of \(\sigma_{x}/x_{\mathrm{max}}\), it makes sense to consider using QETU for values of \(n_{q}\) as small as \(n_{q}=3\). Note that it is in principle possible to reduce the cost of exact state preparation methods by dropping gates with small rotation angles below
Figure 16: Error of the prepared Gaussian state using QETU as a function of the number of qubits \(n_{q}\) used to represent the state. The state was prepared by setting \(\tau=2\) and solving the optimization problem in Eq. (31). Different colored points show different degree Chebyshev expansions. The results for \(n_{q}=3\) using \(d=6,10\) are not shown because the error was zero. Except for small values of \(n_{q}\) where the state is prepared exactly, the error is independent of \(n_{q}\). Results are shown for \(\sigma_{x}/x_{\mathrm{max}}=0.2\).
Figure 15: Error of the prepared Gaussian state using QETU as a function of the number of Chebyshev polynomials included in the even and odd components. The blue points on this plot are the same blue points labeled as Method III in Fig. 14. The orange squares show the error if one sets \(\tau=2\) and solves the optimization problem in Eq. (31). As explained in the main text, setting \(\tau=2\) results in \(\mathsf{F}(x)\) being even and no error is incurred by dropping the odd piece. The error using both the even and odd components decreases exponentially. The error using only the even component matches that of using both pieces at the cost of only two more Chebyshev polynomials. This cost is more than compensated for by not having to perform LCU.
some threshold. The gate count savings and the error introduced will require a dedicated study, which will be interesting to perform in future work.
We conclude by discussing how the cost of our method compares to the cost of the Kitaev-Webb (KW) algorithm for preparing Gaussian states [116]. While the KW algorithm can implement Gaussian states with a cost polynomial in the number of qubits, the algorithm comes with a large overall prefactor due to the need to perform arithmetic on the quantum computer. A detailed study comparing the gate cost of using the KW algorithm to the gate cost of exact state preparation methods was performed in Ref. [117]. It was found that, due to a large prefactor, state preparation using the KW algorithm for a single Gaussian wavepacket was more expensive than exact state preparation methods up to \(n_{q}\sim 14\) qubits. This, combined with the fact that the cost of KW scales more quickly than linearly in \(n_{q}\), implies that using QETU to prepare a one dimensional Gaussian wavefunction will also be cheaper than using the KW algorithm for any value of \(n_{q}\).
In principle, it is also possible to use QETU to prepare multi-dimensional Gaussian states. Because of the probabilistic nature of using QETU, one will have to ensure that the \(\gamma\) factor does not decrease exponentially with the dimension of the Gaussian state being prepared. This would be nontrivial to achieve, given the non-unitary nature of the Gaussian filter transformation (26). It will be interesting to explore this application of QETU in a future work.
The final study we perform is on how \(\gamma\) depends on \(n_{q}\), the number of Chebyshev polynomials, and the width \(\sigma_{x}/x_{\text{max}}\). Recall that \(\gamma\) is defined to be the magnitude of the final state prepared using QETU. Because the initial state guess and implemented operator \(F(\cos(\tau\hat{x}_{\text{sh}}/2))\) are known exactly, we can directly evaluate \(\gamma\) in the limit where we reproduce \(\mathsf{F}(x)\) exactly. Doing so gives
\[\begin{split}\gamma&=\bra{\psi_{\text{init}}}\mathsf{F}(\cos(\tau\hat{x}_{\text{sh}}/2))^{2}\ket{\psi_{\text{init}}}\\ &=\frac{c^{2}}{2^{n_{q}}}\sum_{j=0}^{2^{n_{q}}-1}\mathrm{e}^{-x_{j}^{2}/\sigma_{x}^{2}}\,,\end{split} \tag{51}\]
where going to the second line we used
Figure 17: Error of the prepared Gaussian state using QETU as a function of the number of Chebyshev polynomials used to construct the Gaussian filter operator. The state was prepared by setting \(\tau=2\) and solving the optimization problem in Eq. (31). Different colored points show different values of the wavepacket width \(\sigma_{x}/x_{\text{max}}\). In all cases, the error decreases exponentially. More sharply peaked wavepackets require more Chebyshev polynomials to achieve the same level of precision. Results are shown for \(n_{q}=5\).
Figure 18: Top: Number of CNOT gates required to prepare a Gaussian state using both QETU and exact state preparation methods as a function of the number of qubits used to represent the state \(n_{q}\). The factor of \(\gamma\) is not included in the QETU count. The black squares with a solid line show the count for the exact state preparation method. The different colored circles with dashed lines show the cost of QETU for different numbers of calls to the \(\mathrm{e}^{-i\tau\hat{x}_{\text{sh}}}\) operator. Exact state preparation methods become more expensive around \(n_{q}=5\). Bottom: Same as top but for the number of rotation gates required. Exact state preparation becomes more expensive around \(n_{q}=3\).
\(\ket{\psi_{\text{init}}}=2^{-n_{q}/2}\sum_{j=0}^{2^{n_{q}}-1}|x_{j}\rangle\) and \({\sf F}(\cos(\tau\hat{x}_{\rm sh}/2))=c\,{\rm e}^{-\hat{x}^{2}/(2\sigma_{x}^{2})}\). If we replace \({\sf F}\) with the approximation \(F\) with error \(\epsilon\), the above expression for \(\gamma\) is expected to be correct up to \({\cal O}(\epsilon)\) corrections. In the limit of \(n_{q}\to\infty\), the value of \(\gamma\) is simply the area under the Gaussian curve. We therefore expect \(\gamma\) to approach a constant as \(n_{q}\) is increased. The value of this constant will depend on \(\sigma_{x}/x_{\rm max}\), and is given by \(c^{2}\int_{-x_{\rm max}}^{x_{\rm max}}dx\,{\rm e}^{-x^{2}/\sigma_{x}^{2}}\). From this we expect that \(\gamma\) decreases as the ratio \(\sigma_{x}/x_{\rm max}\) decreases.
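For concreteness, Eq. (51) can be evaluated directly; in the sketch below we take the filter normalization \(c=1\), \(x_{\max}=1\), and a simple uniform digitization, all of which are our own illustrative choices rather than the conventions fixed earlier in the paper:

```python
import numpy as np

def gamma_inverse(n_q, sigma):
    """Evaluate Eq. (51) with c = 1 and x_max = 1 on a uniform grid of 2^{n_q} points."""
    x = np.linspace(-1.0, 1.0, 2 ** n_q, endpoint=False)
    return 1.0 / np.mean(np.exp(-x ** 2 / sigma ** 2))

for sigma in (0.1, 0.2, 0.4):
    digitized = [round(gamma_inverse(nq, sigma), 1) for nq in (3, 5, 8)]
    continuum = 2.0 / (np.sqrt(np.pi) * sigma)   # large-n_q limit of 1/gamma (sigma << x_max)
    print(f"sigma/x_max = {sigma}: 1/gamma = {digitized}, continuum ~ {continuum:.1f}")
```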
Using this expression for \(\gamma\), we can identify a scenario where \(\gamma\) can be prohibitively small. Suppose that one uses a small value of \(n_{q}\) to implement a sharply peaked Gaussian state. It is possible that one only samples at values of \(x\) where the function \({\rm e}^{-x^{2}/(2\sigma_{x}^{2})}\) is exponentially suppressed, resulting in an exponentially small value of \(\gamma\). However, because sampling a sharply peaked function a small number of times will introduce large digitization errors, this situation is not likely to occur in practice; in a realistic scenario, one will sample the wavepacket using a large enough value of \(n_{q}\) to avoid a small value of \(\gamma\).
Figure 19 shows the dependence of \(\gamma^{-1}\) on \(n_{q}\) and \(\sigma_{x}/x_{\rm max}\). All results were calculated using a degree 18 even Chebyshev approximation. For the data shown, the largest error from the finite Chebyshev approximation of the state produced using QETU was \(\sim 10^{-9}\). The top plot in Fig. 19 shows \(\gamma^{-1}\) as a function of \(n_{q}\) for different values of \(\sigma_{x}/x_{\rm max}\). Using \(n_{q}=2\) for \(\sigma_{x}/x_{\rm max}=0.1\) results in \(\gamma^{-1}\sim 10^{5}\) because the wavepacket is only sampled at values where the value is exponentially suppressed. This problem can be avoided by increasing \(n_{q}\), and is less severe for larger values of \(\sigma_{x}/x_{\rm max}\). We notice that by \(n_{q}=5\) the value of \(\gamma^{-1}\) levels out. The bottom plot in Fig. 19 shows \(\gamma^{-1}\) as a function of \(\sigma_{x}/x_{\rm max}\) for \(n_{q}=5\). We see that \(\gamma\) is directly proportional to \(\sigma_{x}/x_{\rm max}\), with more sharply peaked states having smaller values of \(\gamma\). For a sharply peaked value of \(\sigma_{x}/x_{\rm max}=0.1\), we find \(\gamma^{-1}\sim 13\).
The main value of our detailed numerical investigations in this section is the demonstrated ability to prepare wavepackets with exponential precision, for arbitrary values of the distribution parameters. This required several optimization procedures and could not have been trivially predicted based on the construction proposed in Sec. II.3. Our result makes an important point: despite the presence of \(\arccos(x)\) in the QETU approximation, QETU can be used to represent functions of Hermitian operators with exponential precision. (In Ref. [88] this issue was avoided by constructing a polynomial approximation to the error function which, in turn, was approximating the cosine-transformed original step function.)
## V Ground state preparation cost for a general gauge theory
In this section, we discuss how the cost of ground state preparation using QETU is expected to scale with the parameters of a general lattice pure gauge theory in \(d\) spatial dimensions. Understanding the cost in terms of a total gate count requires detailed information regarding the specific formulation of the lattice gauge theory in question, as well as the algorithm used to implement \({\rm e}^{-i\tau H}\). To keep the discussion in this section as general as possible, we consider the cost in terms of the number of calls to the time evolution circuit. For ease of notation, throughout this section, we denote energies of the unscaled Hamiltonian of the general lattice gauge theory by \(E_{i}\).
We consider a pure gauge theory on a hypercubic spatial lattice with \(N_{s}\) sites in each dimension, and a lattice spacing \(a\) between neighboring sites. The \(d\)-dimensional volume is given by \(V=(aN_{s})^{d}\). To truncate the infinite dimensional Hilbert space of this bosonic theory, we represent each lattice site using \(n_{q}\) qubits. We consider a general gauge theory sharing the same qualitative
Figure 19: Dependence of \(\gamma\) on various parameters when constructing the Gaussian state by setting \(\tau=2\) and solving the optimization problem in Eq. (31). Top: \(\gamma^{-1}\) as a function of \(n_{q}\). Different colored points indicate different values of \(\sigma_{x}/x_{\rm max}\). The value of \(\gamma^{-1}\) is large for \(n_{q}=2\) and \(\sigma_{x}/x_{\rm max}=0.1\) due to sampling the sharply peaked Gaussian only at points where the value is exponentially suppressed. Bottom: \(\gamma^{-1}\) as a function of \(\sigma_{x}/x_{\rm max}\) for \(n_{q}=5\). The value of \(\gamma\) is proportional to \(\sigma_{x}/x_{\rm max}\).
features as QCD. First, the theory is assumed to have a mass gap between the vacuum and the first excited state. Second, we assume that the theory is asymptotically free, i.e., that as the lattice spacing \(a\) goes to zero, the bare coupling \(g(a)\) also goes to zero. The true, continuous, infinite volume theory is recovered by simultaneously taking \((aN_{s})^{d}\to\infty\), \(n_{q}\to\infty\), and \(a\to 0\) (while appropriately adjusting \(g\) according to the chosen renormalization scheme). We work in units of \(\hbar=c=1\).
One important consideration is that, in a lattice gauge theory, one does not directly choose the lattice spacing \(a\). One can only choose the value of the bare gauge coupling \(g\). The two parameters are related via the renormalization group; one should view the gauge coupling as a function of the lattice spacing, i.e., \(g=g(a)\). What this means in practice is that one can only calculate dimensionless quantities in a lattice gauge theory. For example, if one considers an energy \(E\), one can only calculate the dimensionless value \(aE\). Once the lattice spacing \(a\) has been determined using some renormalization scheme, the dimensionful energy \(E\) can be extracted. However, this consideration can be sidestepped in our discussion by using the fact that the cost of QETU depends only on ratios of energies. The explicit lattice spacing dependence cancels, and we can directly consider the energy differences \(E_{1}-E_{0}\) and \(E_{\rm max}-E_{0}\) in physical units.
While the explicit dependence on the lattice spacing cancels, the energies still have implicit dependence on \(a\). The energies also have an implicit dependence on the physical volume \(V\) and the number of qubits per lattice site \(n_{q}\). To better understand this dependence, it is useful to recall that the lattice acts as both a high and low energy regulator of our quantum field theory. Using a finite lattice spacing introduces an energy cutoff \(\sim 1/a\), while using a finite volume provides a low energy cutoff of \(\sim 1/V\). In addition, using a finite value for \(n_{q}\) provides another high energy cutoff, denoted as \(\Lambda_{n_{q}}\sim 2^{n_{q}}\). Assuming one has properly renormalized the theory, as these regulators are removed, i.e., sending \(a\to 0\), \(V\to\infty\) and \(\Lambda_{n_{q}}\to\infty\), one should obtain the correct physical energies. We proceed with the discussion assuming access to (early) fault-tolerant quantum computers, where finite volume errors \(\varepsilon_{V}\), finite lattice spacing errors \(\varepsilon_{a}\), and finite \(n_{q}\) errors \(\varepsilon_{n_{q}}\) are controlled.
We now discuss the scaling of \(E_{1}-E_{0}\) and \(E_{\rm max}-E_{0}\) with \(N_{s},a\) and \(n_{q}\). The energy difference \(E_{1}-E_{0}\) in a continuous theory with a mass gap has a finite value. Therefore, as one removes the regulators, the gap \(E_{1}-E_{0}\) is expected to approach a constant. However, \(E_{\rm max}-E_{0}\) diverges in this limit. We now consider how quickly this term grows as we remove each of the regulators. Starting with the volume, it is helpful to think from the perspective of fixed \(a\) and \(n_{q}\). Increasing the volume is achieved by increasing the number of sites \(N_{s}\). In the simplest possible case of a free theory, each additional site increases the maximum possible energy by the maximum energy of a single site. For a local gauge theory, the maximum energy is expected to increase in a similar way when using more lattice sites.2 The maximum energy of our interacting theory is therefore expected to grow linearly with the total number of sites \(N_{s}^{d}\). For fixed \(a\), this is equivalent to growing linearly with the volume \(V\). Turning now to the lattice spacing, recall that a finite value of \(a\) imposes a high energy cutoff given by \(\sim 1/a\). We therefore expect the maximum energy to scale in the same way as \(1/a\). Lastly, it is known that the maximum energy of a bosonic theory generally increases exponentially with \(n_{q}\)[109; 118].
Footnote 2: Note that, because gauge fixing generally leads to some nonlocality in the resulting Hamiltonian, this argument does not necessarily apply to such formulations. If one does use a nonlocal Hamiltonian, understanding how \(E_{\rm max}-E_{0}\) scales with the number of sites will likely require a dedicated study.
In terms of these energy gaps, we know \(\Delta\sim(E_{1}-E_{0})/(E_{\rm max}-E_{0})\). Combining our scaling arguments, we find
\[\Delta^{-1}={\cal O}\left(\frac{E_{\rm max}-E_{0}}{E_{1}-E_{0}}\right)={\cal O }\left(a^{-1}N_{s}^{d}2^{n_{q}}\right). \tag{52}\]
Numerical studies of this scaling were performed for a compact U(1) lattice gauge theory in 2 spatial dimensions in Sec. IV.1.
We now turn our attention to how the overlap \(\gamma\) between the initial guess and exact ground state scales. For this discussion, we assume that the vacuum is translationally invariant, which is true for theories in the Standard Model. There are many ways to prepare this initial guess, including adiabatic state preparation [119; 120; 121], variational methods [122; 123; 124; 125; 71; 126], and direct preparation [127; 128; 129; 130; 131; 132; 133; 134; 135; 136]. We now argue that preparing such an initial guess wavefunction using direct state preparation will lead to exponential scaling in the cost of preparing the ground state with QETU.
Consider a situation where one prepares regions of the lattice using exact state preparation, where for simplicity we assume that each region contains \(N_{r}\) sites in each dimension. The number of these regions is given by \((N_{s}/N_{r})^{d}\). The gate cost of preparing the state of a single region, each with overlap \(\gamma_{i}\), scales as \(2^{(N_{r})^{d}}\), with the total overlap given by \(\gamma\sim(\gamma_{i})^{(N_{s}/N_{r})^{d}}\). Any choice of \(N_{r}\) that breaks the exponential scaling in one of these costs necessarily introduces an exponential scaling in the other. Even though each overlap \(\gamma_{i}\) can in principle be improved using amplitude amplification from \(\gamma_{i}\) to \(\sqrt{\gamma_{i}}\)[137; 92], the overall \(\gamma\) parameter still decreases exponentially as one increases the number of lattice sites. Creating a trial state using direct state preparation methods is therefore inefficient. As a proof of principle that this exponential volume scaling can be overcome, we studied how \(\gamma\) scales with the volume if one uses a simple adiabatic state preparation procedure to prepare the initial guess wavefunction \(|\psi_{\rm init}\rangle\) for the U(1) formulation we considered. We found that doing so results in \(\gamma\sim{\cal O}((N_{s}^{d})^{-1})\).
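The tension described above can be made explicit with a few lines of arithmetic; the values of \(\gamma_{i}\), \(d\), and \(N_{s}\) below are placeholders chosen only to exhibit the scaling:

```python
def direct_prep_tradeoff(N_s, N_r, d=3, gamma_i=0.9):
    """Exact preparation of (N_s/N_r)^d regions of N_r^d sites each: per-region gate
    cost ~ 2^(N_r^d), total overlap ~ gamma_i^(number of regions)."""
    n_regions = (N_s / N_r) ** d
    gate_cost = n_regions * 2.0 ** (N_r ** d)
    total_overlap = gamma_i ** n_regions
    return gate_cost, total_overlap

# Any N_r that tames the exponential gate cost blows up the overlap cost, and vice versa.
for N_r in (1, 2, 4, 8):
    gates, overlap = direct_prep_tradeoff(N_s=16, N_r=N_r)
    print(f"N_r = {N_r}: gates ~ {gates:.2e}, overlap ~ {overlap:.2e}")
```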
We conclude by noting that it is possible that more sophisticated adiabatic preparation procedures, or the use of variational state preparation, could result in a better scaling of \(\gamma^{-1}\) with the volume. It will be interesting to explore this in future work.
Another component of the cost is how the number of Trotter steps used to approximate the time evolution circuit scales with the system size. As one increases \(N_{s}\) or \(n_{q}\), the number of non-commuting terms in the Hamiltonian increases, and we therefore expect the number of steps required to maintain a constant precision to increase accordingly. However, as discussed in Sec. IV.1, scaling the Hamiltonian such that the spectrum is in the range \([0,\pi]\) can be equivalently viewed as scaling the Trotter step size. Because the maximum energy generally grows with \(N_{s}\) and \(n_{q}\), the effective Trotter step size will decrease with \(N_{s}\) and \(n_{q}\). If the error reduction from this smaller Trotter step size outweighs the increase from the additional non-commuting terms, the overall Trotter error would actually _decrease_ with \(N_{s}\) and \(n_{q}\). As shown in Sec. IV.1, this was indeed found to be the case for the U(1) gauge theory we considered. Due to the similar Hamiltonian structure of general lattice gauge theories, it is possible that this trend will also be present for other lattice gauge theories.
Lastly, we point out an observation that may help dampen the scaling of \(\Delta^{-1}\) as one increases \(N_{s}\) and \(n_{q}\). The main idea is that an initial guess with good overlap with the ground state will generally have smaller overlap with excited states, with the overlap continuing to decrease for higher excited states. As a simple example, consider the quantum harmonic oscillator, with operators sampled using \(n_{q}\) qubits. One possible initial guess is a constant wavefunction for all \(x\). Because the \(n^{\text{th}}\) excited state has \(n\) nodes, the overlap of this initial guess and higher excited states will decrease with \(n\), due to the fact that one sums more highly oscillatory functions. Let us denote the overlap of the highest energy state and the initial guess as \(\gamma_{n_{q},\text{min}}\), with associated energy \(E_{2^{n_{q}}-1}\). Now suppose that one increases the number of qubits to \(m_{q}>n_{q}\). This increases the size of the Hilbert space, but the key point is that only states with higher energies are added. Because these higher energy states also have more oscillations, the overlaps the initial guess has with these additional states are all smaller than \(\gamma_{n_{q},\text{min}}\). Depending on the error threshold, it is conceivable that, for a small enough value of \(\gamma_{n_{q},\text{min}}\), one can simply ignore the states with energy larger than \(E_{2^{n_{q}}-1}\). With regards to using QETU, this implies we only have to divide our energy gap by \(E_{2^{n_{q}}-1}-E_{0}\) instead of \(E_{2^{m_{q}}-1}-E_{0}\). Because this argument is true for any \(m_{q}>n_{q}\), the amount one must scale down the physical energy gap eventually becomes independent of \(n_{q}\).
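The harmonic-oscillator argument is easy to check numerically. The sketch below is our own check, using a simple finite-difference discretization rather than the qubit-sampled operators of the text; it diagonalizes a discretized oscillator Hamiltonian and prints the overlap of a constant initial guess with each eigenstate, which falls off rapidly with the excitation number (odd states vanish by parity).

```
import numpy as np

# Discretize H = p^2/2 + x^2/2 on a grid and check that a flat initial guess
# has rapidly decreasing overlap with higher excited states.
N = 256
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

# Kinetic term via second-order finite differences, potential term on the diagonal.
kinetic = (-np.eye(N, k=1) - np.eye(N, k=-1) + 2 * np.eye(N)) / (2 * dx**2)
H = kinetic + np.diag(0.5 * x**2)

energies, states = np.linalg.eigh(H)   # columns are normalized eigenvectors
flat = np.ones(N) / np.sqrt(N)         # constant initial guess

for n in range(0, 10, 2):              # odd states have zero overlap by parity
    overlap = abs(states[:, n] @ flat)
    print(f"n = {n}:  |<n|flat>| = {overlap:.4f}")
```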
It is possible that these arguments can be applied in a similar way to lattice gauge theories in order to argue that the scaling with \(N_{s}\) and \(n_{q}\) could be milder than estimated above. In the best-case scenario, where the dependence on \(N_{s}\) and \(n_{q}\) vanishes for some values of the parameters, \(\Delta\) would depend only on the physical energies \(E_{0}\), \(E_{1}\), and some \(E_{n^{*}}\), where \(E_{n^{*}}\) is the highest-energy state that must be filtered out using QETU. In that case, \(\Delta\) would depend on the lattice spacing only through finite-lattice-spacing errors, and would otherwise become independent of \(N_{s}\), \(n_{q}\), and \(a\). The cost would then be some overall large pre-factor, set by the value of \(\Delta\) at which it becomes independent of \(N_{s}\), \(n_{q}\), and \(a\), multiplied by the cost of implementing \(\mathrm{e}^{-i\tau H}\) using Trotter methods. Our preliminary numerical investigations showed that the validity of such a hypothesis is heavily dependent on both the initial guess state and the value of the coupling constant. We leave further investigations for future work.
Importantly, the above-described method of dampening the scaling with \(N_{s}\) or \(n_{q}\) is specific to the QETU approach, and would not be possible if one instead used the Hamiltonian input model, as in Ref. [95]. In the Hamiltonian input model, one performs repeated calls to a block encoding of \(H\) and uses the Quantum Eigenvalue Transformation to implement the projector. Already at the stage of constructing the block encoding circuit for \(H\), this method requires the Hamiltonian to be scaled so that \(||H||\leq 1\). Scaling down the Hamiltonian, and therefore the gap, by a factor of \(E_{\text{max}}-E_{0}\) is thus unavoidable in this scenario. On the contrary, in the QETU case, where one performs repeated calls to the unitary operator \(\mathrm{e}^{-i\tau H}\), there are no fundamental obstacles for constructing the time-evolution circuit for some \(H\) with a large spectral norm. The spectrum of \(H\) is shifted solely to avoid the problems associated with the periodic nature of the matrix function which QETU implements. Consequently, this technical difference between the Hamiltonian and time-evolution input models could result in the cost of QETU having better asymptotic scaling with \(N_{s}\) or \(n_{q}\).
## VI Discussion and Conclusion
In this work, we performed an extensive study of the QETU algorithm and its applications to state preparation in simulations of quantum field theory. By modifying the original algorithm in Ref. [88], we were able to achieve significant cost savings when the time evolution circuit is implemented both exactly as well as approximately using Trotter methods.
We applied our improved procedure to prepare a ground state in a particular lattice formulation of U(1) gauge theory in 2 spatial dimensions. To avoid the costly controlled calls to the time evolution circuit, we based our circuits on the control-free version of the QETU algorithm. The considered control-free implementation of QETU generalizes to any Hamiltonian of the form \(H=H_{x}+H_{p}\), where the bases of the kinetic piece \(H_{p}\) and the potential piece \(H_{x}\) are related by a Fourier transformation; a form that is common to many types of lattice field theories and their different formulations [42; 45; 109; 112; 118; 138; 139].
We studied how the cost of the QETU-based state preparation algorithm scales with parameters of our physical system. In particular, we discussed how scaling down the spectrum of the physical Hamiltonian affects the scaling of the energy gap \(\Delta\), and placed upper bounds on this parameter for arbitrary system sizes. Next, we discovered that scaling down the Hamiltonian spectrum leads to unexpected positive consequences; namely, it makes the Trotter error decrease as the number of sites \(N_{s}\) or the number of qubits per site \(n_{q}\) is increased. This behavior is due to the fact that scaling down the Hamiltonian spectrum is equivalent to scaling down the Trotter step size. Additionally, using a simple adiabatic state preparation procedure, we demonstrated that one can achieve a value of \(\gamma^{-1}\) that scales only polynomially with the system size. These studies lay the basis for further studies on applications of QETU-based state preparation techniques for alternative formulations of gauge theories.
We followed our numerical results with a discussion of asymptotic costs of using QETU for state preparation in general lattice field theories with properties similar to QCD. Assuming that the scaling of \(\Delta\) and \(\gamma\) are similar to the scaling we found for the test theory, shown in Tab. 2, QETU can be used to prepare the ground state of a general lattice gauge theory with a volume scaling 2 powers higher than the cost of implementing a single Trotter step. If one considers a local theory, this leads to the cost scaling as the cube of the number of sites. We then argued that the scaling of \(\Delta\) with the number of sites or qubits per site can be reduced by exploiting the fact that, for a good initial guess, highly excited states will have a negligible overlap, and do not need to be filtered out.
In this work, we also developed a novel application of the QETU algorithm for the preparation of Gaussian states. The main idea is to use QETU to implement the Gaussian filter operator \(\mathrm{e}^{-\hat{x}^{2}/(2\sigma_{x}^{2})}\). We argued that a naive application of QETU to this problem results in the error decreasing only polynomially with the degree of the Chebyshev expansion. We showed, however, that one could instead achieve an exponential scaling, for any value of the width of the Gaussian state, by performing simple modifications to the QETU procedure. With these improvements, we presented a procedure that allows one to prepare Gaussian states while avoiding the costly step of adding the even and odd pieces using LCU. By performing a gate cost analysis, we showed that preparing Gaussian states with QETU using our improved procedure outperforms exact state preparation methods for as few as \(n_{q}>2-3\) qubits.
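The action of the Gaussian filter itself can be previewed classically. In the sketch below (our own illustration with arbitrary grid parameters, not the QETU circuit itself), the operator \(\mathrm{e}^{-\hat{x}^{2}/(2\sigma_{x}^{2})}\) is applied to a uniform wavefunction on an \(n_{q}\)-qubit grid; after renormalizing the non-unitary filtered state, the result is the desired Gaussian state.

```
import numpy as np

# Classical preview of the Gaussian filter e^{-x^2/(2 sigma_x^2)} acting on a
# uniform state; the grid range and qubit count are arbitrary illustrative choices.
n_q = 5
N = 2 ** n_q
x = np.linspace(-4, 4, N)
sigma_x = 1.0

psi = np.ones(N) / np.sqrt(N)                       # easy-to-prepare uniform state
raw = np.exp(-x**2 / (2 * sigma_x**2)) * psi        # diagonal filter in the x basis
print("norm of filtered (unnormalized) state:", np.linalg.norm(raw))

filtered = raw / np.linalg.norm(raw)                # renormalize the non-unitary output
target = np.exp(-x**2 / (2 * sigma_x**2))
target /= np.linalg.norm(target)
print("overlap with target Gaussian:", abs(filtered @ target))  # 1.0 by construction
```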
This work leads naturally to a number of additional interesting applications. While we used QETU to prepare the vacuum state of a pure gauge theory, it is in principle possible to extend QETU to prepare hadronic states in QCD, another important step towards simulating QCD. For concreteness, consider the task of preparing the quantum state of the pion. While the pion is not the ground state of the QCD Hamiltonian, it is, however, the lowest energy state with the quantum numbers of the pion. Additionally, the gap between the pion state and the next excited state with the desired quantum numbers is at least twice the pion mass. If one prepares an initial state \(|\psi_{\mathrm{init}}\rangle\) with the quantum numbers of the pion, e.g., using the high quality interpolating fields that have been developed for use in lattice QCD (see, _e.g._, Ref. [140] for a pedagogical introduction) then one can construct a filter operator using the QCD Hamiltonian and isolate the pion state. Because these interpolating fields are not unitary but Hermitian operators, a dedicated study to how best implement these operators will be required. It is possible, in principle, to use QETU to construct interpolating fields. An interesting question to ask is if one needs to apply the interpolating field to the interacting vacuum prepared using, e.g., QETU, or if one can simply apply the interpolating field to any state with the quantum numbers of the vacuum. If one could avoid preparing the interacting ground state while avoiding \(\gamma\) scaling exponentially poorly with the number of sites, this would result in a significant cost reduction.
Another interesting followup is exploring the use of QETU for preparing multi-dimensional Gaussian states, which are relevant for lattice gauge theories as well. The general form of such a state is \(\mathrm{e}^{-\sum_{ij}c_{ij}x_{i}x_{j}}\). A simple procedure for constructing this state using QETU is to set \(\mathsf{f}(x)=\mathrm{e}^{-x}\) and use \(\mathrm{e}^{i\sum_{ij}c_{ij}\,\hat{x}_{i}\hat{x}_{j}}\) as a building block in the QETU circuit. One important consideration using this method, or any variation, is to ensure that the \(\gamma\) parameter does not decrease exponentially as one increases the dimension of the Gaussian. If this can be avoided, it will be interesting to compare the cost of QETU to the Kitaev-Webb algorithm for preparation of such states.
Our studies provide the foundation for further investigations on the applicability of QETU in the context of highly efficient preparation of various functions of Hermitian operators. In particular, it would be interesting to investigate whether the QETU algorithm can be utilized for implementing block encodings, and even QSP-based time evolution operators in field theories whose Hamiltonian have the form \(H=H_{x}+H_{p}\).
## Acknowledgements
CFK would like to thank Stefan Meinel for access to their computing cluster for some of the calculations. CFK would also like to thank Dorota Grabowska for useful discussions regarding adiabatic state preparation for the U(1) formulation considered in the paper. We thank Neel Modi and Siddharth Hariprakash for access to their code and Christian Bauer for useful discussions. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Department of Energy Computational Science Graduate Fellowship under Award Number DE-SC0020347. MK acknowledges support from the DOE grant PH-HEP24-QuantISED, B&R KA2401032/34. This work was supported by DOE Office of Advanced Scientific Computing Research (ASCR) through the ARQC program (NG). Support is also acknowledged from the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator.
|
2303.14100 | Errors are Useful Prompts: Instruction Guided Task Programming with
Verifier-Assisted Iterative Prompting | Generating low-level robot task plans from high-level natural language
instructions remains a challenging problem. Although large language models have
shown promising results in generating plans, the accuracy of the output remains
unverified. Furthermore, the lack of domain-specific language data poses a
limitation on the applicability of these models. In this paper, we propose
CLAIRIFY, a novel approach that combines automatic iterative prompting with
program verification to ensure programs written in data-scarce domain-specific
language are syntactically valid and incorporate environment constraints. Our
approach provides effective guidance to the language model on generating
structured-like task plans by incorporating any errors as feedback, while the
verifier ensures the syntactic accuracy of the generated plans. We demonstrate
the effectiveness of CLAIRIFY in planning chemistry experiments by achieving
state-of-the-art results. We also show that the generated plans can be executed
on a real robot by integrating them with a task and motion planner. | Marta Skreta, Naruki Yoshikawa, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Kourosh Darvish, Alán Aspuru-Guzik, Florian Shkurti, Animesh Garg | 2023-03-24T16:06:11Z | http://arxiv.org/abs/2303.14100v1 | Errors are Useful Prompts: Instruction Guided Task Programming with Verifier-Assisted Iterative Prompting
###### Abstract
Generating low-level robot task plans from high-level natural language instructions remains a challenging problem. Although large language models have shown promising results in generating plans, the accuracy of the output remains unverified. Furthermore, the lack of domain-specific language data poses a limitation on the applicability of these models. In this paper, we propose CLAIRify, a novel approach that combines automatic iterative prompting with program verification to ensure programs written in data-scarce domain-specific language are syntactically valid and incorporate environment constraints. Our approach provides effective guidance to the language model on generating structured-like task plans by incorporating any errors as feedback, while the verifier ensures the syntactic accuracy of the generated plans. We demonstrate the effectiveness of CLAIRify in planning chemistry experiments by achieving state-of-the-art results. We also show that the generated plans can be executed on a real robot by integrating them with a task and motion planner.
## I Introduction
Leveraging natural language instruction to create a plan comes naturally to humans. However, when a robot is instructed to do a task, there is a communication barrier: the robot does not know how to convert the natural language instructions to lower-level actions it can execute, and the human cannot easily formulate lower-level actions. Large language models (LLMs) can fill this gap by providing a rich repertoire of _common sense reasoning_ to robots [1, 2].
Recently, there has been impressive progress in using LLMs [1, 3, 4] for problems involving structured outputs, including code generation [5, 6, 7] and robot programming [8]. These code generation models are often trained on code that is widely available on the Internet and perform well in few-shot settings for generating code in those languages. However, to employ LLMs for task-plan generation there are two main issues to address: (1) _lack of task-plan verification_ and (2) poor _performance for data-scarce domain-specific languages_.
_Lack of task plan verification_ - Task plans generated by LLMs, often, cannot be executed out-of-the-box with robots. There are two reasons for that. First, machine-executable languages are bound by strict rules [9]. If the generated task plan does not adhere to them, it will not be executable. Hence, we need a way to verify the syntactic correctness of the structured task plan. Second, LLMs might generate a task plan that looks reasonable (i.e. is syntactically correct) but is not actually executable by a robot. Avoiding this problem requires information about the world state and robot capabilities, as well as general reasoning about the physical world [10].
_Data scarcity for domain-specific languages -_ It is difficult for LLMs to generate task plans in a zero-shot manner for domain-specific languages (DSLs), such as those in chemistry and physics because there is significantly less data on the Internet for those specific domains, so LLMs are unable to generalize well with no additional information [11, 12]. It is possible to address this by fine-tuning models on pairs of natural-language inputs and structured-language outputs, but it is very difficult to acquire training datasets large enough for the model to learn a DSL reasonably well [13], and there is a large computation cost for fine-tuning LLMs [14]. However, it has been shown that LLMs can adapt to new domains with _effective prompting_[15]. Our insight is _to leverage the in-context ability of an LLM by providing the rules of a structured language as input_, to generate a plan according to the template of the target DSL.
In this work, we propose to address the verification and data-scarcity challenges. We introduce CLAIRify1, a framework that translates natural language into a domain-specific structured task plan using an automated iterative verification technique to ensure the plan is syntactically valid in the target DSL (Figure 1) by providing the LLM a description of the target language. Our model also takes into account environment constraints if provided. The generated structured-language-like output is evaluated by our verifier, which checks for syntax correctness and for meeting environment constraints. The syntax and constraint errors are then fed back into the LLM generator to generate a new output. This iterative interaction between the generator and the verifier leads to grounded, syntactically correct target language plans.

Fig. 1: Task plans generated by LLMs may contain syntactical errors in domain-specific languages. By using verifier-assisted iterative prompting, CLAIRify can generate a valid program, which can be executed by a robot.
We evaluate the capabilities of CLAIRify using a domain-specific language called Chemical Description Language (XDL) [16] as the target structured language unfamiliar to the LLM. XDL is an XML-based DSL to describe action plans for chemistry experiments in a structured format, and can be used to command robots in self-driving laboratories [17]. Converting experiment descriptions to a structured format is nontrivial due to the large variations in the language used. Our evaluations show that CLAIRify outperforms the current state-of-the-art XDL generation model in [16]. We also demonstrate that the generated plans are executable by combining them with an integrated task and motion planning (TAMP) framework and running the corresponding experiments in the real world. Our contributions are:
* We propose a framework to produce task plans in a DSL using an iterative interaction of an LLM-based generator and a rule-based verifier.
* We show that the interaction between the generator and verifier improves zero-shot task plan generation.
* Our method outperforms the existing XDL generation method in an evaluation by human experts.
* We integrate our generated plans with a TAMP framework, and demonstrate the successful translation of elementary chemistry experiments to a real robot execution.
## II Related work
### _Task Planning_
High-level task plans are often generated from a limited set of actions [9], because task planning becomes intractable as the number of actions and the time horizon grow [18]. One approach to task planning is to use rule-based methods [16, 19]. More recently, it has been shown that models can learn task plans from input task specifications [20, 21, 22], for example using hierarchical learning [23, 24], regression-based planning [25], or reinforcement learning [26]. However, to effectively plan tasks using learning-based techniques, large datasets are required that are hard to collect in many real-world domains. Our approach, on the other hand, generates a task plan directly from an LLM in a zero-shot way on a constrained set of tasks which are directly translatable to robot actions. We ensure that the plan is syntactically valid and meets environment constraints using iterative error checking.
### _Task Planning with Large Language Models_
Recently, many works have used LLMs to translate natural language prompts to robot task plans [2, 8, 10, 27]. For example, Inner Monologue [27] uses LLMs in conjunction with environment feedback from various perception models and state monitoring. However, because the system has no constraints, it can propose plans that are nonsensical. SayCan [10], on the other hand, grounds task plans generated by LLMs in the real world by providing a set of low-level skills the robot can choose from. A natural way of generating task plans is using code-writing LLMs because they are not open-ended (i.e. they have to generate code in a specific manner in order for it to be executable) and are able to generate policy logic. Several LLMs trained on public code are available, such as Codex [5], CodeT5 [6], AlphaCode [7] and CodeRL [28]. LLMs can be prompted in a zero-shot way to generate task plans. For example, Code as Policies [8] repurposes code-writing LLMs to write robot policy code and ProgPrompt [2] generates plans that take into account the robot's current state and the task objectives. However, these methods generate Pythonic code, which is abundant on the Internet. For DSLs, naive zero-shot prompting is not enough; the prompt has to incorporate information about the target language so that the LLM can produce outputs according to its rules.
### _Leveraging Language Models with External Knowledge_
A challenge with LLMs generating code is that the correctness of the code is not assured. There have been many interesting works on combining language models with external tools to improve the reliability of the output. Mind's Eye [12] attempts to ground large language models' reasoning with physical simulation. They trained an LLM on pairs of language and code and used the simulation results to prompt an LLM to answer general reasoning questions. Toolformer [29] incorporates API calls into the language model to improve a downstream task, such as question answering, by fine-tuning the model to learn how to call APIs. LEVER [30] improves LLM prompting for SQL generation by using a model-based verifier trained to verify the generated programs. As SQL is a common language, the language model is expected to understand its grammar. However, for DSLs, it is difficult to acquire training datasets and expensive to execute the plans to verify their correctness. Our method does not require fine-tuning any models or prior knowledge of the target language within the language model. Our idea is perhaps closest to LLM-Augmenter [31], which improves LLM outputs by giving it access to external knowledge and automatically revising prompts in natural language question-answering tasks. Our method similarly encodes external knowledge in the structure of the verifier and prompts, but for a structured and formally verifiable domain-specific language.
## III Task Programming with CLAIRify
_CLAIRify Overview_ - We present a system that takes as input the specifications of a structured language (i.e. all its rules and permissible actions) as well as a task we want to execute written in natural language and outputs a syntactically correct task plan. A general overview of the CLAIRify pipeline is in Figure 2. We combine the input instruction and language description into a prompt and pass the prompt into the structured language generator (here we use GPT-3 [1], a large language model). However, we cannot guarantee that the output from the generator is syntactically valid; an invalid output would fail to compile into lower-level robot actions. To generate syntactically valid programs, we pass the output of the generator through a verifier. The verifier determines whether the generator output follows all the rules and specifications of the target structured language and can be compiled without errors. If it cannot, the verifier returns error messages stating where the error was found and what it was. This is then appended to the generator output and added to the prompt for the next iteration. This process is repeated until a valid program is obtained, or until the timeout condition is reached (if that happens, an error is thrown for the user). Algorithm 1 describes our proposed method.
Once the generator output passes through the verifier with no errors, we are guaranteed that it is syntactically valid structured language. This program can then be translated into lower-level robot actions by passing through TAMP for robot execution. Each component of the pipeline is described in more detail below.
```
Require: Structured language description L, instruction x
Ensure:  Structured language task plan y_SL
function IterativePrompting(L, x):
    y_SL' = Generator(L, x)
    errors = Verifier(y_SL')
    while len(errors) > 0 and not timeout:
        y_SL' = Generator(L, x, y_SL', errors)
        errors = Verifier(y_SL')
    y_SL = y_SL'
    return y_SL
```
**Algorithm 1** CLAIRify: Verifier-Assisted Iterative Prompting
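A minimal Python rendering of this loop is sketched below. The prompt contents and the 10-iteration timeout follow the description in the text, but the llm_call and verify callables and their exact signatures are placeholders for illustration, not CLAIRify's actual implementation.

```
MAX_ITERATIONS = 10  # timeout condition (x = 10 in the experiments described below)

def clairify(instruction, xdl_description, llm_call, verify, constraints=""):
    """Iteratively prompt an LLM until the verifier returns no errors."""
    prompt = f"{xdl_description}\n{constraints}\nConvert to XDL:\n{instruction}"
    plan = llm_call(prompt)
    errors = verify(plan)

    attempts = 0
    while errors and attempts < MAX_ITERATIONS:
        # Feed the erroneous plan and the verifier's error list back to the LLM.
        feedback = (plan + "\nThis XDL was not correct. These were the errors:\n"
                    + "\n".join(errors))
        plan = llm_call(prompt + "\n" + feedback)
        errors = verify(plan)
        attempts += 1

    if errors:
        raise RuntimeError("Timed out without producing a valid plan")
    return plan  # syntactically valid, constraint-respecting XDL
```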
### _Generator_
The generator takes a user's instruction and generates structured-language-like output using a large language model (LLM) using a description of the structured language. The input prompt skeleton is shown in Snippet 1, Figure 3. The description of the XDL language includes its file structure and lists of the available actions (can be thought of as functions), their allowed parameters and their documentation.
Although the description of the target structured language is provided, the candidate task plan is not guaranteed to be syntactically correct (hence we refer to it as "structured-language-like"). To ensure the syntactic correctness of the generated code, the generator is iteratively prompted through automated interaction with the verifier. The generated code is passed through the verifier, and if no errors are generated, then the code is syntactically correct. If errors are generated, we re-prompt the LLM with the incorrect task plan from the previous iteration along with the list of errors indicating why the generated steps were incorrect. The skeleton of the iterative prompt is shown in Snippet 2, Figure 3. The feedback from the verifier is used by the LLM to correct the errors from the previous iteration. This process continues until the generated code is error-free or a timeout condition is reached, in which case we say we were not able to generate a task plan.
### _Verifier_
The verifier works as a syntax checker and static analyzer to check the output of the generator and send feedback to the generator. It first checks whether the input can be parsed as valid XML and then checks that the action tags are allowed, that mandatory properties are present, and that optional properties are correct. This evaluates whether the input is syntactically correct XDL. It also checks that the hardware and reagents used in the procedure are defined or provided as environment constraints, which works as a simple static analysis of necessary conditions for executability. If the verifier catches any of these errors, the candidate task plan is considered invalid. The verifier returns a list of errors it found, which is then fed back to the generator.
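The sketch below is a toy version of such a verifier, written as our own illustration of the checks just described (XML parseability, allowed action tags, mandatory properties, and declared hardware/reagents); the action specification is a made-up fragment, and the real XDL schema is much larger.

```
import xml.etree.ElementTree as ET

# Toy action specification (illustrative fragment, not the full XDL standard):
# each action tag maps to its mandatory properties.
ALLOWED_ACTIONS = {"Add": {"vessel", "reagent"}, "Stir": {"vessel", "time"}}
STRUCTURAL_TAGS = {"XDL", "Synthesis", "Hardware", "Component",
                   "Reagents", "Reagent", "Procedure"}

def verify_xdl(xdl_string):
    """Return a list of error strings; an empty list means the plan passed."""
    try:
        root = ET.fromstring(xdl_string)
    except ET.ParseError as exc:
        return [f"plan cannot be parsed as XML: {exc}"]

    hardware = {e.get("id") for e in root.iter("Component")}
    reagents = {e.get("name") for e in root.iter("Reagent")}
    errors = []

    for step in root.iter():
        if step.tag in STRUCTURAL_TAGS:
            continue
        if step.tag not in ALLOWED_ACTIONS:
            errors.append(f"action does not exist: {step.tag}")
            continue
        for prop in ALLOWED_ACTIONS[step.tag] - step.attrib.keys():
            errors.append(f"missing mandatory property '{prop}' in <{step.tag}>")
        if step.get("vessel") and step.get("vessel") not in hardware:
            errors.append(f"vessel '{step.get('vessel')}' not defined in Hardware list")
        if step.get("reagent") and step.get("reagent") not in reagents:
            errors.append(f"reagent '{step.get('reagent')}' not defined in Reagents list")
    return errors
```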
### _Incorporating Environment Constraints_
Because resources in a robot workspace are limited, we need to consider those constraints when generating task plans. If specified, we include the available resources in the generator prompt. The verifier also catches cases where the candidate plan uses resources beyond those available to the robot. Those errors are included in the generator prompt for the next iteration. If a constraint list is not provided, we assume the robot has access to all resources. In the case of chemistry lab automation, those resources include experiment hardware and reagents.

Fig. 2: **System overview**: The LLM takes the input (1), structured language definition, and (optionally) resource constraints and generates unverified structured language (2). The output is examined by the verifier and is passed to the LLM with feedback (3). The LLM-generated output passes through the verifier (4). The correct output (5) is passed to the task and motion planner to generate robot trajectories. The robot executes the planned trajectory (6).
### _Interfacing with Planner on Real Robot_
In many cases, the target DSL is not embodied, and it is hardware independent. Our verified task plan only contains high-level descriptions of actions. To execute those actions by a robot, we need to map them to low-level actions and motion that the robot can execute. To ensure the generated structured language is executable by the robot, we employ a task and motion planning (TAMP) framework. In our case, we use PDDLStream [9] to generate robot action and motion plans simultaneously. In this process, visual perception information, coming from the robot camera, grounds the predicates for PDDLStream, and verified task plans are translated into problem definitions in PDDLStream.
For the chemistry lab automation domain, high-level actions in XDL are mapped to intermediate goals in PDDLStream, resulting in a long-horizon, multistep planning problem definition. To also incorporate safety considerations for robot execution, we use a constrained task and motion planning framework for lab automation [33] to execute the XDL generated by CLAIRify.
## IV Experiments and Evaluation
Our experiments are designed to evaluate the following hypotheses: i) Automated iterative prompting increases the success rate of unfamiliar language generation, ii) The quality of generated task plans is better than existing methods, iii) Generated plans can be executed by actual hardware.
### _Experimental Setup_
To generate XDL plans, we use text-davinci-003, the most capable GPT-3 model at the time of writing. We chose to use this instead of code-davinci-002 due to query and token limits.
To execute the plans in the real world, we use an altered version of Franka Emika Panda arm robot, equipped with a Robotiq 2F-85 gripper, to handle vessels. The robot also communicates with instruments in the chemistry laboratory, such as a weighing scale and a magnetic stirrer. These devices are integrated to enable pouring and stirring skills.
_Datasets_ - We evaluated our method on two different datasets:
(1) **Chem-RnD [Chemistry Research & Development]**: This dataset consists of 108 detailed chemistry-protocols for synthesizing different organic compounds in real-world chemistry labs, sourced from the Organic Syntheses dataset (volume 77) [34]. Due to GPT-3 token limits, we only use experiments with less than 1000 characters. We use Chem-RnD as a proof-of-concept that our method can generate task plans for complex chemistry methods. We do not aim to execute the plans in the real world, and so we do not include any constraints.
(2) **Chem-EDU [Everyday Educational Chemistry]**: We evaluate the integration of CLAIRify with real-world robots through a dataset of 40 natural language instructions containing only safe (edible) chemicals and that are, in principle, executable by our robot. The dataset consists of basic chemistry experiments involving edible household chemicals, including acid-base reactions and food preparation procedures2. When generating the XDL, we also included environment constraints based on what equipment our robot had access to (for example, our robot only had access to a mixing container called "beaker").
Footnote 2: **CLAIRify Data & code:**[https://github.com/ac-rad/xdl-generation/](https://github.com/ac-rad/xdl-generation/)
### _Metrics and Results_
The results section is organized based on the four performance metrics that we will consider, namely: Ability to generate structured-language output, Quality of the generated plans, Number of interventions required by the verifier, and Robotic validation capability. We compared the performance of our method with SynthReader, a state-of-the-art XDL generation algorithm which is based on rule-based tagging and grammar parsing of chemical procedures [16].
(1) **Ability to generate a structured language plan.** First, we investigate the success probability for generating plans. For CLAIRify, if it is in the iteration loop for more than \(x\) steps (here, we use \(x=10\)), we say that it is unable to generate a plan and we exit the program. When comparing with SynthReader, we consider that approach unable to generate a structured plan if the SynthReader IDE (called ChemIDE3)
| Dataset | Method | Number generated \(\uparrow\) | Expert preference \(\uparrow\) |
| --- | --- | --- | --- |
| Chem-RnD | SynthReader [16] | 92/108 | 13/108 |
| Chem-RnD | CLAIRify [ours] | **105/108** | 75/108 |
| Chem-EDU | SynthReader [16] | 0/40 | - |
| Chem-EDU | CLAIRify [ours] | **40/40** | - |

TABLE I: Comparison of our method with existing methods on the number of successfully generated valid XDL plans and their quality on 108 organic chemistry experiments from [34].
Fig. 3: **Prompt skeleton:** (1) At the initial generation, we prompt the LLM with a description of XDL and the natural language instruction. (2) After the LLM generates structured-language-like output, the erroneous XDL and the verifier's list of errors are appended to the prompt for the next iteration.

```
# <Description of XDL>
# <Hardware constraints (optional)>
# <Reagent constraints (optional)>
Convert to XDL:
# <Natural language instruction>
```
**Snippet 1:** Initial prompt

```
# <XDL from previous iteration>
This XDL was not correct.
These were the errors:
# <List of errors, one per line>
```
**Snippet 2:** Iterative prompt
throws a fatal error when asked to create a plan. For both models, we also consider them unable to generate a plan if the generated plan only consists of empty XDL tags (i.e. no experimental protocol). For all experiments, we count the total number of successfully generated language plans divided by the total number of experiments. Using this methodology, we tested the ability of the two models to generate output on both the Chem-RnD and Chem-EDU datasets. The results for both models and both datasets are shown in Table I. We find that out of 108 Chem-RnD experiments, CLAIRify successfully returned a plan 97% of the time, while SynthReader returned a plan 85% of the time. For the Chem-EDU dataset, CLAIRify generated a plan for all instructions. SynthReader was unable to generate any plans for that dataset, likely because the procedures are different from typical chemical procedures (they use simple action statements). This demonstrates the generalizability of our method: we can apply it to different language styles and domains and still obtain coherent plans.
(2) **Quality of the predicted plan (without executing the plan)**. To determine if the predicted task plans actually accomplish every step of their original instructions, we report the number of actions and parameters that do not align between the original and generated plan, as annotated by expert experimental chemists. To compare the quality of the generated plans between CLAIRify and SynthReader, we ask expert experimental chemists to, given two anonymized plans, either pick a preferred plan among them or classify them as equally good. We also ask them to annotate errors in the plans in the following categories: Missing action, Missing parameter, Wrong action, Wrong parameter, Ambiguous value, Other error. Here, actions refer to high-level steps in the procedure (e.g., <Add reagent="acetic acid"> is an action) and parameters refer to reagents, hardware, quantities and experiment descriptors (e.g., in <HeatChill vessel="beaker" temp="100C">, vessel and temp are both parameters). The annotations were performed using the LightTag Text Annotation Tool [35].
_Chem-RnD dataset_ - The results for the Chem-RnD dataset with respect to expert preference are reported in the last column of Table I. We found that out of 108 experiments, experts preferred the XDL plan generated from CLAIRify 75 times and the one from SynthReader 13 times (the remaining 20 were considered to be of similar quality).
The distributions of the annotated errors are shown in Figure 4. We find that for 4 out of 6 error categories, our model does at least as well as or better than the baseline method when considering the mean and median of the distributions. We also find that for those categories, our method produces more experiments with 0 errors.
One advantage of our method is that it generates fewer plans with missing actions than the baseline. As XDL generation in SynthReader is implemented by rule-based pattern-matching techniques, any actions that do not match those templates would not appear in the final XDL. For example, for the protocol:
To a solution of m-CPA (200 mg, 0.8 mmol) in dichloromethane (10 ml), cooled to 0 deg C, was added dropwise a solution of 5-chloro-oxa-3-thia-tricyclodec-8-ene (150 mg, 0.8 mmol) in dichloromethane (10 ml).

the plan generated by CLAIRify was

<Add vessel="v1" reagent="m-CPA" amount="200 mg"/> <Add vessel="v1" reagent="dichloromethane" volume="10 ml"/> <Add vessel="v2" reagent="5-chloro-oxa-3-thia-tricyclodec-8-ene" amount="150 mg"/> ...
However, our model produced plans with a greater number of wrong actions than SynthReader. This is likely because our model is missing domain knowledge on certain actions that need to be included in the prompt or verifier. For example, given the instruction _"Dry solution over magnesium sulfate"_, our model inserts a <Dry.../> into the XDL plan, but the instruction is actually referring to a procedure where one passes the solution through a short cartridge containing magnesium sulphate, which seems to be encoded in SynthReader. Another wrong action our model performs is reusing vessels. In chemistry, one needs to ensure a vessel is uncontaminated before using it. However, our model generates plans that can use the same vessel in two different steps without washing it in between. Our model also sometimes generates plans with ambiguous values. For example, many experiment descriptions include conditional statements such as "Heat the solution at the boiling point until it becomes white". Conditions in XDL need a numerical condition as a parameter. Our model tries to incorporate them by including actions such as <HeatChill temp="boiling point" time="until it becomes white"/>, but they are ambiguous. We can make our model better in the future by incorporating more domain knowledge into our structured language description and improving our verifier with real-world constraints. For example, we can incorporate visual feedback from the environment, include look-up tables for common boiling points, and ensure vessels are not reused before cleaning.
Despite the XDL plans generated by our method containing errors, we found that the experts placed greater emphasis on missing actions than ambiguous or wrong actions when picking the preferred output, indicating larger severity of this class of error for the tasks and outputs investigated here.
_Chem-EDU dataset_ - We annotated the errors in the Chem-EDU datasets using the same annotation labels as for the Chem-RnD dataset. The breakdown of the errors is in the left plot of Figure 4. Note that we did not perform a comparison with SynthReader as no plans were generated from it. We find that the error breakdown is similar to that from Chem-RnD, where we see ambiguous values in experiments that have conditionals instead of precise values. We also encounter a few wrong parameter errors, where the model does not include units for measurements. This can be fixed in future work by improving the verifier to check for these constraints.
(3) **Number of interventions required by the verifier.** To better understand the role of the verifier in CLAIRify, we analyzed the number of interactions between the generator and verifier for each dataset. In Table II, we show that each experiment in the Chem-RnD dataset runs through the verifier on average 2.6 times, while the Chem-EDU dataset experiments run through it 1.15 times on average. The difference between the two datasets likely exists because the Chem-EDU experiments are shorter and less complicated. The top Chem-EDU error encountered by the verifier was that an item in the plan was not defined in the Hardware or Reagents list, mainly because we included hardware constraints for this dataset that we needed to match in our plan. In Figure 5, we show a sample loop series between the generator and verifier.
(4) **Robotic validation (Chem-EDU only).** To analyze how well our system performs in the real world, we selected three experiments from the Chem-EDU dataset and executed them on our robot.
_Solution Color Change Based on pH_ - As a basic chemistry experiment, we demonstrated the color change of a solution containing red cabbage juice. This is a popular introductory demonstration in chemistry education, as the anthocyanin pigment in red cabbage can be used as a pH indicator [36]. We prepared red cabbage solution by boiling red cabbage leaves in hot water. The colour of the solution is dark purple/red. Red cabbage juice changes its color to bright pink if we add an acid and to blue if we add a base, and so we acquired commercially-available vinegar (acetic acid, an acid) and baking soda (sodium bicarbonate, a base).
In this experiment, we generated XDL plans using CLAIRify from two language inputs:
[1] Add 40 g of red cabbage solution into a beaker; Add 10 g of vinegar into the beaker, then stir the solution for 10 seconds.
[2] Add 40 g of red cabbage solution into a beaker, Add 10 g of baking soda into the beaker, then stir the solution for 10 seconds.
| Dataset | Average num. verifier calls | Max/min verifier calls | Error type caught by verifier [count] |
| --- | --- | --- | --- |
| Chem-RnD | \(2.58\pm 2.00\) | 10/1 | missing property in action [306]; property not allowed [174]; wrong tag [120]; action does not exist [21]; item not defined in Hardware or Reagents list [15]; plan cannot be parsed as XML [6] |
| Chem-EDU | \(1.15\pm 0.45\) | 3/1 | item not defined in Hardware or Reagents list [47]; property not allowed [26]; wrong tag [40]; missing property in action [3] |

TABLE II: **Verifier Analysis**. We report the average number of times CLAIRify calls the verifier for the experiments in a given dataset, as well as the minimum and maximum number of times. We also report the type of error encountered by the verifier and the number of times it caught that type.
| Variations of Iterative Prompt Design using Verifier Error Messages | Plan's generated success rate (\%) \(\uparrow\) |
| --- | --- |
| _Naive:_ XDL from previous iteration and string "This XDL was not correct. Please fix the errors." | 0 |
| _Last Error:_ Error list from verifier from previous iteration | 30 |
| _All Errors cumulative:_ Accumulated error list from all previous iterations | 50 |
| _XDL + Last Error:_ XDL and error list from verifier from previous iteration | 100 |

TABLE III: Number of XDL plans successfully generated for different error message designs in the iterative prompting scheme on a validation set from Chem-RnD.
Figure 6 shows the flow of the experiment. CLAIRify generated an XDL plan that correctly captured the experiment; the plan was then passed through TAMP to generate a low-level action plan and was then executed by the robot.
_Kitchen Chemistry_ - We then tested whether our robot could execute a plan generated by our model for a different application of household chemistry: food preparation. We generated a plan using CLAIRify for the following lemonade beverage, which can be viewed on our website:
Add 15 g of lemon juice and sugar mixture to a cup containing 30 g of sparkling water. Stir vigorously for 20 sec.
### _Ablation Studies_
We assess the impact of various components in our prompt designs and feedback messaging from the verifier. We performed these tests on a small validation set of 10 chemistry experiments from Chem-RnD (not used in the test set) and report the number of XDL plans successfully generated (i.e., plans that did not remain in the iteration loop for \(x=10\) steps).
_Prompt Design_ - To evaluate the prior knowledge of GPT-3 about XDL, we first tried prompting the generator without an XDL description, i.e., with the input:
```
initial_prompt = """
convert to XDL:
#<Natural language instruction>
"""
```
The LLM was unable to generate XDL for any of the inputs from the small validation set that contains 10 chemistry experiments. For most experiments, when asked to generated XDL, the model output a rephrased version of the natural language input. In the best case, it output some notion of structure in the form of S-expressions or XML tags, but the outputs were very far away from correct XDL and were not related to chemistry. We tried the same experiment with code-davinci-002; the outputs generally had more structure but were still nonsensical. This result suggests the LLM does not have the knowledge of the target language and including the language description in the prompt is essential to generate an unfamiliar language.
_Feedback Design_ - We experimented with prompts in our iterative prompting scheme containing various levels of detail about the errors. The baseline prompt contains a description as well as natural language instruction. We wanted to investigate how much detail is needed in the error message for the generator to be able to fix the errors in the next iteration. For example, is it sufficient to write "There was an error in the generated XDL", or do we need to include a list of errors from the verifier (such as "Quantity is not a permissible attribute for the Add tag"), or do we also need to include the erroneous XDL from the previous iteration?
We find that including the erroneous XDL from the previous iteration and saying why it was wrong resulted in the highest number of successfully generated XDL plans. Including a list of errors was better than only writing "This XDL was not correct. Please fix the errors", which was not informative enough to fix any errors. Including the erroneous XDL from the previous iteration is also important; we found that including only a list of the errors without the context of the XDL plan resulted in low success rates.
## V Conclusion and Future Work
In this paper, we introduce CLAIRify to generate structured language task plans in a DSL by providing an LLM with a description of the language in a zero-shot manner. We also ensure that the task plan is syntactically correct in the DSL by using a verifier and iterative prompting. Finally, we show that our plans can incorporate environmental constraints. We evaluated the performance of CLAIRify on two datasets and found that our method was able to generate better plans than existing baselines. Lastly, we translated a select number of these plans into real-world robot demonstrations.
In the future, we will incorporate feedback from the robot planner and environment into our plan generation process. We will also improve the verifier to encode knowledge of the target domain's nuances. With these technical refinements, we expect our system can become even better integrated into robotic task planning for different domains.

Fig. 5: **Feedback loop between the Generator and Verifier.** The input text is converted to structured-like language via the generator and is then passed through the verifier. The verifier returns a list of errors (marked with a yellow 1). The feedback is passed back to the generator along with the erroneous task plan, generating a new task plan. Now that previous errors were fixed and the tags could be processed, new errors were found (including a constraint error that the plan uses a vessel not in the environment). These errors are denoted with a blue 2. This feedback loop is repeated until no more errors are caught, which in this case required 3 iterations.
## VI Acknowledgements
We would like to thank members of the Matter Lab for annotating task plans. We would also like to thank the Acceleration Consortium for their generous support, as well as the Carlsberg Foundation.
|
2309.01230 | lfads-torch: A modular and extensible implementation of latent factor
analysis via dynamical systems | Latent factor analysis via dynamical systems (LFADS) is an RNN-based
variational sequential autoencoder that achieves state-of-the-art performance
in denoising high-dimensional neural activity for downstream applications in
science and engineering. Recently introduced variants and extensions continue
to demonstrate the applicability of the architecture to a wide variety of
problems in neuroscience. Since the development of the original implementation
of LFADS, new technologies have emerged that use dynamic computation graphs,
minimize boilerplate code, compose model configuration files, and simplify
large-scale training. Building on these modern Python libraries, we introduce
lfads-torch -- a new open-source implementation of LFADS that unifies existing
variants and is designed to be easier to understand, configure, and extend.
Documentation, source code, and issue tracking are available at
https://github.com/arsedler9/lfads-torch . | Andrew R. Sedler, Chethan Pandarinath | 2023-09-03T17:33:24Z | http://arxiv.org/abs/2309.01230v1 | lfads-torch: A modular and extensible implementation of latent factor analysis via dynamical systems
###### Abstract
Latent factor analysis via dynamical systems (LFADS) is an RNN-based variational sequential autoencoder that achieves state-of-the-art performance in denoising high-dimensional neural activity for downstream applications in science and engineering. Recently introduced variants and extensions continue to demonstrate the applicability of the architecture to a wide variety of problems in neuroscience. Since the development of the original implementation of LFADS, new technologies have emerged that use dynamic computation graphs, minimize boilerplate code, compose model configuration files, and simplify large-scale training. Building on these modern Python libraries, we introduce lfads-torch--a new open-source implementation of LFADS that unifies existing variants and is designed to be easier to understand, configure, and extend. Documentation, source code, and issue tracking are available at: [https://github.com/arsedler9/lfads-torch](https://github.com/arsedler9/lfads-torch).
**Keywords:** deep learning, neuroscience, dynamical systems
## 1 Introduction
Modern technologies for monitoring neural systems include a wide range of modalities, many of which record noisy, high-dimensional observations. These data are often processed and denoised to yield representations of activity that are useful for studying neural function, or for decoding or classifying behaviors. Neural population models (NPMs) accomplish this denoising by leveraging spatiotemporal structure in neural recordings, yielding denoised activity patterns on a single-trial basis and millisecond timescale. NPMs based on artificial neural networks offer significant performance and modularity advantages over previous approaches, as architectures and loss functions can easily be modified to support new modeling goals and data modalities. Latent factor analysis via dynamical systems (LFADS) [1; 2] is one such model that has matured over the the last several years to support automated hyperparameter tuning [3; 4], electromyography (EMG) and calcium imaging modalities [5; 6; 7], and stabilization of long-term recordings [8].
The initial implementation of LFADS used TensorFlow 1.5, which relied on a static computation graph and required users to internalize additional concepts like placeholders and sessions. This made modification, debugging, and inspecting intermediate outputs opaque and cumbersome. Additionally, functions like dataset handling (shuffling, batching, etc.), optimization loops, and logging had to be implemented manually. Finally, model configuration was handled using a command-line interface, so alternative architectures had to be implemented using control flow within the code itself. These factors added substantial complexity to the code base, which made further experimentation and development more challenging and less accessible to the larger neuroscience community.
Since the development of the original model, the ecosystem of deep learning technologies has seen significant growth and maturation. PyTorch introduced dynamic computation graphs and fast eager execution, which allow an intuitive model development and debugging workflow [9]. PyTorch Lightning virtually eliminated the need for engineering boilerplate and provided a template for principled compartmentalization of modeling code [10]. Hydra provided a mechanism for composable configurations, allowing models and their components to be defined in separate configuration files and instantiated on the fly [11]. Finally, Ray's Tune framework greatly simplified large-scale hyperparameter tuning [12]. Together, these technologies provide an opportunity to substantially lower the barriers to modeling neural data with LFADS and further enhance its capabilities.
In this work we introduce lfads-torch, an implementation of LFADS and related models that uses modern Python libraries to achieve intuitive and user-friendly development and debugging. We describe how the code base allows rapid iteration and present results validating the implementation against its predecessors.
## 2 The lfads-torch Library: An Overview
Basic advantages of lfads-torch over previous implementations are eager execution, improved modularity, and less engineering boilerplate code. The advanced functionality spans three main categories: _modules_ (§2.1), _configuration_ (§2.2), and _large-scale runs_ (§2.3).
### Modules
#### 2.1.1 augmentations and the AugmentationStack
Data augmentation is an effective tool for training LFADS models. In particular, augmentations that act on both the input data and the reconstruction cost gradients enable resistance to identity overfitting (coordinated dropout; [3, 4]) and the ability to infer firing rates with spatiotemporal superresolution (selective backpropagation through time [6, 7]). Other augmentations can reduce the impact of correlated noise [5].
In lfads-torch, we provide a simple interface for applying data augmentations via the AugmentationStack class. The user creates the object by passing in a list of transformations and specifying the order in which they should be applied to the data batch, the loss tensor, or both. Separate AugmentationStacks are applied automatically by the LFADS object during training and inference, making it much easier to experiment with new augmentation strategies. Notably, the augmentations module provides implementations of CoordinatedDropout [3, 4], SelectiveBackpropThruTime [6], and TemporalShift [5].
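To make the pattern concrete, the snippet below is a minimal, illustrative sketch of how an augmentation stack of this kind can be composed; the class and method names here (ExampleAugmentationStack, process_batch, process_loss, dropout_mask) are assumptions for exposition, not the actual lfads-torch API.

```python
import torch


class ExampleAugmentationStack:
    """Illustrative stack that applies transforms to data batches and/or loss tensors.

    `transforms` is a list of callables; `batch_order` and `loss_order` give the
    indices (into `transforms`) to apply, in order, to the batch or to the loss.
    """

    def __init__(self, transforms, batch_order=(), loss_order=()):
        self.transforms = transforms
        self.batch_order = batch_order
        self.loss_order = loss_order

    def process_batch(self, batch):
        for i in self.batch_order:
            batch = self.transforms[i](batch)
        return batch

    def process_loss(self, loss):
        for i in self.loss_order:
            loss = self.transforms[i](loss)
        return loss


def dropout_mask(batch, rate=0.3):
    """Zero out a random subset of entries (coordinated-dropout-style input masking)."""
    keep = (torch.rand_like(batch) > rate).float()
    return batch * keep


stack = ExampleAugmentationStack([dropout_mask], batch_order=(0,))
noisy = stack.process_batch(torch.randn(16, 100, 50))  # trials x time x neurons
```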
#### 2.1.2 priors: Modular priors
The LFADS model computes KL penalties between posteriors and priors for both initial condition and inferred input distributions, which are added to the reconstruction cost in the variational ELBO [1, 2]. In the original implementation, priors were multivariate normal and autoregressive multivariate normal for the initial condition and inferred inputs, respectively.
As priors impact the form of inferred inputs, users may want to experiment with alternate distributions that are more appropriate for certain brain areas and tasks.
With lfads-torch, users can implement custom priors by writing make_posterior and forward functions and passing the modules to the LFADS constructor. In addition to the standard MultivariateNormal and AutoregressiveMultivariateNormal priors, we provide a MultivariateStudentT prior to encourage inference of heavy-tailed inputs [13].
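For readers unfamiliar with how these KL terms enter the ELBO, the following hedged sketch shows a learnable Gaussian prior with a make_posterior/forward pair, built from torch.distributions; the module structure mirrors the description above but is illustrative rather than the exact lfads-torch interface.

```python
import torch
from torch import nn
from torch.distributions import Normal, kl_divergence


class IllustrativeGaussianPrior(nn.Module):
    """Learnable diagonal-Gaussian prior over initial conditions (illustrative)."""

    def __init__(self, dim):
        super().__init__()
        self.loc = nn.Parameter(torch.zeros(dim))
        self.log_scale = nn.Parameter(torch.zeros(dim))

    def make_posterior(self, mean, logvar):
        # Posterior produced by the encoder for each trial.
        return Normal(mean, torch.exp(0.5 * logvar))

    def forward(self, posterior):
        prior = Normal(self.loc, self.log_scale.exp())
        # KL(q || p), summed over latent dims, averaged over the batch.
        return kl_divergence(posterior, prior).sum(dim=-1).mean()


prior = IllustrativeGaussianPrior(dim=64)
post = prior.make_posterior(torch.zeros(8, 64), torch.zeros(8, 64))
kl_penalty = prior(post)  # scalar added to the reconstruction cost in the ELBO
```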
#### 2.1.3 Recons: Modular reconstruction
The original LFADS implementation provided Poisson and Gaussian observation models. The former is often applied to binned spike counts and the latter has proven useful for electrocorticography [14]. Recent work has benefited from changing the output distribution for different data modalities [5, 7].
In lfads-torch, we include implementations of Poisson, Gaussian, Gamma [5], and ZeroInflatedGamma[7] distributions, widening the applicability of the implementation to EMG, calcium imaging, and more. We also provide an interface for users to implement new observation models by subclassing the abstract Reconstruction class.
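As an illustration of what a new observation model might look like, here is a hedged sketch of a Poisson reconstruction module; the base class and method names below are assumptions for exposition, not the exact lfads-torch Reconstruction interface.

```python
from abc import ABC, abstractmethod

import torch


class ReconstructionBase(ABC):
    """Illustrative stand-in for an abstract reconstruction (observation model) class."""

    @abstractmethod
    def reconstruction_loss(self, output_params, recon_data):
        ...


class PoissonReconstruction(ReconstructionBase):
    """Poisson observation model for binned spike counts."""

    def reconstruction_loss(self, output_params, recon_data):
        # `output_params` are log-rates predicted by the model (trials x time x neurons);
        # `recon_data` are the observed spike counts being reconstructed.
        rates = torch.exp(output_params)
        nll = rates - recon_data * output_params  # Poisson NLL up to a constant
        return nll.mean()


recon = PoissonReconstruction()
loss = recon.reconstruction_loss(torch.zeros(4, 50, 30), torch.poisson(torch.ones(4, 50, 30)))
```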
Another noteworthy change is the separation of the data used as input (encod_data) from the data being reconstructed (recon_data). This enables applications like co-smoothing [15], forward prediction [15], and reconstruction across data modalities.
#### 2.1.4 Other PyTorch Lightning capabilities
By building lfads-torch within the PyTorch Lightning ecosystem, we give users access to a large and growing array of complex functionality that would be technically challenging or tedious to implement manually. Examples include logging, checkpointing, profiling, mixed precision training, learning rate scheduling, pruning, accelerators, and more (see the docs).
### Configuration
In lfads-torch, a flexible configuration process gives users significant control over model and architecture configuration without needing to edit source code. Run configurations are built by composing a main YAML config file with selected model, datamodule, and callbacks configs. The target classes and arguments specified in these configs are directly used by Hydra to recursively instantiate the various objects used in the training pipeline. We provide several preprocessed datasets and example configurations for the Neural Latents Benchmark (NLB) [15] to help users understand the intended workflow.
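For readers unfamiliar with Hydra's composition model, the sketch below shows the general pattern of recursively instantiating objects named in a config; the config contents are hypothetical and kept in memory for illustration, not the actual files shipped with lfads-torch.

```python
import torch
from hydra.utils import instantiate
from omegaconf import OmegaConf

# A hypothetical composed config: `_target_` tells Hydra which class to build.
cfg = OmegaConf.create({
    "optimizer": {"_target_": "torch.optim.Adam", "lr": 1e-3},
})

params = [torch.nn.Parameter(torch.zeros(3))]
optimizer = instantiate(cfg.optimizer, params=params)  # recursively built from config
print(type(optimizer).__name__)  # Adam
```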
### Large-scale runs
While the LFADS class and other training objects can be instantiated directly like any other Python class, we provide flexible machinery for instantiating and training many models in parallel with different configurations for hyperparameter tuning. The run_model function uses a config path, along with a set of overrides, to instantiate training objects and run the training loop. We provide run_single.py for training single models in this manner, as well as run_multi.py and run_pbt.py for specifying hyperparameter spaces and performing large-scale searches and Population-based Training [4] with Tune.
For users who want to perform large-scale hyperparameter tuning using AutoLFADS [4] with minimal setup and infrastructure costs, we made lfads-torch available on NeuroCAAS [16] for a convenient drag-and-drop modeling workflow.
## 3 Validation
We first verified that forward and backward passes of lfads-torch were identical to a previous implementation [17] in the deterministic case. We then trained paired models with identical hyperparameters and initializations on several NLB datasets (5 ms; MC_Maze, MC_RTT, Area2_Bump, DMFC_RSG; Figure 1) and found that final losses were nearly identical across the hyperparameter space. Finally, we replicated performance of the NLB AutoLFADS baseline across all 5 ms datasets (see Table 1, Appendix Table 2 and the EvalAI Leaderboard).
## 4 Summary
lfads-torch is an implementation of LFADS that leverages modern deep learning frameworks, allowing easier application, robust hyperparameter tuning, and experimentation with new architectures and training methods. We hope that lfads-torch helps researchers push the boundaries of high-performance neural population modeling.
## Acknowledgments and Disclosure of Funding
This work was supported by NSF Graduate Research Fellowship DGE-2039655 (ARS), NIH-NINDS/OD DP2NS127291, NIH BRAIN/NIDA RF1 DA055667, and the Alfred P. Sloan Foundation (CP). We declare no competing interests.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline Dataset & Implementation & co-bps & vel R2 & psth R2 & fp-bps & tp-corr \\ \hline MC\_Maze & lfads-tf2 & 0.3364 & **0.9097** & **0.6360** & 0.2349 & \\ & lfads-torch & **0.3497** & 0.9027 & 0.6170 & **0.2447** & \\ \hline MC\_RTT & lfads-tf2 & 0.1868 & 0.6167 & & 0.1213 & \\ & lfads-torch & **0.1882** & **0.6176** & & **0.1240** & \\ \hline Area2\_Bump & lfads-tf2 & **0.2569** & 0.8492 & **0.6318** & **0.1505** & \\ & lfads-torch & 0.2535 & **0.8516** & 0.6104 & 0.1455 & \\ \hline DMFC\_RSG & lfads-tf2 & **0.1829** & & **0.6359** & 0.1844 & **-0.8248** \\ & lfads-torch & 0.1820 & & 0.5873 & **0.1960** & -0.7627 \\ \hline \end{tabular}
\end{table}
Table 1: Test performance of lfads-tf2 and lfads-torch for four NLB datasets (5 ms). |
2302.12880 | A proof using Böhme's Lemma that no Petersen family graph has a flat
embedding | Sachs and Conway-Gordon used linking number and a beautiful counting argument
to prove that every graph in the Petersen family is intrinsically linked (has
a pair of disjoint cycles that form a nonsplit link in every spatial embedding)
and thus each family member has no flat spatial embedding (an embedding for
which every cycle bounds a disk with interior disjoint from the graph). We give
an alternate proof that every Petersen family graph has no flat embedding by
applying Böhme's Lemma and the Jordan-Brouwer Separation Theorem. | Joel Foisy, Catherine Jacobs, Trinity Paquin, Morgan Schalizki, Henry Stringer | 2023-02-24T20:31:32Z | http://arxiv.org/abs/2302.12880v1 | # A proof using Bohme's Lemma that no Petersen family graph has a flat embedding
###### Abstract
Sachs and Conway-Gordon used linking number and a beautiful counting argument to prove that every graph in the Petersen family is intrinsically linked (has a pair of disjoint cycles that form a nonsplit link in every spatial embedding) and thus each family member has no flat spatial embedding (an embedding for which every cycle bounds a disk with interior disjoint from the graph). We give an alternate proof that every Petersen family graph has no flat embedding by applying Bohme's Lemma and the Jordan-Brouwer Separation Theorem.
## 1 Introduction
Sachs [5], [6] and Conway-Gordon [3] (for \(K_{6}\)) proved, using the linking number and a beautiful counting argument, that every graph in the Petersen family is intrinsically linked (that is, has a pair of disjoint cycles that form a nonsplit link in every spatial embedding). As a consequence of being intrinsically linked, every graph in the Petersen family has no flat embedding. A _flat_ spatial embedding is one in which every cycle bounds a disk with interior disjoint from the graph. When such a disk is attached to a cycle, we say that cycle has been _paneled_.
We have devised a new family of proofs that every Petersen family graph has no flat embedding, without using intrinsic linking. We sketch our methods here. Given a graph \(G\) with a flat embedding, consider a set of cycles in \(G\) such that, pair-wise, intersections are either connected or empty. We call such a set of cycles a _Bohme system_. Bohme's Lemma [2] concludes that for a Bohme system of cycles in a flat embedding of \(G\), it is possible to panel the cycles in the system simultaneously-that is, each cycle bounds a disk with interior disjoint from the graph and from all other disks. The Jordan-Brouwer Separation Theorem then allows us to conclude that any sphere resulting from paneling a Bohme system (which we will call a _Bohme sphere_) will separate space into an inside (bounded) and an outside (unbounded) component. Since we work in the PL category, by Alexander's result [1], the inside component is a topological ball.
For the sake of contradiction, we suppose a Petersen family graph has a flat embedding and then find an appropriate Bohme system. We then argue that some resulting Bohme sphere must be intersected by an edge of the embedded graph, which contradicts flatness. In this way, we prove that each Petersen family graph cannot have a flat embedding. In the next section, we provide further details for each graph in the Petersen Family.
We conclude this introductory section with some terminology and background. We work in the PL category throughout.
Suppose a graph \(G\) has a three-cycle with vertices \(a,b,c\). To perform a \(\Delta\)-Y exchange on the three-cycle in \(G\), an extra vertex \(y\) is added, edges \(ab,bc\), and \(ca\) are removed, and the edges \(ay,by\), and \(cy\) are added to transform \(G\) into \(G^{\prime}\). If a graph \(H\) has a degree 3 vertex, \(y\), to perform a Y-\(\Delta\) exchange on \(H\) at \(y\), we begin with the Y, having vertices \(a,b,c,y\) and edges \(ay,by\), and \(cy\). We remove the \(y\) vertex and all edges incident to \(y\) and then add edges \(ab,bc\), and \(ca\) to make a triangle and transform \(H\) to the graph \(H^{\prime}\).
The Petersen family of graphs consists of \(K_{6}\) and \(K_{3,3,1}\) and the graphs obtained from \(K_{6}\) and \(K_{3,3,1}\) by \(\Delta-Y\) exchanges (equivalently, from \(K_{6}\) by \(\Delta-Y\) and \(Y-\Delta\) exchanges). We will denote the remaining graphs in this family, with subscripts indicating the number of vertices, as \(P_{7}\), \(P_{8}\), \(P_{9}\), \(P_{10}\) (which is the classic Petersen graph) and \(K_{4,4}-e\). Robertson, Seymour and Thomas proved Sachs' conjecture that the Petersen family of graphs form the complete minor-minimal set of graphs that are intrinsically linked. They further showed that a graph has a flat embedding if and only if it has a linkless embedding [4].
## 2 Proofs
In this section, we go through details of the proof for each graph in the Petersen family.
### \(K_{6}\)
Denote the vertices of \(K_{6}\) as \(\{a,b,c,d,e,f\}\). By way of contradiction suppose that \(G=K_{6}\) has a flat embedding. Consider the collections of cycles:
\(C_{1}=\{(b,e,f),(b,c,e),(c,e,f),(b,c,f)\}\)
\(C_{2}=\{(b,e,f),(a,b,e),(a,e,f),(a,b,f)\}\)
\(C_{3}=\{(b,e,f),(b,d,e),(d,e,f),(b,d,f)\}\)
Let \(\mathcal{C}=\cup_{i=1}^{3}C_{i}\). Note that the cycles above are induced cycles where \(C_{1}\subset G[\{b,c,e,f\}],C_{2}\subset G[\{a,b,e,f\}]\), and \(C_{3}\subset G[\{b,d,e,f\}]\), where \(G[\{a,b,c,d\}]\) is the subgraph of \(K_{6}\) induced by the vertices \(a,b,c,d\). Notice \((b,e,f)\in C_{i}\) for \(i=1,2,3\), see Figure 1.
Take a flat embedding of \(K_{6}\). As all the cycles of \(\mathcal{C}\) have connected pairwise intersections, we can apply Bohme's Lemma to get simultaneous paneling disks of the ten 3-cycles that make up \(\mathcal{C}\). As we panel each \(C_{i}\), we form a sphere \(T_{i}\), then by Jordan-Brouwer Theorem applied to, say, \(T_{1}\), there are inside and
outside components, \(B_{1}\) and \(B_{2}\) respectively, with \(B_{1}\cup B_{2}=\mathbb{R}^{3}-T_{1}\), and, say, \(B_{1}\) a ball (which follows from Alexander's result [1] in the PL category). Without loss of generality, as the edge \(\overline{ad}\) is disjoint from \(T_{1}\), we may assume that \(ad\subset B_{1}\). As \(ad\subset B_{1}\) with \(a\in T_{2}\) and \(d\in T_{3}\), we have \(T_{2},T_{3}\subset B_{1}\cup T_{1}\). This means that \(T_{2}\) and \(T_{3}\) are in the closure of \(B_{1}\) (this is what we mean when we later say that \(T_{2}\) and \(T_{3}\) lie inside of \(T_{1}\)). Now similarly observe there exists, via Jordan-Brouwer, an inside and outside region for \(T_{2}\). Without loss of generality, suppose that \(T_{3}\) is in the closure of the inner region (ball) of \(T_{2}\). As \(d\not\in T_{2}\) and \(d\in T_{3}\), \(d\) is in the inner ball of \(T_{2}\). As \(T_{2}\) is inside \(T_{1}\) and \(c\in T_{1}\), \(c\) is outside \(T_{2}\). As edge \(cd\) exists and as \(c\) and \(d\) are on opposite sides of \(T_{2}\), edge \(cd\) must intersect a disk of \(T_{2}\). This is a contradiction. Thus \(K_{6}\) has no flat embedding.
**The remaining cases**
For all remaining Petersen family graphs except \(P_{10}\) and \(K_{4,4}-e\), similarly to the proof for \(K_{6}\), we consider a flat embedding, and a cycle \(C\) with vertex set \(V(C)\), together with three vertices disjoint from \(V(C)\); \(v_{1}\), \(v_{2}\) and \(v_{3}\), such that each \(G[V(C)\cup\{v_{i}\}]\) forms a Bohme system. After panelling the Bohme system, the result is three spheres that intersect only along \(C\) and the disk that panels \(C\). In each case, two vertices are on opposite sides of a resulting Bohme sphere. In each case, the two vertices are connected either by an edge, or through a \(Y\) that is disjoint from \(G[V(C)\cup\{v_{i}\}]\) for the appropriate \(i\), and the result is a contradiction of flatness. More details follow.
### \(P_{7}\)
Denote the six vertices of \(K_{6}\) by \(\{a,b,c,1,2,3\}\), and suppose that a \(\Delta-Y\) exchange took place between \(1,2,\) and \(3\). By way of contradiction, suppose that \(P_{7}\) has a flat embedding. Consider the induced subgraphs:
\[\begin{array}{l}G_{1}=G[\{a,b,c,1\}]\\ G_{2}=G[\{a,b,c,2\}]\\ G_{3}=G[\{a,b,c,3\}]\end{array}\]
Note that each \(G_{i}\) contains the cycle \((a,b,c)\). One can check that the induced cycles in \(G_{1}\cup G_{2}\cup G_{3}=K_{3,1,1,1}\) form a Bohme system. Take a flat embedding
Figure 1: A spatial embedding of the subgraph of \(K_{6}\) formed by \(\mathcal{C}\).
of \(P_{7}\). The simultaneous paneling of the Bohme system turns each \(G_{i}\) into a Bohme sphere. From here, the argument follows analogously to the argument for \(K_{6}\), except the vertices \(\{1,2,3\}\) are connected by a \(Y\) instead of a triangle.
### \(K_{4,4}-e\)
Denote the vertices of \(K_{4,4}-e\) as \(V_{1}=\{a,b,c,d\}\) and \(V_{2}=\{1,2,3,4\}\), where all possible edges from \(V_{1}\) to \(V_{2}\) are present, except \(e=4a\).
Assume by way of contradiction that \(K_{4,4}-e\) has a flat embedding.
Consider the set of cycles \(A_{i}\) listed below, where \(i\in\{1,2,3\}\).
\[\begin{array}{l}A_{1}=\{(1,b,4,c),(1,c,4,d),(1,b,4,d)\}\subseteq G[1,4,b,c,d] \\ A_{2}=\{(2,b,4,c),(2,c,4,d),(2,b,4,d)\}\subseteq G[2,4,b,c,d]\\ A_{3}=\{(3,b,4,c),(3,c,4,d),(3,b,4,d)\}\subseteq G[3,4,b,c,d]\end{array}\]
Notice that \(A_{1}\cup A_{2}\cup A_{3}\) satisfies the hypotheses of Bohme's Lemma with edges forming \(K_{4,3}\). By Bohme's lemma, we can simultaneously panel each of these cycles. Note that all paneled cycles pass through the vertex \(4\). Denote the
Figure 3: A spatial embedding of \(K_{4,4}-e\). Possibly the edge \(a3\) intersects a panel.
Figure 2: A spatial embedding of \(P_{7}\).
sphere resulting from paneling \(A_{i}\) as \(S_{i}\), where \(i\in\{1,2,3\}\). Note that each \(A_{i}\) contains the \(Y\) on vertices \(4,b,c,d\), as well as the vertex \(i\). From here, the proof follows similarly to the argument used for \(P_{7}\) (\(K_{6}\)), with the base cycle (triangle) \(C\) replaced with the \(Y\), where for \(i\neq j\), \(S_{i}\cap S_{j}\) is precisely the \(Y\).
### \(K_{3,3,1}\)
Label the vertices of \(K_{3,3,1}\) as in Figure 4. Assume by way of contradiction that \(K_{3,3,1}\) has a flat embedding. Consider the collection of cycles:
\[\begin{array}{c}\mathcal{S}_{1}=\{(a,2,c,3),(a,1,c,2),(a,1,c,3)\}\subseteq G [a,2,c,3,1]\\ \mathcal{S}_{2}=\{(a,2,c,3),(3,b,2,c),(3,b,2,a)\}\subseteq G[a,2,c,3,b]\\ \mathcal{S}_{3}=\{(a,2,c,3),(c,2,v),(c,v,3),(3,v,a),(a,v,2)\}\subseteq G[a,2,c, 3,v]\end{array}\]
Notice that \(\mathcal{S}_{1}\cup\mathcal{S}_{2}\cup\mathcal{S}_{3}\) satisfies Bohme's lemma, and after paneling each, \(\mathcal{S}_{i}\) forms a Bohme sphere. These Bohme spheres share precisely the same base cycle \((a,2,c,3)\) with the corresponding panelling disk. From here, the argument follows similarly to the argument for \(K_{6}\).
### \(P_{8}\)
Denote the vertices of \(P_{8}\) by \(\{v,a,b,c,1,2,3,y\}\), where \(\{v\}\), \(\{a,b,c\}\), and \(\{1,2,3\}\) are the partitions in \(K_{3,3,1}\), and triangle \((v,a,1)\) was exchanged to a \(Y\), with resulting new \(Y\) vertex \(y\) in the resulting \(P_{8}\).
By way of contradiction, suppose that \(P_{8}\) has a flat embedding. Consider the induced subgraphs:
\[G_{1}=G[a,y,1,b,2,v]\] \[G_{2}=G[a,y,1,b,2,c]\] \[G_{3}=G[a,y,1,b,2,3]\]
Figure 4: A spatial embedding of \(K_{3,3,1}\).
and their respective Bohme spheres formed by panelling their induced cycles (which is possible by Bohme's Lemma):
\[\begin{array}{c}\mathcal{B}_{1}=\{(a,y,1,b,2),(v,2,a,y),(v,y,1,b),(v,b,2)\}\\ \mathcal{B}_{2}=\{(a,y,1,b,2),(a,y,1,c,2),(b,1,c,2)\}\\ \mathcal{B}_{3}=\{(a,y,1,b,2),(a,y,1,b,3),(a,2,b,3)\}\end{array}\]
Note that each \(\mathcal{B}_{i}\) contains the cycle \((a,y,1,b,2)\). From here, the argument follows analogously to the argument for \(K_{6}\).
### \(P_{9}\)
Let \(P_{9}\) be labelled as in Figure 6.
Assume by way of contradiction that \(P_{9}\) has a flat embedding. Consider a flat embedding, and consider the sets of cycles (\(S_{i}\) where \(i\in\{1,2,3\}\)):
\[\begin{array}{c}S_{1}=\{(1,2,3,4,5,6),(6,7,3,4,5),(6,7,3,2,1)\}\subset G[1,2, 3,4,5,6,7]\\ S_{2}=\{(1,2,3,4,5,6),(2,8,5,4,3),(2,8,5,6,1)\}\subset G[1,2,3,4,5,6,8]\\ S_{3}=\{(1,2,3,4,5,6),(1,9,4,3,2),(1,9,4,5,6)\}\subset G[1,2,3,4,5,6,9]\end{array}\]
Note that each \(S_{i}\) contains the cycle \(C=(1,2,3,4,5,6)\) and notice that by Bohme's lemma each of these cycles can be simultaneously panelled so each \(S_{i}\) determines a sphere for \(i=1,2,3\) with, for \(i\neq j\), the sphere determined by
Figure 5: A spatial embedding of \(P_{8}\).
Figure 6: Spatial embeddings of \(P_{9}\).
\(S_{i}\) intersecting the sphere determined by \(S_{j}\) only along \(C\) and its corresponding panelling disk. From here, the proof follows analogously to the proof for \(K_{6}\).
### \(P_{10}\)
Suppose by way of contradiction that \(P_{10}\) has a flat embedding. Consider the induced subgraphs formed by each set of vertices below, where labels are from Figure 7.
\[\begin{array}{l}G_{1}=G[\{1,2,3,4,5,6,9\}]\\ G_{2}=G[\{1,2,3,4,5,7,10\}]\\ G_{3}=G[\{1,2,3,4,5,6,8\}]\end{array}\]
Consider a flat embedding of \(P_{10}\). Then by Bohme's lemma, each 5-cycle and 6-cycle in \(G_{1},G_{2}\), and \(G_{3}\) can simultaneously bound disks to form three spheres, corresponding to each \(G_{i}\). Note that the cycle \((1,2,3,4,5)\) is in each \(G_{i}\).
By the Jordan-Brouwer Theorem, since vertex 7 is connected to vertex 10 by an edge, they must both be outside or both be inside of the sphere formed by \(G_{1}\). Without loss of generality, assume both vertices are outside. Then the sphere created by \(G_{2}\) is in the closure of the region outside of the sphere created by \(G_{1}\). Since vertex 8 is connected to both 6 and 10 by an edge, vertex 8 must be between the sphere created by \(G_{2}\) and the sphere created by \(G_{1}\). Thus the sphere formed by \(G_{3}\) is sandwiched between the sphere created by \(G_{2}\) and the sphere created by \(G_{1}\) (that is, it lies in the closure of the outer region formed by \(G_{2}\) and in the closure of the inner region formed by \(G_{1}\)). But now since 7 is connected to 9 by an edge, and 7 is in \(G_{2}\) and 9 is in \(G_{1}\), the sphere formed by \(G_{3}\) intersects the edge connecting 7 to 9. Hence a paneled disk must have been punctured by edge 79, contradicting flatness. In Figure 7, the panelled cycle \((2,3,4,10,8)\) from \(G_{3}\) would be punctured. Therefore \(P_{10}\) has no flat embedding.
We note that the \(P_{10}\) case was similar to the earlier cases, except we needed a shared cycle \(C\) and 5 additional vertices, and \(G_{1}\) and \(G_{3}\) shared the vertex 6, in addition to the cycle \(C\). The spheres they formed shared the base cycle (and disk) as well as the edge \(5,6\). Instead of having a triangle or \(Y\) connecting the extra vertices, here we have edges \(8,10\) and \(7,9\). Since 6 lies in both \(G_{1}\) and \(G_{3}\) but not in \(G_{2}\); \(G_{1}\) and \(G_{3}\) must be on the same side of \(G_{2}\). Even with a swap
Figure 7: A spatial embedding of the Petersen graph.
of positions of \(G_{1}\) and \(G_{3}\) from our original proof, the edge \(8,10\) would have intersected the Bohme sphere formed by \(G_{1}\).
We could have done a \(\Delta-Y\) exchange on \(P_{9}\), on triangle \((7,8,9)\), to get \(P_{10}\) with the same Bohme system we used on \(P_{9}\) and the proof details analogous to those used on \(P_{9}\).
## 3 Acknowledgements
This work was completed as part of a United States National Security Agency grant H89230-22-1-0008 Clarkson University: Summer (2022) Research Experience for Undergraduates in Mathematics.
|
2308.01119 | Unlearning Spurious Correlations in Chest X-ray Classification | Medical image classification models are frequently trained using training
datasets derived from multiple data sources. While leveraging multiple data
sources is crucial for achieving model generalization, it is important to
acknowledge that the diverse nature of these sources inherently introduces
unintended confounders and other challenges that can impact both model accuracy
and transparency. A notable confounding factor in medical image classification,
particularly in musculoskeletal image classification, is skeletal
maturation-induced bone growth observed during adolescence. We train a deep
learning model using a Covid-19 chest X-ray dataset and we showcase how this
dataset can lead to spurious correlations due to unintended confounding
regions. eXplanation Based Learning (XBL) is a deep learning approach that goes
beyond interpretability by utilizing model explanations to interactively
unlearn spurious correlations. This is achieved by integrating interactive user
feedback, specifically feature annotations. In our study, we employed two
non-demanding manual feedback mechanisms to implement an XBL-based approach for
effectively eliminating these spurious correlations. Our results underscore the
promising potential of XBL in constructing robust models even in the presence
of confounding factors. | Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee | 2023-08-02T12:59:10Z | http://arxiv.org/abs/2308.01119v2 | # Unlearning Spurious Correlations in Chest X-ray Classification
###### Abstract
Medical image classification models are frequently trained using training datasets derived from multiple data sources. While leveraging multiple data sources is crucial for achieving model generalization, it is important to acknowledge that the diverse nature of these sources inherently introduces unintended confounders and other challenges that can impact both model accuracy and transparency. A notable confounding factor in medical image classification, particularly in musculoskeletal image classification, is skeletal maturation-induced bone growth observed during adolescence. We train a deep learning model using a Covid-19 chest X-ray dataset and we showcase how this dataset can lead to spurious correlations due to unintended confounding regions. eXplanation Based Learning (XBL) is a deep learning approach that goes beyond interpretability by utilizing model explanations to interactively unlearn spurious correlations. This is achieved by integrating interactive user feedback, specifically feature annotations. In our study, we employed two non-demanding manual feedback mechanisms to implement an XBL-based approach for effectively eliminating these spurious correlations. Our results underscore the promising potential of XBL in constructing robust models even in the presence of confounding factors.
Keywords:Interactive Machine Learning eXplanation Based Learning Medical Image Classification Chest X-ray
## 1 Introduction
While Computer-Assisted Diagnosis (CAD) holds promise in terms of cost and time savings, the performance of models trained on datasets with undetected biases is compromised when applied to new and external datasets. This limitation hinders the widespread adoption of CAD in clinical practice [21, 16]. Therefore, it is crucial to identify biases within training datasets and mitigate their impact on trained models to ensure model effectiveness.
For example, when building models for the differential diagnosis of pathology on chest X-rays (CXR) it is important to consider skeletal growth or ageing as a confounding factor. This factor can introduce bias into the dataset and potentially mislead trained models to prioritize age classification instead of accurately distinguishing between specific pathologies. The effect of skeletal growth on the appearance of bones necessitates careful consideration to ensure that a model focuses on the intended classification task rather than being influenced by age-related features.
An illustrative example of this scenario can be found in a recent study by Pfeuffer et al. [12]. In their research, they utilized the Covid-19 CXR dataset [4], which includes a category comprising CXR images of children. This dataset serves as a pertinent example to demonstrate the potential influence of age-related confounders, given the presence of images from pediatric patients. It comprises CXR images categorized into four groups: Normal, Covid, Lung opacity, and Viral pneumonia. However, a notable bias is introduced into the dataset due to the specific inclusion of the Viral pneumonia cases collected exclusively from children aged one to five years old [9]. This is illustrated in Figure 1 where confounding regions introduced due to anatomical differences between a child and an adult in CXR images are highlighted. Notably, the presence of Epiphyses in images from the Viral pneumonia category (which are all from children) is a confounding factor, as it is not inherently associated with the disease but can potentially mislead a model into erroneously associating it with the category. Addressing these anatomical differences is crucial to mitigate potential bias and ensure accurate analysis and classification in pediatric and adult populations.
Biases like this one pose a challenge to constructing transparent and robust models capable of avoiding spurious correlations. Spurious correlations refer to image regions that are mistakenly believed by the model to be associated with a specific category, despite lacking a genuine association.
While the exact extent of affected images remains unknown, it is important to note that the dataset also encompasses other confounding regions, such as
Figure 1: In the left image, representing a child diagnosed with Viral pneumonia, the presence of Epiphyses on the humerus heads is evident, highlighted with red ellipses. Conversely, the right image portrays an adult patient with Covid-19, where the Epiphyses are replaced by Metaphyses, also highlighted with red ellipses.
texts and timestamps. However, it is worth mentioning that these confounding regions are uniformly present across all categories, indicating that their impact is consistent throughout. For the purpose of this study, we specifically concentrate on understanding and mitigating the influence of musculoskeletal age in the dataset.
eXplanation Based Learning (XBL) represents a branch of Interactive Machine Learning (IML) that incorporates user feedback in the form of feature annotation during the training process to mitigate the influence of confounding regions [17]. By integrating user feedback into the training loop, XBL enables the model to progressively improve its performance and enhance its ability to differentiate between relevant and confounding features [6]. In addition to unlearning spurious correlations, XBL has the potential to enhance users' trust in a model [5]. By actively engaging users and incorporating their expertise, XBL promotes a collaborative learning environment, leading to increased trust in the model's outputs. This enhanced trust is crucial for the adoption and acceptance of models in real-world applications, particularly in domains where decisions have significant consequences, such as medical diagnosis.
XBL approaches typically add regularization to the loss function used when training a model, enabling it to disregard the impact of confounding regions. A typical XBL loss can be expressed as:
\[L=L_{CE}+L_{expl}+\lambda\sum_{i=0}\theta_{i}^{2}\;, \tag{1}\]
where \(L_{CE}\) is categorical cross entropy loss that measures the discrepancy between the model's predictions and ground-truth labels; \(\lambda\) is a regularization term; \(\theta\) refers to network parameters; and \(L_{expl}\) is an explanation loss. Explanation loss can be formulated as:
\[L_{expl}=\sum_{i=0}^{N}M_{i}\odot Exp(x_{i})\;, \tag{2}\]
where \(N\) is the number of training instances, \(x\in X\); \(M_{i}\) is a manual annotation of confounding regions in the input instance \(x_{i}\); and \(Exp(x_{i})\) is a saliency-based model explanation for instance \(x_{i}\), for example generated using Gradient weighted Class Activation Mapping (GradCAM) [17]. GradCAM is a feature attribution based model explanation that computes the attention of the learner model on different regions of an input image, indicating the regions that significantly contribute to the model's predictions [18]. This attention serves as a measure of the model's reliance on these regions when making predictions. The loss function, \(L_{expl}\), is designed to increase as the learner's attention to the confounding regions increases. Overall, by leveraging GradCAM-based attention and the associated \(L_{expl}\) loss, XBL provides a mechanism for reducing a model's attention to confounding regions, enhancing the interpretability and transparency of a model's predictions.
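To make Equations (1) and (2) concrete, the following hedged PyTorch sketch shows how an explanation penalty of this form can be added to the classification loss; the GradCAM computation itself is assumed to be available elsewhere, and all function and variable names here are illustrative rather than taken from any particular XBL implementation.

```python
import torch
import torch.nn.functional as F


def xbl_loss(logits, labels, gradcam_maps, confounder_masks, params, lam=1e-4):
    """Cross-entropy + explanation loss + L2 regularization, as in Eqs. (1)-(2).

    gradcam_maps:     (N, H, W) saliency maps Exp(x_i) from the current model
    confounder_masks: (N, H, W) binary masks M_i marking confounding regions
    params:           iterable of model parameters theta
    """
    ce = F.cross_entropy(logits, labels)
    expl = (confounder_masks * gradcam_maps).sum()     # attention falling on M_i (Eq. 2)
    l2 = sum((p ** 2).sum() for p in params)
    return ce + expl + lam * l2
```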
As is seen in the inner ellipse of Figure 2, in XBL, the most common mode of user interaction is image feature annotation. This requires user engagement
that is considerably more demanding than the simple instance labeling that most IML techniques require [22] and increases the time and cost of feedback collection. As can be seen in the outer ellipse of Figure 2, we are interested in lifting this pressure from users (feedback providers) and simplifying the interaction to ask for identification of two explanations as exemplary explanations and ranking them as good and bad explanations. This makes collecting feedback cheaper and faster. This kind of user interaction where users are asked for a ranking instead of category labels has also been found to increase inter-rater reliability and data collection efficiency [11]. We incorporate this feedback into model training through a contrastive triplet loss [3].
The main contributions of this paper are:
1. We propose the first type of eXplanation Based Learning (XBL) that can learn from only two exemplary explanations of two training images;
2. We present an approach to adopt triplet loss for XBL to incorporate the two exemplary explanations into an explanation loss;
3. Our experiments demonstrate that the proposed method achieves improved explanations and comparable classification performance when compared against a baseline model.
## 2 Related Work
### Chest x-ray classification
A number of Covid-19 related datasets have been collated and deep learning based diagnosis solutions have been proposed due to the health emergency caused by Covid-19 and due to an urgent need for computer-aided diagnosis (CAD) of the disease [8]. In addition to training deep learning models from scratch,
Figure 2: The inner ellipse shows the typical mode of feedback collection where users annotate image features. The outer ellipse shows how our proposed approach requires only identification of one good and one bad explanation.
transfer learning, where parameters of a pre-trained model are further trained to identify Covid-19, has been utilized [20]. Even though the array of datasets and deep learning models show promise in implementing CAD, care needs to be taken when the datasets are sourced from multiple imaging centers and/or the models are only validated on internal datasets. The Covid-19 CXR dataset, for example, has six sources at the time of writing this paper. This can result in unintended confounding regions in images in the dataset and subsequently spurious correlations in trained models [16].
### eXplanation Based Learning
XBL can generally be categorized based on how feedback is used: (1) augmenting loss functions; and (2) augmenting training datasets.
Augmenting Loss Functions.As shown in Equation 1, approaches in this category add an explanation loss, \(L_{expl}\), during model training to encourage focus on image regions that are considered relevant by user(s), or to ignore confounding regions [7]. Ross et al. [14] use an \(L_{expl}\) that penalizes a model with high input gradient model explanations on the wrong image regions based on user annotation,
\[L_{expl}=\sum_{n}^{N}\left[M_{n}\odot\frac{\partial}{\partial x_{n}}\sum_{k=1 }^{K}\log\hat{y}_{nk}\right]^{2}, \tag{3}\]
for a function \(f(X|\theta)=\hat{y}\in R^{N\times K}\) trained on \(N\) images, \(x_{n}\), with \(K\) categories, where \(M_{n}\in\ \{0,\ 1\}\) is user annotation of confounding image regions. Similarly, Shao et al. [19] use influence functions in place of input gradients to correct a model's behavior.
Augmenting Training Dataset.In this category, a confounder-free dataset is added to an existing confounded training dataset to train models to avoid learning spurious correlations. In order to unlearn spurious correlations from a classifier that was trained on the Covid-19 dataset, Pfeuffer et al. [12] collected feature annotation on 3,000 chest x-ray images and augmented their training dataset. This approach, however, doesn't target unlearning or removing spurious correlations, but rather adds a new variety of data. This means models are being trained on a combination of the existing confounded training dataset and their new dataset.
One thing all approaches to XBL described above have in common is the assumption that users will provide feature annotation for all training instances to refine or train a model. We believe that this level of user engagement hinders practical deployment of XBL because of the demanding nature and expense of feature annotation that is required [22]. It is, therefore, important to build an XBL method that can refine a trained model using a limited amount of user interaction and we propose eXemplary eXplanation Based Learning to achieve this.
## 3 eXemplary eXplanation Based Learning
User annotation of image features, or \(M\), is an important prerequisite for typical XBL approaches (illustrated in Equation 1). We use eXemplary eXplanation Based Learning (eXBL) to reduce the time and resource complexity caused by the need for \(M\). eXBL simplifies the expensive feature annotation requirement by replacing it with identification of just two exemplary explanations: a _Good explanation_ (\(C_{good_{i}}\)) and a _Bad explanation_ (\(C_{bad_{j}}\)) of two different instances, \(x_{i}\) and \(x_{j}\). We pick the two exemplary explanations manually based on how much attention a model's explanation output gives to relevant image regions. A good explanation would be one that gives more focus to the lung and chest area rather than the irrelevant regions such as the Epiphyses, humerus head, and image backgrounds, while a bad explanation does the opposite.
We choose to use GradCAM model explanations because they have been found to be more sensitive to training label reshuffling and model parameter randomization than other saliency based explanations [1]; and they provide accurate explanations in medical image classifications [10].
We then compute product of the input instances and the Grad-CAM explanation in order to propagate input image information towards computing the loss and to avoid a bias that may be caused by only using a model's GradCAM explanation,
\[C_{good} :=x_{i}\odot C_{good_{i}} \tag{4}\] \[C_{bad} :=x_{j}\odot C_{bad_{j}} \tag{5}\]
We then take inspiration from triplet loss [3] to incorporate \(C_{good}\) and \(C_{bad}\) into our explanation loss, \(L_{expl}\). The main purpose of \(L_{expl}\) is to penalize a trainer according to similarity of model explanations of instance \(x\) to \(C_{good}\) and its difference from \(C_{bad}\). We use Euclidean distance as a loss to compute the measure of dissimilarity, \(d\) (loss decreases as similarity to \(C_{good}\) is high and to \(C_{bad}\) is low).
\[d_{xg} :=d(x\odot GradCAM(x),C_{good}) \tag{6}\] \[d_{xb} :=d(x\odot GradCAM(x),C_{bad}) \tag{7}\]
We train the model \(f\) to achieve \(d_{xg}\ll d_{xb}\) for all \(x\). We do this by adding a \(margin=1.0\) and translating it to: \(d_{xg}<d_{xb}+margin\). We then compute the explanation loss as:
\[L_{expl}=\sum_{i}^{N}\max(d_{x_{i}g}-d_{x_{i}b}+margin,0) \tag{8}\]
In addition to correctly classifying \(X\), which is achieved through \(L_{CE}\), this \(L_{expl}\) (Equation 8) trains \(f\) to output GradCAM values that resemble the good explanations and that differ from the bad explanations, thereby refining the model to focus on the relevant regions and to ignore confounding regions.
Note that \(L_{expl}\) is zero, for a given sample \(x\), unless \(x\odot GradCAM(x)\) is much more similar to \(C_{bad}\) than it is to \(C_{good}\)--meaning \(d_{xg}>d_{xb}+margin\).
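A hedged sketch of the eXBL objective in Equations (4)-(8) is given below: given the two fixed exemplary products \(C_{good}\) and \(C_{bad}\), each training explanation is pushed toward the former and away from the latter via a triplet-style hinge. Function and variable names are illustrative and not taken from the authors' code.

```python
import torch


def exbl_explanation_loss(x, gradcam, c_good, c_bad, margin=1.0):
    """Triplet-style explanation loss of Eq. (8).

    x, gradcam   : (N, H, W) input images and their GradCAM maps
    c_good, c_bad: (H, W) fixed products x_i * C_good_i and x_j * C_bad_j (Eqs. 4-5)
    """
    anchor = x * gradcam                                              # x (.) GradCAM(x)
    d_good = torch.linalg.norm((anchor - c_good).flatten(1), dim=1)   # Eq. (6)
    d_bad = torch.linalg.norm((anchor - c_bad).flatten(1), dim=1)     # Eq. (7)
    return torch.clamp(d_good - d_bad + margin, min=0).sum()          # Eq. (8)
```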
## 4 Experiments
### Data Collection and Preparation
To demonstrate eXBL we use the Covid-19 CXR dataset [4, 13] described in Section 1. For model training we subsample 800 x-ray images per category to mitigate class imbalance, totaling 3,200 images. For validation and testing, we use 1,200 and 800 images respectively. We resize all images to 224 \(\times\) 224 pixels. The dataset is also accompanied with feature annotation masks that show the lungs in each of the x-ray images collected from radiologists [13].
### Model Training
We followed a transfer learning approach using a pre-trained MobileNetV2 model [15]. We chose to use MobileNetV2 because, in a comparison among pre-trained models, it achieved better performance on the CXR image classification task at a reduced computational cost. In order for the training process to affect the GradCAM explanation outputs, we only freeze and reuse the first 50 layers of MobileNetV2 and retrain the rest of the convolutional layers with a custom classifier layer that we added (256 nodes with a ReLu activation with a 50% dropout followed by a Softmax layer with 4 nodes).
We first trained the MobileNetV2 to categorize the training set into the four classes using categorical cross entropy loss. It was trained for 60 epochs4. We refer to this model as the Unrefined model. We then use the Unrefined model to select the good and bad explanations displayed in Figure 2. Next, we employ our eXBL algorithm using the good and bad explanations to teach the Unrefined model to focus on relevant image regions by tuning its explanations to look like the good explanations and to differ from the bad explanations as much as possible. We use Euclidean distance to compute dissimilarity in adopting a version of the triplet loss for XBL. We refer to this model as the eXBL\({}_{EUC}\) model and it was trained for 100 epochs using the same early stopping, learning rate, and optimizer as the Unrefined model.
Footnote 4: The model was trained with an early stop monitoring the validation loss at a patience of five epochs and a decaying learning rate = 1e-04 using an Adam optimizer.
For model evaluation, in addition to classification performance, we compute an objective explanation evaluation using Activation Precision [2] that measures how many of the pixels predicted as relevant by a model are actually relevant using existing feature annotation of the lungs in the employed dataset,
\[AP=\frac{1}{N}\sum_{n}^{N}\frac{\sum(T_{\tau}(GradCAM_{\theta}(x_{n}))\odot A_{ x_{n}})}{\sum(T_{\tau}(GradCAM_{\theta}(x_{n})))}\, \tag{9}\]
where \(x_{n}\) is a test instance, \(A_{x_{n}}\) is feature annotation of lungs in the dataset, \(GradCAM_{\theta}(x_{n})\) holds the GradCAM explanation of \(x_{n}\) generated from a trained model, and \(T_{\tau}\) is a threshold function that finds the (100-\(\tau\)) percentile value and sets elements of the explanation, \(GradCAM_{\theta}(x_{n})\), below this value to zero and the remaining elements to one. In our experiments, we use \(\tau=5\%\).
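Equation (9) can be computed directly from the saliency maps and the lung annotations; a minimal sketch is given below, with illustrative names and the same thresholding convention as \(T_{\tau}\).

```python
import numpy as np


def activation_precision(gradcams, lung_masks, tau=5.0):
    """Fraction of the top-tau% salient pixels that fall inside the annotated lungs (Eq. 9)."""
    scores = []
    for cam, mask in zip(gradcams, lung_masks):
        thresh = np.percentile(cam, 100 - tau)   # (100 - tau) percentile value
        top = (cam >= thresh).astype(float)      # T_tau(GradCAM(x_n)): keep top tau% as ones
        scores.append((top * mask).sum() / top.sum())
    return float(np.mean(scores))
```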
Figure 3: Sample outputs of Viral Pneumonia category. (A) Input images; (B) GradCAM outputs for Unrefined model and (C) their overlay over input images; (D) GradCAM outputs for eXBL\({}_{EUC}\) and (E) their overlay over input images.
## 5 Results
Table 1 shows classification and explanation performance of the Unrefined and eXBL\({}_{EUC}\) models. Sample test images, GradCAM outputs, and overlaid GradCAM visualizations of x-ray images from the Viral pneumonia category are displayed in Figure 3. From the sample GradCAM outputs and Table 1, we observe that the eXBL\({}_{EUC}\) model was able to produce more accurate explanations that avoid focusing on irrelevant image regions such as the Epiphyses and background regions. This is demonstrated by how GradCAM explanations of the eXBL\({}_{EUC}\) model tend to focus on the central regions of the input images, around the chest, which are relevant for the classification task, while the GradCAM explanations generated using the Unrefined model give too much attention to areas around the shoulder joint (humerus head) and appear angular in shape, attending to areas that are not related to the disease categories.
## 6 Conclusion
In this work, we have presented an approach to debug a spurious correlation learned by a model and to remove it with just two exemplary explanations in eXBL\({}_{EUC}\). We present a way to adopt the triplet loss for unlearning spurious correlations. Our approach can tune a model's attention to focus on relevant image regions, thereby improving the saliency-based model explanations. We believe it could be easily adopted to other medical or non-medical datasets because it only needs two non-demanding exemplary explanations as user feedback.
Even though the eXBL\({}_{EUC}\) model achieved improved explanation performance when compared to the Unrefined model, we observed that there is a classification performance loss when retraining the Unrefined model with eXBL to produce good explanations. This could mean that the initial model was exploiting the confounding regions for better classification performance. It could also mean that our selection of good and bad explanations may not have been optimal and that the two exemplary explanations may be degrading model performance.
Since our main aim in this study was to demonstrate the effectiveness of eXBL\({}_{EUC}\) based on just two items of ranked feedback, the generated explanations were evaluated using masks of the lung because it is the only body part with pixel-level annotation in the employed dataset. However, in addition to the lung, the disease categories might be associated with other areas of the body such as the throat and torso. For this reason, and to ensure transparency in practical deployment of such systems in clinical practice, future work should involve expert end users for evaluation of the classification and model explanations.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{Accuracy} & \multicolumn{2}{c}{Activation Precision} \\ \cline{2-5} & Validation & Test & Validation & Test \\ \hline Unrefined & 0.94 & 0.95 & 0.32 & 0.32 \\ eXBL\({}_{EUC}\) & 0.89 & 0.90 & 0.34 & 0.35 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification and explanation performance.
## Acknowledgements
This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
|
2310.15326 | Specialist or Generalist? Instruction Tuning for Specific NLP Tasks | The potential of large language models (LLMs) to simultaneously perform a
wide range of natural language processing (NLP) tasks has been the subject of
extensive research. Although instruction tuning has proven to be a
data-efficient method for transforming LLMs into such generalist models, their
performance still lags behind specialist models trained exclusively for
specific tasks. In this paper, we investigate whether incorporating
broad-coverage generalist instruction tuning can contribute to building a
specialist model. We hypothesize that its efficacy depends on task specificity
and skill requirements. Our experiments assess four target tasks with distinct
coverage levels, revealing that integrating generalist instruction tuning
consistently enhances model performance when the task coverage is broad. The
effect is particularly pronounced when the amount of task-specific training
data is limited. Further investigation into three target tasks focusing on
different capabilities demonstrates that generalist instruction tuning improves
understanding and reasoning abilities. However, for tasks requiring factual
knowledge, generalist data containing hallucinatory information may negatively
affect the model's performance. Overall, our work provides a systematic guide
for developing specialist models with general instruction tuning. Our code and
other related resources can be found at
https://github.com/DavidFanzz/Generalist_or_Specialist. | Chufan Shi, Yixuan Su, Cheng Yang, Yujiu Yang, Deng Cai | 2023-10-23T19:46:48Z | http://arxiv.org/abs/2310.15326v1 | # Specialist or Generalist? Instruction Tuning for Specific NLP Tasks
###### Abstract
The potential of large language models (LLMs) to simultaneously perform a wide range of natural language processing (NLP) tasks has been the subject of extensive research. Although instruction tuning has proven to be a data-efficient method for transforming LLMs into such generalist models, their performance still lags behind specialist models trained exclusively for specific tasks. In this paper, we investigate whether incorporating broad-coverage generalist instruction tuning can contribute to building a specialist model. We hypothesize that its efficacy depends on task specificity and skill requirements. Our experiments assess four target tasks with distinct coverage levels, revealing that integrating generalist instruction tuning consistently enhances model performance when the task coverage is broad. The effect is particularly pronounced when the amount of task-specific training data is limited. Further investigation into three target tasks focusing on different capabilities demonstrates that generalist instruction tuning improves understanding and reasoning abilities. However, for tasks requiring factual knowledge, generalist data containing hallucinatory information may negatively affect the model's performance. Overall, our work provides a systematic guide for developing specialist models with general instruction tuning. Our code and other related resources can be found at [https://github.com/DavidFanzz/Generalist_or_Specialist](https://github.com/DavidFanzz/Generalist_or_Specialist).
## 1 Introduction
The latest generation of large language models (LLMs), such as ChatGPT [1] and GPT4 [1], are often referred to as _generalist_ models for their exceptional generalizability to perform various natural language processing (NLP) tasks. Recent studies [11, 12, 13, 14] suggest that (1) the foundation of their superior performance (i.e., knowledge and capabilities) is predominantly acquired during large-scale unsupervised pre-training; and (2) instruction tuning [15, 16, 17] is an incredibly data-efficient method for unleashing the power of LLMs to complete realistic NLP tasks. However, under rigorous evaluation, the performance of those instruction-following generalist models often falls short compared to traditional task-specific specialist models [16, 17, 18, 19]. Recently, there has also been a growing trend towards developing specialist models using instruction tuning [16, 17, 18, 19, 20, 21].
In this paper, we study how to better harness the power of LLM for specific NLP tasks using instruction tuning. Our research is motivated by the existence of various broad-coverage general-purpose instruction-following datasets [11, 12, 13, 14, 15] and their surprising efficiency for turning LLMs into capable instruction-following generalists. For instance, Zhou et al. [20] show that merely one thousand supervised input-output pairs are necessary to build a competent generalist. In contrast to general-purpose instruction tuning, our preliminary experiments show that a sufficiently large set of task-specific data is still required for transforming an LLM into a superior specialist. This leads us to a pivotal research question: How to better unleash the power of LLMs for specific NLP tasks by marrying the best of two worlds? More specifically, can general-purpose instruction-following datasets aid in the transformation of an LLM into a specialist? If so, when and how?
We hypothesize the answers to the previous ques
tions depend on (1) how specific the target task is; and (2) what skills the target task requires. To test this hypothesis, we first assess four target tasks with distinct levels of coverage. Our findings reveal that integrating general instruction tuning--that is, training with generalist data enhances the model's performance on specific NLP tasks with broad task coverage, particularly when the amount of task-specific training data is limited. To gain a deeper understanding of the improvements elicited by training with generalist data, we subsequently examine three target tasks that focus on distinct skill sets. Our results suggest that general instruction tuning improves the model's understanding and reasoning capabilities. However, when it comes to tasks that demand factual knowledge from the LLM, instructional data generated through self-instruct Wang et al. (2022) harms the model's performance due to the intrinsic hallucinations brought by such data creation approach.
In sum, to the best of our knowledge, our work is the first effort to present a systematic guide for building and improving specialist models with general instruction tuning.
## 2 Background: Instruction Tuning
In recent years, large language models (LLMs) have undergone rapid development and have dominated the field of natural language processing (NLP) Radford et al. (2018); Devlin et al. (2019); Radford et al. (2019); Brown et al. (2020). Today's LLMs, such as ChatGPT OpenAI (2022) and GPT-4 OpenAI (2023), can perform complex and diverse tasks in the unified form of following natural language instructions. Generally, these models are trained in three separate stages: (1) large-scale unsupervised pre-training on raw text; and (2) instruction tuning via supervised learning Sanh et al. (2022); Wei et al. (2022); Mishra et al. (2021); Su and Collier (2022); Su et al. (2023); and (3) reinforcement learning from human feedback Stiennon et al. (2020); Bai et al. (2022); Ouyang et al. (2022). Recent studies Zhou et al. (2023); Gudibande et al. (2023) argued that almost all capabilities of LLMs are learned during unsupervised pre-training, and instruction tuning with a limited amount of supervised data is sufficient. However, this observation refers to the process of constructing general-purpose instruction-following models--generalists. In the following, we separately introduce broad-coverage "generalist" and task-specific "specialist" instruction tuning.
Generalist Instruction Tuning.Early attempts on instruction tuning Wang et al. (2022); Sanh et al. (2022); Wei et al. (2022); Chung et al. (2022)_inter alia_) transform a range of public NLP datasets into an instructional format, with a few manually crafted templates for each task. They then fine-tune an LLM on a portion of the transformed data and evaluate on another set of held-out tasks. Each work affirms that the model's generalization ability to unseen tasks improves when increasing the task and template diversity. However, template-based instructions are not sufficiently diverse for building a truly competent generalist Ouyang et al. (2022). In contrast, state-of-the-art generalist models such as ChatGPT OpenAI (2022) are trained with proprietary instructions collected from real human users. In the pursuit to replicate the success of ChatGPT, various open-source broad-coverage instruction-tuning datasets are proposed. Some are gathered via crowd-sourcing Labs (2023); Zhou et al. (2023) while others use the outputs from strong proprietary models Taori et al. (2023); Peng et al. (2023); Xu et al. (2023); Su et al. (2023); Li et al. (2023) with techniques such as self-instruct Wang et al. (2022). Existing results suggest that these models can achieve near parity with proprietary models in various aspects Chiang et al. (2023); Zhou et al. (2023); Taori et al. (2023).
Specialist Instruction Tuning.There is also an emerging trend to continue instruction tuning on specific NLP tasks, such as machine translation Jiao et al. (2023), information extraction Wang et al. (2023), medical QA Wang et al. (2023); Fleming et al. (2023), and writing-assistant Zhang et al. (2023). These works typically transform existing task-specific datasets into the same instructional format as generalist instruction tuning and yield better model performance in specific tasks. Different from previous work, this study aims to provide a comprehensive and in-depth investigation of the role of generalist instruction data in specialist instruction tuning.
Our work is most related to the initial studies on the cross-task generalization of instruction tuning such as FLAN Wei et al. (2022). The differences between our work and previous work are: (1) we use broad-coverage generalist data, while they use template-based data; and (2) they focus on zero/few-shot performance on unseen tasks, while
we assume an adequate amount of task-specific training data is available.
## 3 Incorporating Specialist Training with Generalist Training
### Data Collection
We sort the instruction-following data into two groups: (1) specialist data and (2) generalist data.
Specialist data primarily originates from existing NLP datasets with a focus on particular tasks. To facilitate our research, we mainly utilize the SuperNI dataset Wang et al. (2022), a comprehensive benchmark containing 1,616 NLP datasets coupled with their respective natural language instructions, as the source of specialist data. The details are described in Section 4.1. We also leverage existing question answering datasets Kwiatkowski et al. (2019); Berant et al. (2013); Joshi et al. (2017), reading comprehension datasets Lai et al. (2017), and reasoning datasets Bowman et al. (2015); Talmor et al. (2019); Ling et al. (2017) to evaluate different aspects of model skills, detailed in Section 5.1.
Generalist data is characterized by its extensive scope and diversity. For our research, we select two representative broad-coverage general-purpose datasets: GPT4-Instruct Peng et al. (2023) and LIMA Zhou et al. (2023). GPT4-Instruct Peng et al. (2023) contains 52k unique instruction-response pairs, where the instructions are collected through self-instruct Wang et al. (2022) and the responses are generated by GPT-4 OpenAI (2023). LIMA Zhou et al. (2023) consists of 1k carefully curated instruction-response pairs derived from human-authored community questions and answers. Notably, we emphasize that GPT4-Instruct serves as an example of generalist data synthesized by LLMs and LIMA represents another distinct example of generalist data written by humans.
Unified Format. We follow the template used in Stanford's Alpaca project Taori et al. (2023) (see Appendix A). Each instance in the generalist and specialist data is transformed into a pair of {instruction, response}.
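For illustration, the transformation can be sketched as below. The exact template is the one listed in the paper's Appendix A (not reproduced here); the publicly released Alpaca-style prompt is assumed, and the function name is only illustrative.
```
# Sketch: converting one raw instance into the {instruction, response} format.
# The Alpaca-style template below is an assumption standing in for Appendix A.
ALPACA_PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def to_instruction_pair(instruction: str, response: str) -> dict:
    """Wrap one instance into a prompt/response pair for fine-tuning."""
    return {"prompt": ALPACA_PROMPT.format(instruction=instruction),
            "response": response}

# Example: a Yelp sentiment instance rewritten as an instruction.
pair = to_instruction_pair(
    "Classify the sentiment of the following review as positive or negative: "
    "'The food was cold and the service was slow.'",
    "negative",
)
print(pair["prompt"] + pair["response"])
```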
### Training
Specialist/Generalist Data Combination.For each target task, we construct the training and test set with 50k and 5k instances, respectively. For target tasks that span over multiple datasets, we uniformly sample training/test instances from the corresponding datasets such that each dataset has an equal proportion. For generalist data, we consider the GPT4-Instruct and LIMA datasets as discussed above. We first train models on generalist data and then specialist data. We vary the amounts of specialist data across {2k, 4k, 6k, 8k, 10k} to study the effect of generalist data under different circumstances of data scarcity.
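A minimal sketch of the uniform per-dataset sampling described above is given below; the dictionary-based dataset representation and the function name are assumptions made for illustration.
```
# Sketch: building a k-instance specialist set with equal proportions from the
# datasets that make up a target task (data structures are illustrative).
import random

def sample_specialist_set(datasets: dict, k: int, seed: int = 0) -> list:
    """datasets maps dataset name -> list of {instruction, response} dicts."""
    rng = random.Random(seed)
    per_dataset = k // len(datasets)          # equal share for every dataset
    mixed = []
    for name, examples in datasets.items():
        mixed.extend(rng.sample(examples, min(per_dataset, len(examples))))
    rng.shuffle(mixed)
    return mixed

# e.g. a 2k-instance Sentiment set drawn uniformly from 32 sentiment datasets:
# train_2k = sample_specialist_set(sentiment_datasets, k=2000)
```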
Model and Training Details.We conduct our experiments with the popular LLaMA 7B and 13B models Touvron et al. (2023). For training on generalist data, we follow the original setups in the respective papers Zhou et al. (2023); Taori et al. (2023). Specifically, for GPT4-Instruct, we train for 3 epochs with a batch size of 128, while for LIMA, we train for 15 epochs with a batch size of 64. In the subsequent specialist training phase, we train for 3 epochs with a batch size of 128. In both stages, we use the Adam optimizer Kingma and Ba (2015) with a learning rate of 2e-5 and utilize the standard language modeling objective:
\[\mathcal{L}=-\frac{1}{|\mathbf{y}|}\sum_{i=1}^{|\mathbf{y}|}\log p_{\theta}(y_{i}|\mathbf{ x},\mathbf{y}_{<i}), \tag{1}\]
where \(\theta\) denotes the model parameters and \(\{\mathbf{x},\mathbf{y}\}\) is an instruction-response pair.
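The objective in Eq. (1) is the standard causal language modeling loss restricted to the response tokens. A minimal sketch of a typical Hugging Face Transformers implementation is shown below; the checkpoint name and helper function are illustrative rather than the paper's actual training code, and masking the instruction tokens with -100 restricts the loss to the response.
```
# Sketch: Eq. (1) as commonly realized with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")      # illustrative
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

def lm_loss(prompt: str, response: str) -> torch.Tensor:
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + response, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100   # ignore the instruction tokens x
    out = model(input_ids=full_ids, labels=labels)
    return out.loss                            # mean NLL over response tokens y

# loss = lm_loss(pair["prompt"], pair["response"]); loss.backward()
```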
## 4 Experiments I: The Coverage of the Target Tasks
### Coverage Taxonomy
To assess our model's performance on a variety of target tasks with distinct levels of generality, we construct a hierarchy of four specialist tasks using the SuperNI dataset Wang et al. (2022). This taxonomy encompasses tasks with varying scopes of coverage, as detailed below.
SuperNI(multiple tasks, multiple formats). At the most comprehensive level, we incorporate all the _English_ tasks from the SuperNI dataset, which encompasses a total of 756 datasets. Unlike LIMA and GPT4-Instruct, which accommodate a broad spectrum of user-oriented inquiries, the datasets in SuperNI focus on specific NLP tasks distilled from real-world demands. Therefore, we treat them as specialist data at the highest coverage level.
Classification(multiple tasks, single format). The tasks in SuperNI can be grouped based on
their task types, such as classification, summarization, and question answering. For the second level, we focus on the classification subset. Specifically, we select 252 classification datasets. To measure the model's cross-task generalization capability, we allocate 223 datasets for training and reserve the remaining 29 datasets as held-out datasets for evaluation.
Sentiment(single tasks, multiple domains). The classification tasks selected above can be further categorized based on their specific topics, such as sentiment analysis, toxic language detection, commonsense categorization, and others. Among these, we designate 32 sentiment analysis datasets as the third level.
Yelp(single tasks, single domain). The sentiment analysis datasets mentioned above span various domains, such as movie and restaurant reviews. At the most fine-grained level, we choose the Yelp dataset (Zhang et al., 2015) as the representative task to evaluate the model's performance in a highly specialized domain.
### Evaluation Setup
For the SuperNI level, we follow the same evaluation protocol as in Wang et al. (2022) and report Rouge-L Lin (2004). For the decoding strategy, we adopt greedy search with a maximum generation length of 512.1 For the Classification, Sentiment, and Yelp levels, we follow previous studies (Brown et al., 2020; Sanh et al., 2022) and utilize a _classification with options_ approach, where we prompt the model with a set of options and compute the likelihood of each option being the response. The one with the highest probability is taken as the model's prediction, and we report the model's accuracy.
Footnote 1: We leave the study on more advanced decoding methods (Holtzman et al., 2019; Su et al., 2022; Yang et al., 2023) as future work.
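The "classification with options" evaluation can be sketched as follows, reusing the model and tokenizer from the earlier sketch; each option is appended to the prompt and scored by its summed token log-probability, and the highest-scoring option is taken as the prediction.
```
# Sketch: likelihood-based option scoring (variable names are illustrative).
import torch
import torch.nn.functional as F

@torch.no_grad()
def pick_option(model, tok, prompt: str, options: list) -> str:
    scores = []
    for opt in options:
        prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
        ids = tok(prompt + opt, return_tensors="pt").input_ids
        logits = model(ids).logits
        # logits at position t-1 predict token t
        logp = F.log_softmax(logits[:, :-1], dim=-1)
        tgt = ids[:, 1:]
        token_logp = logp.gather(-1, tgt.unsqueeze(-1)).squeeze(-1)
        scores.append(token_logp[:, prompt_len - 1:].sum().item())  # option tokens only
    return options[scores.index(max(scores))]

# pred = pick_option(model, tok, prompt, ["positive", "negative"])
```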
### Main Results
Generalist models lag behind specialist models across all coverage levels. We compare generalist models that are solely trained on generalist data (i.e., LIMA or GPT4-Instruct) to those specialist models that are solely trained on specialist data (the 10k training instances we collect for each coverage level), using LLaMA-7B.
| Task Coverage | GPT4-Instruct | LIMA | Specialist |
| --- | --- | --- | --- |
| SuperNI | 25.54 | 12.65 | 54.92 |
| Classification | 53.20 | 46.84 | 80.02 |
| Sentiment | 68.66 | 51.46 | 90.71 |
| Yelp | 91.68 | 65.52 | 98.11 |
Table 1: The performance of generalists and specialists on tasks of different coverage levels on LLaMA-7B. The specialists are trained with 10k task-specific instances. For SuperNI, the performance is measured by Rouge-L, while the others are measured by accuracy.
Figure 1: Comparison of models trained with different combinations of specialist and generalist data across different tasks. We report Rouge-L for SuperNI and accuracy for other levels.
From the results presented in Table 1, we can see that generalist models fall short in performance when compared to specialist models on all coverage levels. Notably, even as the coverage level becomes more encompassing, the performance gap between generalist models and specialist models does not shrink. For instance, on the most specific Yelp task, the specialist model outperforms the generalist model (GPT4-Instruct) by 6.43% absolute points. On the SuperNI task, the performance gap between the specialist and the generalist (GPT4-Instruct) is 29.38. These results validate the necessity of specialist tuning for specific NLP tasks.
Transforming an LLM into a superior specialist demands a substantial amount of task-specific data.Figure 1 depicts the performance of specialist models on different tasks with varying numbers of training data (from 2k to 10k). From the results, we see that, for tasks with broader coverage (e.g. SuperNI and Classification), the model's performance does not seem to converge with the 10k training instances. Even for narrow tasks such as Sentiment, at least 10k task-specific data is required to fully unlock the LLM's potential. These results reveal the data-hungry nature of building specialist models.
Generalist data can improve specialist performance when the task coverage is broad.Figure 1 also demonstrates that the inclusion of generalist data consistently results in performance improvements for both SuperNI and Classification across LLaMA 7B and 13B models. On average across different settings of specialist data, the introduction of generalist data leads to an improvement of 0.96 for LLaMA-7B and 0.74 for LLaMA-13B on SuperNI tasks, while for Classification tasks, it results in an enhancement of 1.53% for LLaMA-7B and 0.82% for LLaMA-13B. It is also worth noting that LIMA only has 1k instances, but it can even help improve performance when the number of specialist data is 10\(\times\) larger. However, the results are the opposite for Sentiment and Yelp. For instance, the introduction of LIMA leads to a minor performance degeneration on Sentiment with 2k specialist data (a reduction of 0.25% for LLaMA-7B and 0.56% for LLaMA-13B). In the case of the Yelp task, the impact of including generalist data (both GPT4-Instruct and LIMA) appears to be minimal on the overall model performance.
The performance gain is most evident when the amount of specialist data is limited.We can see that the performance gap between specialists trained with and without generalist data shrinks as the amount of specialist data increases. For example, at the Classification level, when the specialist data comprises only 2k instances, the inclusion of GPT4-Instruct enhances LLaMA-7B's accuracy from 65.36% to 71.31% (+5.95%) and LLaMA-13B's accuracy from 70.59% to 73.13% (+2.54%). However, when the number of specialist data reaches 10k instances, the addition of GPT4-Instruct only leads to smaller improvements, from 80.02% to 80.17% (+0.15%) for LLaMA-7B, and from 81.01% to 81.93% (+0.92%) for LLaMA-13B, respectively.
The performance gain is less pronounced when the model scale is larger.As shown in Figure 1, when comparing the results of the 7B and 13B models, the trend of change in the effect of integrating generalist data is consistent for both models. However, it is worth noting that as the model scale is larger, the performance gain is less pronounced. Specifically, when the model scales up from 7B to 13B, the average improvement achieved by adding GPT4-Instruct on SuperNI decreases from 1.49 to 0.58, and the improvement in Classification reduces from 2.00% to 1.18%.
### Further Analysis
For a deeper understanding of the impact of generalist data, here we present additional analyses.
Unless otherwise specified, all experiments use LLaMA-7B as the foundation model.
Figure 2: Results on held-out tasks (Classification) with LLaMA-7B.
Cross-task Generalization.For the Classification level, recall that we exclude some classification tasks when constructing the training data. These tasks can be used as hold-out tasks to examine the specialist's cross-task generalization ability. The results are shown in Figure 2. It can be observed that the accuracy on held-out tasks fluctuates in small ranges from 50.98% to 57.55% across different amounts of specialist data. However, upon incorporating LIMA, the average absolute accuracy improvement on the hold-out task increases by 2.70%, while adding GPT4-Instruct results in a 6.12% rise in absolute accuracy. This indicates that generalist data can greatly improve the cross-task generalization of specialist models.
Number of Generalist Data.To study the effect of the amount of generalist data, we additionally partition the GPT4-Instruct dataset into five random parts and test the model's performance when using different proportions of the dataset. The experiments are conducted at the Classification level with a fixed quantity of 10k specialist data. As shown in Figure 3, even with only 10k generalist data, the model's accuracy is raised from 78.12% to 82.48%. Another interesting finding is that further increasing the generalist data to 50k merely brings small improvements (from 82.48% to 84.0%). The results together with our experiments with LIMA suggest that adding a small number of generalist data is sufficient to improve the specialist performance.
Cross-instruction Robustness.In all previous experiments, the models are trained and tested using the same instructions for each dataset. Now, we assess the model's robustness when confronted with alternative instructions that have not appeared during training. To do so, we employ ChatGPT (OpenAI, 2022) to generate 20 semantically equivalent instructions based on the original instruction. Figure 4 reports the results of these unseen instructions. As seen, the models trained with the addition of generalist data exhibit substantial improvement in average accuracy compared to the models trained with specialist data only. For instance, when the specialist data is limited to 2k instances, incorporating generalist data leads to a 6.64% absolute improvement on average compared to the specialist model. In the meantime, the incorporation of generalist data also alleviates the performance variation between the best-performing and worse-performing runs from 4.04% to 2.42%.
## 5 Experiments II: The Required Skills of the Target Tasks
We hypothesize that the model's ability to perform specific NLP tasks can be attributed to the mix of several core capabilities. As such, we set up three target tasks that focus on three key skills which are detailed below.
Figure 4: The results using different test instructions (Classification) with LLaMA-7B.
Figure 3: Results using different amounts of generalist data (Classification, 10k specialist data) with LLaMA-7B.
### Skill Taxonomy
Factual Knowledge is essential for models to serve information needs. We use three knowledge-intensive datasets: Natural Questions Kwiatkowski et al. (2019), WebQuestions Berant et al. (2013), and TriviaQA Joshi et al. (2017). All three datasets consist of entity-centric questions, making them suitable for probing models' ability to activate and utilize factual knowledge. Following previous work Brown et al. (2020), we evaluate under the closed-book setting where models are required to answer questions without the help of any external knowledge grounding.
Understanding is another important perspective, reflecting the capability to interpret input text. We choose the RACE dataset Lai et al. (2017). RACE comprises data collected from English examinations in China and is specifically designed to assess the model's ability to read and deeply comprehend texts in real-world scenarios.
Reasoning is another fundamental ability for models to solve complex tasks. We use the SNLI Bowman et al. (2015) dataset for implicit reasoning, the CQA dataset Talmor et al. (2019) for commonsense reasoning, and the AQUA Ling et al. (2017) dataset for arithmetic reasoning.
### Evaluation Setup
For the Factual Knowledge tasks, we use greedy search with a maximum generation length of 512. We adopt the F1 score as the evaluation metric following Brown et al. (2020). For the Understanding and Reasoning tasks, we utilize the same _classification with options_ method detailed in Section 3.2 and report the model accuracy.
| Task Skill | GPT4-Instruct | LIMA | Specialist |
| --- | --- | --- | --- |
| Factual Knowledge | 14.78 | 17.28 | 46.37 |
| Understanding | 35.10 | 30.82 | 76.77 |
| Reasoning | 28.02 | 26.58 | 63.40 |
Table 2: The performance of generalists and specialists on tasks focusing on different skills. The specialists are trained with 10k task-specific instances on LLaMA-7B. For Factual Knowledge, the performance is measured by F1 score, while the others are measured by accuracy.
Figure 5: Comparison of models trained with different combinations of specialist and generalist data across different tasks. We report F1 score for Factual Knowledge and accuracy for other levels.
### Results and Analysis
Generalist models lag behind specialist models across all task skills.Similar to Experiment I, we commence by comparing specialist and generalist models across three target tasks, each concentrating on distinct skills. The outcomes presented in Table 2 indicate that the generalist models consistently underperform the specialist models. For the Factual Knowledge task, the specialist model outperforms the generalist model with a 29.09 points higher F1 score. For the Understanding task, the specialist model surpasses the generalist model with a 41.67% increase in accuracy. For the Reasoning task, the specialist model excels beyond the generalist model, attaining a 35.38% absolute accuracy difference. Collectively, these findings substantiate the necessity of specialist tuning for accomplishing specific tasks.
Incorporating GPT4-Instruct impairs the model's factual knowledge, while integrating LIMA offers benefits.As illustrated in Figure 5, we observe the varying impact of different generalist data on the model's performance in the Factual Knowledge task. In particular, when GPT4-Instruct is incorporated, the F1 score experiences a decline. Conversely, when LIMA data is integrated, the F1 witnesses an increase. We argue that this difference stems from the fact that GPT4-Instruct is machine-generated, while LIMA is human-authored. The rationale is that machine-generated data may contain hallucinations, thus impairing the model's ability to recall factual knowledge.
To validate our hypothesis, we conduct experiments using additional generalist datasets, namely Dolly Labs (2023), and Evol-Instruct Xu et al. (2023). Dolly consists of manually curated data generated by Databricks employees. Evol-Instruct uses more complex instructions than GPT4-Instruct and collects responses from ChatGPT Fang et al. (2023). As observed in Figure 6, adding Dolly does not impair the performance, but incorporating Evol-Instruct leads to similar performance degradation as GPT4-Instruct. The above results are consistent with our hypothesis that machine-generated generalist data might adversely affect the model's factual knowledgeability due to hallucinations.
For a more rigorous comparison, we use ChatGPT to generate responses for the 1k instructions in LIMA. The new 1k instruction-response pairs form a new generalist dataset, which we call LIMA-Chat. The only difference between LIMA-Chat and LIMA is that the responses in LIMA-Chat are machine-generated, while those in LIMA are human-written. From Figure 6, we can see that LIMA-Chat indeed harms the performance while LIMA improves the performance. The above results suggest that the choice of generalist data is crucial for target tasks that heavily rely on factual knowledge.
Adding generalist data enhances the understanding ability.The results of the Understanding task are presented in Figure 5. It is evident that the addition of GPT4-Instruct greatly improves the model's performance when the specialist data is only 2k or 4k instances. However, as the number of specialist data further increases, the improvement diminishes. This suggests that the inclusion of generalist data can enhance the model's comprehension ability when the specialist data is limited.
Adding generalist data enhances the reasoning ability.We further evaluate the consequences of incorporating generalist data on the model's reasoning ability, as demonstrated in Figure 5. Notably, unlike Understanding, where the improvements from adding generalist data gradually diminish, the benefits of incorporating generalist data on the Reasoning tasks are persistent across different amounts of specialist data (an average improvement of 0.65% on LLaMA-7B and 1.12% on LLaMA-13B). This phenomenon could be attributed to the fact that the activation of reasoning capabilities relies on diverse instruction data, and specialist data can be too narrow to fully unlock
the true potential of LLMs.
Figure 6: Results on the Factual Knowledge with LLaMA-7B.
Effect of Model Scale.For Factual Knowledge, increasing the model size from 7B to 13B results in more substantial performance improvements compared to increasing the amount of specialist data. This observation aligns with previous work Brown et al. (2020), which indicates that an LLM's knowledge is mostly obtained through its pre-training. For Understanding, increasing the model size is as beneficial as adding more specialist data. For Reasoning, increasing the model size does not yield improvements as noticeable as Factual Knowledge and Understanding. We speculate that the emergence of strong reasoning abilities requires a larger model scale Wei et al. (2022).
Generalist data plays a vital role in enhancing a model's understanding and reasoning capabilities, thereby increasing its effectiveness in addressing task-specific objectives.We dissect the model's capabilities into three core components: (i) factual knowledge, (ii) understanding, and (iii) reasoning abilities. We demonstrate that incorporating generalist data does not improve the model's factual knowledge and, in some cases, may even be detrimental if it includes hallucinated information. Nevertheless, comparative experiments focusing on understanding and reasoning abilities reveal that generalist data effectively fosters the model's comprehension and significantly augments its reasoning capabilities.
This observed efficacy can be ascribed to the capacity of generalist data to facilitate the model's understanding and execution of diverse tasks. The wide range of instructions embedded within the generalist data stimulates the model's comprehension and reasoning faculties, empowering it to grasp specific requirements associated with various tasks more effectively. Moreover, by activating the model's reasoning abilities, it showcases enhanced performance across an assortment of tasks involving different levels of complexity.
The activation of comprehension and reasoning abilities further broadens the model's cognitive capacity, allowing it to derive a more comprehensive understanding based on existing information pertinent to the given task. Consequently, the inclusion of generalist data amplifies the model's task-specific capabilities, as it becomes adept at utilizing its expanded cognitive capacity to achieve superior performance.
## 6 Conclusions
In this study, we thoroughly investigated the interaction between specialist data and generalist data in the context of targeting specific NLP tasks. Our findings consistently demonstrate that the addition of generalist data leads to performance improvement when the task coverage is broad. This highlights the potential benefits of incorporating generalist data, particularly when the availability of specialist data is limited. Furthermore, we extensively examined the impact of integrating generalist data on the model's core capabilities. Surprisingly, we observed that the inclusion of generalist data did not enhance the model's factuality. In fact, generalist data containing hallucinatory information can have a negative impact. On the other hand, our experiments also revealed that the introduction of generalist data has positive effects on the model's understanding and reasoning abilities. Overall, our findings highlight the importance of leveraging generalist data to enhance the understanding and reasoning capabilities of NLP models, thereby enabling them to tackle various tasks more effectively. However, careful consideration should be given to the quality and reliability of the generalist data to avoid adverse effects on the model's factual knowledge.
## Limitations
While this work aims to provide a comprehensive investigation, we note that we do not exhaustively cover all possible evaluations. For example, we do not discuss NLP tasks such as summarization, translation, etc. Instead, we focus on constructing a hierarchy of four target tasks of different coverage levels and three target tasks focusing on different core skills. In addition, due to resource constraints, we only use LLaMA 7B/13B as our foundation models. We leave the investigation on different types and scales of models to our future work.
## Acknowledgments
This work was partly supported by the National Key Research and Development Program of China (No.2020YFB1708200), and the Shenzhen Science and Technology Program (JSGG20220831110203007). |
2301.03205 | Enhanced photovoltaic effect in graphene-silicon Schottky junction under
mechanical manipulation | Graphene-silicon Schottky junction (GSJ) which has the potential for
large-scale manufacturing and integration can bring new opportunities to
Schottky solar cells for photovoltaic (PV) power conversion. However, the
essential power conversion limitation for these devices lies in the small
open-circuit voltage ($V_{oc}$), which depends on the Schottky barrier height
(SBH). In this study, we introduce an electromechanical method based on the
flexoelectric effect to enhance the PV efficiency in GSJ. By atomic force
microscope (AFM) tip-based indentation and in situ current measurement, the
current-voltage (I-V) responses under flexoelectric strain gradient are
obtained. The $V_{oc}$ is observed to increase for up to 20$\%$, leading to an
evident improvement of the power conversion efficiency. Our studies suggest
that strain gradient may offer unprecedented opportunities for the development
of GSJ based flexo-photovoltaic applications. | Dong Pu, Muhammad Abid Anwar, Jiachao Zhou, Renwei Mao, Xin Pan, Jian Chai, Feng Tian, Hua Wang, Huan Hu, Yang Xu | 2023-01-09T08:43:46Z | http://arxiv.org/abs/2301.03205v1 | # Enhanced photovoltaic effect in graphene-silicon Schottky junction under mechanical manipulation
###### Abstract
Graphene-silicon Schottky junction (GSJ) which has the potential for large-scale manufacturing and integration can bring new opportunities to Schottky solar cells for photovoltaic (PV) power conversion. However, the essential power conversion limitation for these devices lies in the small open-circuit voltage (\(V_{oc}\)), which depends on the Schottky barrier height (SBH). In this study, we introduce an electromechanical method based on the flexoelectric effect to enhance the PV efficiency in GSJ. By atomic force microscope (AFM) tip-based indentation and in situ current measurement, the current-voltage (I-V) responses under flexoelectric strain gradient are obtained. The \(V_{oc}\) is observed to increase for up to 20%, leading to an evident improvement of the power conversion efficiency. Our studies suggest that strain gradient may offer unprecedented opportunities for the development of GSJ based flexo-photovoltaic applications.
+
Footnote †: Corresponding author; Electronic mail: [email protected]
Due to its special two-dimensional structural properties and excellent performance [1], graphene has the potential to be integrated into existing semiconductor technologies and used in next-generation electronics. Recent research has shown that the formation of junctions between graphene and three-dimensional or two-dimensional semiconductors [2] can produce the rectification effect of a typical Schottky junction. Its tunable Schottky barrier makes graphene junctions an excellent platform for studying the transport properties of interfaces and has led to applications in scenarios such as photodetection [3], light-speed communication [4], and chemical and biological detection [5]. Solar energy collection and conversion have attracted much attention in recent years. The Schottky junction devices can naturally work as solar cell or photovoltaic cell [6], as the built-in electrical field provides the voltage potential difference that drives the current [7]. Conventional metal/semiconductor Schottky devices suffer from the contact instability, high cost and high-temperature fabrication process. Graphene, which has unique optical properties and excellent mechanical properties, offers Schottky solar cells with low sheet resistance, high optical transparency, large area growth, and low-cost transferring. Over the last decade, many studies have been focused on Gr/Si Schottky junction [8; 9; 10; 11] (GSJ) for solar cell applications. An overall power conversion efficiency (PCE) of 1-1.7% is achieved with open circuit voltage (\(V_{oc}\)) and short circuit current (\(J_{SC}\)) linearly depending on the intensity of incident light [6]. By chemical doping, PCE can be further increased for 8.5% [12]. With systematical optimization including the number of graphene layers, PCE can surpass 3% [13]. Furthermore, substrate metasurface [14; 15], such as Si nanowire, Si nanohole array, and additional antireflection layer [16] are proposed for PCE enhancement.
However, one primary limitation for the GSJ Schottky solar cell is the low open circuit voltage \(V_{oc}\), which relates to the small Schottky barrier height (SBH). Here, we introduce an electromechanical coupling, called flexoelectricity [17], to increase the SBH, thus enhancing the performance of the GSJ solar cell. The flexoelectric effect describes the linear coupling between electric polarization and strain gradient in solid state materials [18; 19]. It implies that a polarization can originate from a strain gradient even in centrosymmetric systems. Compared with the piezoelectric effect, which has been studied extensively, the response of the flexoelectric effect is very weak and remains underexplored [17]. The electric polarization induced by a strain gradient is typically of the order of \(10^{-9}\) C/m\({}^{2}\) [17; 20] at the macro scale. As the geometric size scales down, the strain gradient is inversely proportional to the spatial scale [21]. Thus micro- and nano-structures can achieve large strain gradients, and the flexoelectric effect induced by a strain gradient dominates over the piezoelectric effect at the nanoscale [22; 23]. In recent years, by using an atomic force microscope (AFM) to introduce large strain gradients, a number of experimental studies concentrating on the mechanism of flexoelectricity and its applications have emerged [24; 25; 26]. A conductive AFM (CAFM) probe coated with metal can introduce a strain gradient and simultaneously monitor the current flowing through the junction. The strain gradient breaks the inversion symmetry of centrosymmetric materials and induces an electric polarization, known as the flexoelectric effect [20]. In contrast, a uniform strain cannot induce dipoles in graphene [27], and it is difficult to change the Schottky barrier this way. The barrier height of the Schottky junction interface between the probe and silicon can be tuned by the flexoelectric effect [20; 28]. The flexo-photovoltaic effect (FPV) was
found in perovskites [29; 30] and two-dimensional material system [31; 32], and significantly improves the solar cell performance. These motivate us to systematically investigate the flexo-photovoltaic in GSJ.
In this study, we introduce the flexo-photovoltaic effect in the Gr/Si Schottky junction. We find that the GSJ performance as a solar cell can be largely enhanced through the flexoelectric effect induced by AFM tip pressing. By applying a mechanical force on the GSJ in situ, the current flowing through the junction to the tip can be detected and read out with the CAFM module of the AFM. The current-voltage curves under different applied forces are analyzed, and we obtain the corresponding SBH variation as a function of applied force. Under illumination, the GSJ device shows a PV effect, and \(V_{oc}\) can be improved by this electromechanical effect. We thus demonstrate an enhanced PV response through the flexoelectric effect in the GSJ.
The graphene/silicon junction Schottky devices are prepared with a single layer of CVD graphene being wet-transferred to the lightly-doped n-type silicon layer forming the two dimensional and three dimensional (2d-3d) Schottky contact. An n-doped Si/SiO\({}_{2}\)\(500\,\mu\)m/\(100\,\mu\)m substrate with a resistivity of \(1\)-\(10\,\Omega\cdot\mathrm{cm}\) corresponding to a doping concentration of \(4.5\times 10^{14}\,\mathrm{cm}^{-3}\) to \(4.94\times 10^{15}\,\mathrm{cm}^{-3}\) is used.
The bottom of the silicon substrate is mechanically scraped to remove the thin oxide layer and coated with GaIn and copper to form an ohmic contact [33]. The scanning electron micrograph (SEM) of the device is shown in Fig.1a. The graphene covers the silicon window (dark area) forming the Schottky contact, connecting with the Au electrode (surrounding gray part), and forming an ohmic contact with Au electrodes. The graphene is etched into ribbons. The surface morphology of the Gr ribbon edge tested by AFM is shown in Fig.1b. The left-hand side flatten area is silicon covered by graphene, while the other is bare silicon after the graphene is etched. With the tip contacting the GSJ area, the current can be read out with the CAFM module. Figure.1c shows the high spatial resolution current distribution of the junction with \(V_{bias}=$-1\,\mathrm{V}$\). Figure.1d shows the corresponding Raman mapping on the graphene ribbon device. The intensity of the 2D peak versus the G peak and the intensity of the D peak versus the G peak are shown to illustrate the clear graphene ribbon areas. According to the contour scale bar, the 2D peak has a larger peak, while the D peak has a smaller peak, indicating the high quality of the graphene in the sample. With the probe connecting the bottom copper and the top-surface elec
trode separately, we obtain the basic current-voltage (I-V) response of the device using a semiconductor analyzer (Agilent B1500), shown in Fig.1e. The red line shows the I-V response with the light turned off, while the blue line represents the light-on case. In the light-off case, the I-V curves show the typical Schottky diode properties of a GSJ. The center wavelength is 532 nm and the approximate power used is 50 \(\mu\)W. The reverse current is on the order of \(10^{-10}\) A, showing a good diode characteristic. When the light turns on, the reverse photocurrent exceeds 10 \(\mu\)A. The device thus exhibits a typical GSJ photoresponse, and the inset picture shows the half-logarithmic curve of the I-V results. In the reverse bias region, the photocurrent increases by nearly 5 orders of magnitude due to the photoresponse. In the following experiments, we use the AFM for further tests to study the electromechanical effects on the responses by in situ exerting mechanical stress. Note that due to the current-limiting protection of the instrument (20 nA), we focus on the results in the range of \(\pm\)20 nA in the following tests.
Figure 1: Experimental characterization of the GSJ device. a. The scanning electron micrograph (SEM) of the device. The white bar is \(100\,\mu\)m. b. The surface topography of the device measured by AFM. c. Current distribution of the Gr/Si junction measured by AFM with bias voltage \(V_{bias}=-1\,\mathrm{V}\) and applied force \(0.26\,\mu\mathrm{N}\). The white scale bar represents \(600\,\mathrm{pm}\). d. The Raman mapping of the device. e. The current-voltage response curve. The inset picture shows the logarithmic current as a function of the bias voltage.
Considering the graphene massless carrier property and the inhomogeneity of the Gr/Si contact, the modified equation for GSJ based on thermal emission theory can be expressed as the following formula [7; 34],
\[J=A^{*}T^{3}\exp\left(-\frac{\phi_{B}-\frac{\delta_{P}^{2}}{2k_{B}T}}{k_{B}T}\right)\left[\exp\left(\frac{qV}{\eta k_{B}T}\right)-1\right] \tag{1}\]
where \(A^{*}=0.011\,58\) A/cm\({}^{2}\)/K\({}^{3}\) and \(\delta_{P}=135\) meV, and \(\eta\), \(k_{B}\), \(T\) and \(q\) are the ideal factor, the Boltzmann constant, temperature and elementary charge, respectively. The effective working area \(A\) of GSJ for our device here is \(1.25\times 10^{-3}\) cm\({}^{2}\). Fitting Fig.1e with Eq.1, we then obtain the Schottky barrier height (SBH) \(\phi_{B}=0.6\) eV before applying mechanical stress. To find the effect of electrical polarization on the Schottky junction induced by strain gradient, we use the Conductive-AFM tip to exert stress on the graphene surface and in situ read-out the current (\(I_{gsj}\)) by sweeping the bias voltage \(V_{bias}\), the schematic of the experimental setup is shown in Fig.2a.
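For concreteness, such a fit can be performed as sketched below with SciPy; the voltage and current arrays here are synthetic placeholders rather than the measured data, and only the constants quoted in the text are assumed.
```
# Sketch: extracting phi_B and eta by fitting forward-bias I-V data to Eq. (1).
import numpy as np
from scipy.optimize import curve_fit

A_STAR, DELTA_P, AREA, T = 0.01158, 0.135, 1.25e-3, 300.0   # values from the text
KT = 8.617e-5 * T                                            # k_B*T in eV

def gsj_current(V, phi_B, eta):
    J0 = A_STAR * T**3 * np.exp(-(phi_B - DELTA_P**2 / (2 * KT)) / KT)
    return AREA * J0 * (np.exp(V / (eta * KT)) - 1.0)

# Placeholder forward-bias sweep (would be the measured data in practice).
V_fwd = np.linspace(0.05, 0.40, 30)
I_fwd = gsj_current(V_fwd, 0.60, 1.8) * (1 + 0.02 * np.random.randn(30))

(phi_B, eta), _ = curve_fit(gsj_current, V_fwd, I_fwd, p0=[0.6, 1.5])
print(f"phi_B = {phi_B:.2f} eV, eta = {eta:.2f}")
```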
The tip coated with conductive diamond (Adam-AD40) is directly pressed onto the CVD graphene layer. Note that, in these AFM based experiments, \(V_{bias}\) is applied to the bottom ohmic electrode and the current reversely flows through the Gr/Si Schottky junction to the probe for current detection. Thus the forward bias response is under the condition of negative bias voltage (\(V_{bias}<0\)). Directly experimental calibration of the tip-induced strain distribution is still challenging [24]. In
Figure 2: a. Schematic of the CAFM experimental setup. b. Finite element analysis (FEA) of strain distribution in the substrate under pressing. c. Energy bandgap structure of the GSJ with laser on. d. The bandgap structure bending by tip-pressing induced flexoelectricity modulation.
the following, we estimate the strain distribution based on finite element simulation via COMSOL. As shown in Fig.1, the graphene is etched into ribbons. Due to the atomic thickness of graphene and the free-standing edge of the ribbon, the graphene layer has negligible effects on the out-of-plane deformation of the junction. The finite element analysis on the strain distribution under pressing thus only takes the tip and the silicon substrate into consideration for simplification. We set the tip radius to 10 nm according to the SEM imaging measurement. The bottom of the substrate is set to be fixed according to the actual situation of the experiment. The tip is set using diamond material with density 3515 kg/m\({}^{3}\), Young's modulus \(105\times 10^{10}\) Pa and Poisson's ratio 0.1 while the silicon substrate with density 2329 kg/m\({}^{3}\), Young's modulus \(170\times 10^{9}\) Pa and Poisson's ratio 0.28. The half-cross-section of the substrate is shown in Fig.2b, as the contact area has rotation symmetry of z-axis. While at an applied force on the order of \(\sim\)5 \(\mu\)N, the corresponding strain is about \(\pm 0.3\). For the vertical direction (z-axis), the strain gradient is estimated to be on the order of \(10^{7}\) m\({}^{-1}\).
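A simple back-of-the-envelope check of the quoted strain-gradient scale (not a substitute for the COMSOL FEA) is the ratio of the peak strain to a decay length comparable to the tip radius:
```
# Order-of-magnitude check of the strain gradient under the tip.
strain = 0.3            # peak strain under the tip (from the FEA above)
decay_length = 10e-9    # characteristic length ~ tip radius, in metres
print(f"strain gradient ~ {strain / decay_length:.1e} 1/m")   # ~3e7 1/m
```
which is consistent with the stated order of \(10^{7}\) m\({}^{-1}\).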
For a typical graphene low-doped n-type silicon junction, the electronic band structure and energy-band diagram are shown in Fig.2c. While under illumination, part of the photo-generated carriers moves to the graphene thus the barrier height slightly changes. Note that in our following experiments, the laser central wavelength from AFM is 840 nm. In Fig.2d, we show the further changes (dashed blue lines) of the barrier height. While under tip pressing, the strain gradient in the substrate silicon breaks the inverse symmetry of the centrosymmetric material, which results in a flexoelectric effect [29; 20] (the blue arrow represents the polarization). The corresponding built-in electric field thus increases the SBH. This leads to the flexoelectricity induced Schottky barrier height tuning of the device, which will consequently affect the photovoltaic effect in this junction.
Figure 3 shows the GSJ I-V responses obtained directly by AFM while applying forces. While the laser is turned off, the GSJ shows a typical Schottky diode feature (the orange line in Fig.3a). As aforementioned, the forward bias range is when \(V_{bias}<0\), while the reverse bias area \(V_{bias}>0\). While \(V_{bias}>\)\(-0.3\) V, the response current
reaches the upper testing limit of our AFM instrument. The open-circuit voltage (\(V_{oc}\)) increases to \(\sim\)0.4 V, shown clearly in Fig.3b. The short-circuit current (\(J_{sc}\)) cannot be directly obtained because of the instrument current limits. Nevertheless, the photoresponse (in Fig.3a and Fig.3b) implies that this GSJ device has an evident photovoltaic effect. To further investigate the mechanical manipulation of the PV effect, we applied forces by AFM nanoindentation on the GSJ. We first turn the laser off to study the force-induced effects on the GSJ. The response current (absolute value) in the forward bias area is enlarged by increasing the applied force (Fig.3c) from 2.07 \(\mu\)N to 7.58 \(\mu\)N, while the response current in the reverse bias area remains cut off. Similarly, for each applied force, we then collect the current data by sweeping the bias voltage \(V_{bias}\) with the laser turned on, shown in Fig.3d. In the forward bias area, the GSJ current (\(I_{GSJ}\)) keeps increasing (in absolute value) with the applied force, in contrast to Fig.3c. Due to the laser-induced photoresponse, the current changes dramatically, and there exists an evident photovoltaic area. By directly contrasting the I-V responses under different applied forces, the half-logarithmic graph of the absolute GSJ current as a function of the bias voltage is shown in Fig.4a. For the laser-off cases (orange lines), the open-circuit voltage \(V_{oc}\) remains stable under different applied forces. Remarkably, for the laser-on cases (blue lines), due to the photovoltaic effect, \(V_{oc}\) varies from near zero to around \(-\)0.5 V in contrast to the laser-off cases. The absolute value of \(V_{oc}\) is increased from 0.38 V to 0.46 V, by more than 20%, as shown in Fig.4b (the blue circles). Fitting the laser-off cases with Eq.1, the Schottky barrier height as a function of applied forces is obtained, as shown in Fig.4c. The strain gradient in the substrate originates from the contact between the tip and the substrate surface [24], which breaks the inversion symmetry in the silicon [20; 28] and causes an extra flexoelectricity-induced built-in potential in addition to the native depletion layer in the GSJ. The SBH can thus be additionally increased by the applied forces.
Figure 3: GSJ I-V responses under tip-pressing obtained by AFM. Photo-response effects on the GSJ performance with fixed applied force (a. Linear ordinate coordinates; b. Logarithmic ordinate coordinates). c. GSJ I-V responses of increased applied forces with laser turned off. d. GSJ I-V responses of increased applied forces with laser turned on.
Figure 4: a. The GSJ current as a function of bias voltage under different applied forces. The orange curves show the laser-off cases while the blue curves show the laser-on cases. b. The open-circuit voltage as a function of applied forces. The blue points represent the laser-on cases while the orange points represent the laser-off cases. c. The extracted Schottky barrier height as a function of applied forces.
For this GSJ, the open-circuit voltage can be obtained for \(I=0\) while short-circuiting current for \(V=0\). When \(V=0\), the simple expression can be obtained [7]\(I_{sc}\approx-I_{ph}\), where \(I_{ph}\) is the photogenerated current. The \(V_{oc}\) can be expressed as,
\[V_{oc}\approx\frac{\eta k_{B}T}{q}\ln(\frac{I_{ph}}{I_{0}})\approx\frac{\eta}{ q}\phi_{B}+Const. \tag{2}\]
The \(V_{oc}\) has approximately a linear relation with the Schottky barrier height, and agrees well with experiments.
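A quick numerical check of Eq. (2) can be made with the barrier and \(V_{oc}\) changes reported in this work; the inferred effective ideality factor is illustrative only.
```
# Consistency check of Eq. (2): dVoc = eta * dphi_B (phi_B expressed in eV).
d_phi_B = 0.70 - 0.65          # eV, barrier change reported under tip pressing
d_Voc   = 0.46 - 0.38          # V, measured open-circuit voltage change
eta_eff = d_Voc / d_phi_B
print(f"effective ideality factor eta = {eta_eff:.2f}")   # ~1.6, a plausible value
```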
Considering the photoresponse of Gr/Si Schottky junction, we need the short-circuit current \(I_{sc}\) and fill factor \(FF\) for estimation. However, because of the limitation of our instrument, the short-circuit current can not be precisely measured experimentally. Here, our laser power has not been changed as an AFM detection light source for the laser-on cases, i.e. photogenerated current \(I_{ph}\) is fixed. We then assume that the short-circuit current for different applied forces are equal, i.e. for each case, when \(V_{bias}=0\), the GSJ reaches the reverse saturated state. In addition, when \(V_{bias}>V_{oc}\), the current rises sharply to the upper limit in a series of parabolic shapes. Considering the actual \(I_{sc}\gg\)20 nA, the fill factors \(FF\) are assumed to be approximately equal for the cases under different forces. Consequently, the power conversion efficiency (PCE) can be expressed as \(PCE=\frac{I_{sc}V_{oc}FF}{P_{in}}\), which can be improved for about at least 20% attributed to the enhancement of \(V_{oc}\) under strain gradient.
In this article, we investigate the flexoelectricity-enhanced photovoltaic effect in GSJ. Using the AFM, we apply different forces to generate the strain gradient in the junction and in situ obtain the response current of GSJ under these conditions. By experimental validation, we find that the open-circuit voltage (\(V_{oc}\)) can be enhanced from 0.38 V to 0.46 V by the applied forces when the laser is turned on and the Schottky barrier height can be increased from 0.65 eV to 0.7 eV, respectively. As a consequence, the power conversion efficiency can be enhanced for more than 20% with the assumption that the short-circuit current \(I_{sc}\) is identical and similar fill factor \(FF\). Our work shed light on the GSJ devices for PV effect enhancement with mechanical manipulation approach and pave the way for the PV enhancements 2D heterostructure based on the similar mechanism. At present, our research is still in the early stage of exploration, and combining this technology with integration is still a challenge for future applications.
###### Acknowledgements.
This work was supported in part by the National Natural Science Foundation of China under Grants 92164106 and 61874094, China Postdoctoral Science Foundation (2021M692789), and in part by the Fundamental Research Funds for the Central Universities under Grants K20200060 and 2021FZZX001-17. We also thank Prof. Chunli Zhang for his valuable discussions and comments.
|
2303.02762 | Reverse Engineering Word-Level Models from Look-Up Table Netlists | Reverse engineering of FPGA designs from bitstreams to RTL models aids in
understanding the high level functionality of the design and for validating and
reconstructing legacy designs. Fast carry-chains are commonly used in synthesis
of operators in FPGA designs. We propose a method to detect word-level
structures by analyzing these carry-chains in LUT (Look-Up Table) level
netlists. We also present methods to adapt existing techniques to identify
combinational operations and sequential modules in ASIC netlists to LUT
netlists. All developed and adapted techniques are consolidated into an
integrated tool-chain to aid in reverse engineering of word-level designs from
LUT-level netlists. When evaluated on a set of real-world designs, the
tool-chain infers 34\% to 100\% of the elements in the netlist to be part of a
known word-level operation or a known sequential module. | Ram Venkat Narayanan, Aparajithan Nathamuni Venkatesan, Kishore Pula, Sundarakumar Muthukumaran, Ranga Vemuri | 2023-03-05T20:08:47Z | http://arxiv.org/abs/2303.02762v1 | # Reverse Engineering Word-Level Models from Look-Up Table Netlists
###### Abstract
Reverse engineering of FPGA designs from bitstreams to RTL models aids in understanding the high level functionality of the design and for validating and reconstructing legacy designs. Fast carry-chains are commonly used in synthesis of operators in FPGA designs. We propose a method to detect word-level structures by analyzing these carry-chains in LUT (Look-Up Table) level netlists. We also present methods to adapt existing techniques to identify combinational operations and sequential modules in ASIC netlists to LUT netlists. All developed and adapted techniques are consolidated into an integrated tool-chain to aid in reverse engineering of word-level designs from LUT-level netlists. When evaluated on a set of real-world designs, the tool-chain infers 34% to 100% of the elements in the netlist to be part of a known word-level operation or a known sequential module.
## I Introduction
FPGA reverse engineering plays an important role in understanding potential security concerns. Reverse engineering of RTL models from logic-level netlists is also helpful in understanding adversarial designs for which an RTL description of the design is unavailable. However, the heterogeneous functionalities of Configurable Logic Blocks (CLBs) in today's FPGAs increase the complexity of reverse engineering to detect word-level structures implemented in the FPGA fabric.
There are three stages to reverse engineering an FPGA design - _bitstream extraction_, _netlist extraction_ and _specification discovery_[1]. _Bitstream extraction_ is the process of recovering the bitstream file that is used to configure an FPGA. The bitstream file can be intercepted during the boot-up process of an FPGA [2]. _Netlist extraction_ is the process of recovering the LUT-level netlist from the extracted bitstream. FPGAs are made up of a series of CLBs. Depending on the type of FPGA, the CLB can contain different logic primitives. Modern FPGAs have several primitives such as Look-Up-Tables (LUT), storage elements such as latches and flip-flops, shift registers, RAM blocks, multiplexers, and carry blocks. Specification discovery is the process of extracting high-level functional modules from gate-level or LUT-level netlists. The extracted functional modules can be used to generate a RTL model in VHDL or Verilog. In this paper, we assume that the bitstream extraction and netlist extraction are completed and focus on the problem of specification extraction from flate netlists of LUTs, Carry modules and flip-flops. There are excellent tools such as Project X-ray [3] for LUT-level netlist reverse engineering.
For the purpose of reverse engineering, we assume a flattened LUT-level netlist. Therefore, we lose the original modular hierarchy of the netlist. In addition, word-level structures in the design might be cross-optimized with other components in the netlist. We also assume that we do not possess any information about the names of the resources and nets in the flattened netlist. The primary goal of this paper is to provide an automated tool that can analyze the flattened LUT-level netlist and generate a high-level representation for this netlist. A high-level representation of the netlist not only abstracts the netlist description to the word-level but also provides a modular hierarchy to help understand the netlist. Our tool-chain analyzes carry-chains in FPGA netlists to identify word-level operations and ALUs. In addition, the tool-chain adapts state-of-the-art ASIC reverse engineering techniques to identify sequential components and other word-level combinational operations in LUT-level netlists.
This paper is organized as follows. Section II discusses the state-of-the-art ASIC and FPGA gate-level netlist reverse engineering techniques. Section III discusses our new research in how carry-chains are analyzed to identify word-level operations and ALUs. Section IV highlights the state-of-the-art techniques adapted by the tool and shows how the different techniques work in tandem to obtain a top-level RTL model. The tool is evaluated on a set of real-world designs and the results are presented in Section V.
## II Related Work
### _RTL or word-level models for ASIC netlists_
Subramanyan et al. [4] focused on the detection of commonly used datapath components such as adders, multiplexers, counters, registers, and RAM modules in ASIC gate-level netlists using structural and functional analyses. A gate coverage metric was used to quantify their results. The paper shows that the components detected varied from 45% to 94% in real-world benchmarks. Gascon et al. [5] proposed a method to identify commonly used word-level modules by means of
Permutation-Independent Equivalence Checking (PIEC). The techniques were evaluated successfully on a set of reverse engineering benchmarks from ISCAS'85 and academic implementations of ALUs. Meade et al. [6] provided a toolset for segregating control logic and datapath logic in ASIC gate-level netlists. They analyzed the structure of the fan-in for each gate in the netlist and grouped elements based on similarity. They also provided a way to obtain state registers from the control logic and further construct FSMs and describe them using a high-level description. However, the techniques described in these papers were tested only on gate-level netlists of ASICs.
### _FPGA bitstream to netlist reverse engineering_
Benz et al. [7] present a tool-chain for partial conversion of a FPGA bitstream to LUT level netlist. The tool implements a correlation technique by using the XDLRC file available in Xilinx FPGAs. The bitstream reversal tool utilizes this XDLRC to map the XDL components to the bits in the bitstream and produces a netlist in XDL file format. The tool is capable of extracting only a fraction of the netlist from the bitstream.
Zhang et al. [8] present a tool-chain to obtain an RTL description in Verilog from a FPGA bitstream. The tool consists of Library Generator (LG), Bitstream Reversal Tool (BRT), and a Netlist Reversal Tool (NRT). The LG maps the bitstream to logic primitives in the configurable logic blocks. The NRT includes a procedure in which a reverse BFS from the primary output and ending with the primary inputs extracted the clusters in the netlist. Note that the BFS performed from primary output to the primary input does not recover the module boundaries or reveal information about the actual hierarchy of the design or the word-level groupings of elements.
### _Netlist to RTL description for FPGAs_
Albartus et al. [9] provided techniques to identify register groupings in both FPGA and ASIC gate-level netlists. They used control signals, neighborhood information, and other structural information to obtain highly accurate groupings. Fyrbiak et al. [10] proposed a way to detect trojans in FPGA and ASIC designs by comparing subcircuits with library modules. The library modules were un-tampered circuits and a graph similarity measure was used to compare with the target subcircuit to check for Trojans. However, these techniques do not directly provide insight into the different operators and other datapath components in the circuit.
Although there is significant research done in the field of gate-level ASIC netlist reverse engineering, only a few techniques address FPGA designs. In FPGAs, LUTs, carry modules and other combinational modules absorb considerable amount of logic. Methods that inspect the subcircuit structure containing gates in ASIC netlists cannot be immediately applied to FPGA netlists. Most of the current research on FPGA netlist reverse engineering uses structural analysis and graph comparison techniques to identify word-level modules and flip-flop groupings in a netlist. Current research also converts the LUTs, carry blocks and other combinational modules to a generic representation such as the AND-OR-INV logic [10]. However, some of the structural information is lost by converting the design to such a generic representation.
The tool-chain presented in this paper identifies datapath components and word-level modules by using structural and functional analyses. The tool-chain also analyzes fast carry-chains in modern FPGA netlists to identify word-level operations. Even though the tool-chain described in this paper is specific to current Xilinx FPGA architectures, the techniques can be extended to include other FPGA families by minimal modifications to the algorithms. Along with the identification of components, the tool-chain also produces a high-level representation in Verilog.
## III Extracting Word-level Operators Using Carry-Chains
Section III-A provides necessary background on carry-chains in Xilinx FPGAs. Section III-B discusses how carry-chain analysis is used to detect arithmetic and comparison operations. Section III-C shows how carry-chain analysis is extended to detect ALUs.
### _Carry-chains_
To significantly improve performance in common arithmetic operations, modern FPGAs are designed with low-level primitives that can perform specific tasks in a highly optimized manner. One such primitive is the carry module. Hauck et al. [11] shows how significant performance can be achieved by using carry-chains in FPGAs. Therefore, using the carry module is an integral part of the synthesis of operations in both Xilinx and Altera FPGAs1.
Footnote 1: Now AMD and Intel respectively
The carry module in Xilinx FPGAs shown in Fig. 1 consists of MUXCY multiplexers and XORCY xor gates [12]. The MUXCY component selects the _carry-in_ for each bit. The
XORCY component performs the arithmetic operation. The select line for the multiplexer MUXCY is provided through the input S and one of the data inputs to the multiplexer is provided through the input DI. The carry module performs a specific function based on the inputs provided on the S, DI, CYINIT and CI pins. Multiple carry modules are cascaded to perform wider arithmetic logic. By analyzing the function performed by the logic in the transitive fan-in of each input pin in the carry-chain, we can identify the operation performed by the chain.
Fig. 1: CARRY4 element in Xilinx 7-series FPGAs [12].
In Xilinx 7-series architecture, carry-chains are used to implement arithmetic and comparison operations. Bitwise operations are implemented directly using LUTs. Arithmetic operations are word-level operations that produce a multi-bit word as the output. On the contrary, comparison operations produce a single bit as the output. Therefore, by analyzing the output pin in the carry chain, we can differentiate between pure arithmetic and comparison operations.
As an example, Figure 2 shows the logic connected to the input and output pins for a 16-bit adder. The two sum operands are XORed and provided as input to the S pin. The DI pin receives input from either of the operands. The DI pin is the data input '0' of the MUXCY. The select line of the MUXCY is the result of the XOR operation between the two input operands. The select line is low if and only if both the operands are the same. This is why the DI pin can receive either of the operands. The CYINIT pin receives a '0' since addition is performed. For a subtraction, one of the operands is inverted and the CYINIT pin receives a '1' in order to complete the two's complement negation. The output word is taken from the O pins of the carry-chain.
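The adder configuration described above can be checked with a short bit-level simulation of the MUXCY/XORCY chain; this is an illustrative behavioural model, not the vendor primitive itself.
```
# Sketch: simulate the adder wiring S_i = a_i XOR b_i, DI_i = a_i, CYINIT = 0.
def carry_chain_add(a: int, b: int, width: int = 16) -> int:
    carry = 0                         # CYINIT = 0 for addition
    result = 0
    for i in range(width):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        s, di = ai ^ bi, ai           # fan-in functions at the S and DI pins
        o = s ^ carry                 # XORCY output -> sum bit O[i]
        carry = carry if s else di    # MUXCY: pass carry-in when S=1, take DI when S=0
        result |= o << i
    return result

assert carry_chain_add(0x1234, 0x0FFF) == (0x1234 + 0x0FFF) & 0xFFFF
```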
### _Carry-Chain Analysis_
Carry elements are utilized for implementing arithmetic and comparison operations. Carry elements are considered as the starting point in detecting these operators. Once the different carry-chains in the netlist are identified, we perform the analysis on the input pins of each CARRY4 gate in the carry-chain. A reverse Breadth-First Search (BFS) is performed at each input pin of the CARRY4 element in the carry-chain to identify the elements in the transitive fan-in of each pin. BFS traversal stops at a _potential_ module boundary.
Algorithm 1 shows the procedure to detect carry-chains in LUT-level netlists. The input to the procedure is the list of all carry elements in the netlist. For each carry element in the netlist, we first check the connection to the CI pin. If the pin is connected to ground, the carry element is the start of the carry-chain. Lines 5-10 are executed when the start of the chain is identified. In line 6, the starting carry element is added to the carry-chain. Setting the starting carry element as the current carry element, we look at the CO[3] pin of the element. If there is another carry element connected to the current element, we append it to the chain and set it as the new current carry element. We repeat this process until we cannot find another carry element connected to the CO[3] pin. This helps us find a single carry-chain in the netlist. In line 10, we append the chain to a list of carry-chains. When this is run across all the carry elements in the netlist, we can identify all the carry-chains. The function returns the list of all carry-chains.
```
1: procedure detectCarryChains(carryGates)
2:     carryChains ← [ ]
3:     for carryGate in carryGates do
4:         if CI pin of carryGate is 0 then
5:             currentChain ← [carryGate]
6:             while CO[3] of carryGate is linked to CARRY4 do
7:                 carryGate ← carryGate linked to CO[3]
8:                 currentChain.append(carryGate)
9:             end while
10:            carryChains.append(currentChain)
11:        end if
12:    end for
13:    return carryChains
14: end procedure
```
**Algorithm 1** Detection of carry-chains
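A compact Python rendering of Algorithm 1 is sketched below. The netlist accessors (`ci_is_constant_zero`, `next_carry_on_chain`) are placeholders for whatever query interface the netlist library provides (the tool-chain itself uses HAL); they are assumptions made only for this sketch.

```
def detect_carry_chains(carry_gates, netlist):
    carry_chains = []
    for gate in carry_gates:
        # A chain starts where the cascade input CI is tied to constant 0
        if not netlist.ci_is_constant_zero(gate):
            continue
        chain = [gate]
        nxt = netlist.next_carry_on_chain(gate)   # CARRY4 driven by CO[3], or None
        while nxt is not None:
            chain.append(nxt)
            nxt = netlist.next_carry_on_chain(nxt)
        carry_chains.append(chain)
    return carry_chains
```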
#### III-B1 Detection of Pure Operations
A potential module boundary is delineated by flip-flops, RAM blocks, primary inputs and other CARRY4 elements, which are potential input sources (operands) to an operator. A logic function is obtained for each pin in terms of the nets at the modular boundary by using all the elements in the transitive fan-in of the corresponding pin. In pure operations, a carry-chain implements only a single operation. Therefore, in pure operations, the fan-in function is the same at all the S pins and the fan-in function is the same at all the DI pins. A permutation-independent form for the fan-in function at the S and DI pins of a CARRY element is obtained using the fast Boolean matching technique presented by Huang et al. [13]. By comparing the permutation-independent form of the logic functions with a library, pure operations can be detected. For each operation, the library contains the corresponding permutation-independent fan-in functions at the S and DI pins of the CARRY4. For example, the adder entry in the library contains the permutation-independent form of the XOR function at the S pin and the permutation-independent form of the BUFFER function2 at the DI pin.
Footnote 2: Here, BUFFER function refers to the function Y = A
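The library lookup can be pictured as a small dictionary keyed by the canonical S and DI fan-in functions. Only the adder entry below is taken from the description above; the subtractor entry and the string encoding of the canonical forms are assumptions made purely for illustration.

```
# Hypothetical encoding of canonical fan-in functions -> operation
OPERATION_LIBRARY = {
    ("XOR(A,B)", "BUF(A)"): "add",        # adder entry described in the text
    ("XNOR(A,B)", "BUF(A)"): "subtract",  # assumed entry, shown only as an example
}

def lookup_pure_operation(s_canonical, di_canonical):
    return OPERATION_LIBRARY.get((s_canonical, di_canonical), "unknown")
```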
Fig. 2: I/O configuration for a 16-bit adder implemented using a carry-chain

Algorithm 2 shows the procedure to detect pure operations using carry-chains. The function receives the carry-chain as the input and returns the elements that are part of the carry-chain operation and the identified operation as the output. If the operation is unknown, _None_ is returned. At first, for each S and DI pin in each carry element in the carry-chain, we perform a reverse BFS traversal and identify the elements in the fan-in. All the elements in the traversal are added to the module. This is shown in line 7. If the number of inputs in the fan-in is less than six, it signifies a potential pure operation that can be detected. This is shown in line 8. The number was chosen after analyzing the number of inputs in commonly used pure carry-chain-based operations. For pure carry-chain-based operations such as addition, subtraction and comparison, the number of inputs in the fan-in was identified to be less than six. Using the elements in the fan-in, we first obtain a fan-in equation and then identify a permutation-independent canonical form for the equation. This is shown in lines 9-10. We examine the output nets at the O and CO pins of the carry-chain to identify the module boundary for the operation. If the output nets are connected to flip-flops, the flip-flops are added to the list of elements in the module as shown in line 15. In this case, the output word is mapped to the outputs of the flip-flops. However, if the output nets are not connected to flip-flops, the output word is mapped to the output pins of the carry-chain. The operation is identified by comparing the S and DI fan-in functions of the candidate carry-chain against a library of S and DI fan-in functions.
```
1: procedure identifyOperation(carryChain)
2:     operation ← "unknown"
3:     moduleGates ← [ ]
4:     for carryGate in carryChain do
5:         for pin in carryGate do                ▷ input pins
6:             fanIn ← extractFanIn(pin)
7:             moduleGates.add(fanInGates)
8:             if no. of inputs is < 6 then
9:                 pinEqn ← getEquation(fanIn, pin)
10:                pinEqn ← getCanonical(pinEqn)
11:            end if
12:        end for
13:        for pin in carryGate do                ▷ output pins
14:            if output pin is connected to a flip-flop then
15:                moduleGates.append(flip-flop)
16:            end if
17:        end for
18:    end for
19:    if function of S, DI pins of carryGate in library then
20:        operation ← matched library function
21:    end if
22:    return moduleGates, operation
23: end procedure
```
**Algorithm 2** Detection of pure operations using carry-chains
#### III-B2 Detection of Cross-Optimized Operations
Since a standard structure is found in pure operations, it is straightforward to identify these operations. However, we need to devise a different approach to identify the functionality of a carry-chain that performs more than one operation. An _Add_Sub_ operation performs either an addition or a subtraction based on a specific condition. In ASIC netlists, multiplexers are used at the input or output of an _Add_Sub_ operation to choose between an addition or a subtraction. Since LUTs absorb the multiplexer logic, an _Add_Sub_ operation synthesized using an FPGA might not implement separate addition and subtraction modules. In order to identify carry-chains that perform the _Add_Sub_ operation that are dependent on a select line, we first identify the select line in the transitive fan-in of the input pins.
We identify select lines by analyzing the input nets in the fan-in of S pin for each carry element in the carry-chain. A net connected to all S pins of a carry-chain is classified as a candidate select line. The carry-chain implements an operation based on the signal value on the select line. For each input combination of the select line, we identify the fan-in function of the carry elements in the carry-chain. By comparing the permutation-independent form of the fan-in functions with a library, the operation can be detected. Thereby, we identify the operation performed by the carry-chain for each combination of the select line.
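A sketch of the select-line heuristic: any net that appears in the transitive fan-in of every S pin of the chain is kept as a candidate. The helpers `s_pins` and `fanin_nets` are placeholders for the netlist queries used by the tool-chain, not actual API names.

```
def candidate_select_lines(carry_chain, netlist):
    common = None
    for gate in carry_chain:
        for pin in netlist.s_pins(gate):
            nets = set(netlist.fanin_nets(pin))      # nets at the potential module boundary
            common = nets if common is None else (common & nets)
    return common or set()
```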
#### III-B3 RTL representation
In FPGA netlists, the input and output pin ordering of carry modules can be used to correctly identify the input and output word. Using the ordering information, the operations can be extracted to a higher level of abstraction. Input pins S[0] to S[3] and DI[0] to DI[3] of the first CARRY4 in the chain correspond to the first four bits of the input word. The S[0] to S[3] and DI[0] to DI[3] of the cascaded CARRY4s in the chain correspond to the respective bits of the input word.
Pins S[0:3] and DI[0:3] have the same connectivity for all carry-chains. The first carry element is indicated by the CI pin being set to '0'. The last carry element is indicated by the CO pin not connected to another carry element. The input and output pin orderings of carry elements aid in understanding the order of bits in the words obtained in the previous steps. The first and second input words are obtained by comparing the inputs in the fan-in of the S and DI input pins of carry elements with a reference library. For each operation, this reference library contains the corresponding sequence in which the input operands appear at the fan-in of the S and DI pin. For example, the adder entry in the library contains ((1,A)) as the input to the fan-in cone of DI pin and contains ((1,A),(2,B)) as the inputs to the fan-in cone of the S pin. The numbers represent the sequence and the letter represents the corresponding input word.
The bits of the output word are simply the output nets of the carry-chain. However, this technique only works when the ordering of inputs in the transitive fan-in of each pin is the same. If the order of the inputs in the transitive fan-in is changed, we need to first differentiate between the input operands. Then, the ordering can be identified using the carry-chain. Flip-flops are the most common source of inputs. Flip-flop grouping techniques can be used to differentiate between different input words (see Section IV-B).
### _Detection of ALUs_
Inspecting the fan-in cone at the input of each pin of a carry element is insufficient to detect words when the logic
contained in the fan-in-cones is complex. Therefore, a different approach is required to identify complex combinational logic such as the ALU. To this end, potential ALUs are first identified by obtaining the register boundaries at the input and output of the carry-chain. For identifying the functionality of the ALU, we use the Quantified Boolean Formula (QBF) solver approach described by Gascon et al. [14]. However, we adapt it to FPGA netlists.
To identify the ALU boundaries, we first apply a reverse BFS to obtain the input word. To obtain the output word, we perform forward BFS starting at the CARRY elements and stopping at a potential module boundary (flip-flop, RAM module and primary output). All the elements encountered in this traversal might not be part of the same carry-chain-based operation. Therefore, a reverse BFS technique starting at the carry elements and stopping at the potential module boundaries is used to identify the inputs for these elements. By doing this set of traversals, a register-to-register boundary that contains carry-chains interspersed with other combinational logic elements is obtained. A typical ALU receives two input words and produces an output word based on a condition. This condition is usually referred to as the _opcode_. If the register-to-register boundary obtained matches the standard structure of an ALU, it is marked as a potential ALU.
Figure 3 shows the QBF formulation setup to identify a given ALU's functions. Once the ALU boundaries are identified, we identify the functions of the candidate ALUs by comparing them with a library of known components. This library contains Verilog descriptions of commonly used operations that include arithmetic, comparison, logical, bitwise and shifting operations. Each operation in this library is designed with a generic width, say N. The width N is updated to match the candidate ALU width during the comparison. To compare with the reference operator, the candidate ALU is represented using a Verilog description where the LUTs, carry modules and multiplexers in the module are expressed using a simplified form of their Boolean functions. _Operand A_ and _Operand B_ of the ALU are provided as inputs to the reference operator circuit and the candidate ALU. The candidate module receives the opcode as side inputs. The equivalence function is used to form the Miter circuit to test for functional equivalence between the outputs of the two circuits. The QBF solver checks whether an _opcode_ exists such that the two circuits are equivalent for all input combinations. By testing the candidate ALU with various reference operators, we can identify the set of functions performed by the ALU and the corresponding _opcode_ values.
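The exists-forall query behind the QBF formulation can be illustrated, for tiny bit-widths, by a brute-force Python check: does some opcode make the candidate match the reference operator on every operand pair? The real tool-chain delegates this query to a QBF solver (via Yosys), so the code below is only a toy model; the four-function ALU used in the example is invented for the demonstration.

```
from itertools import product

def find_opcode(candidate, reference, width=4, opcode_bits=2):
    """Return an opcode for which candidate(a, b, opcode) == reference(a, b) for all a, b."""
    for opcode in range(1 << opcode_bits):
        if all(candidate(a, b, opcode) == reference(a, b)
               for a, b in product(range(1 << width), repeat=2)):
            return opcode
    return None

mask = 0xF
toy_alu = lambda a, b, op: [(a + b), (a - b), (a & b), (a | b)][op] & mask
print(find_opcode(toy_alu, lambda a, b: (a + b) & mask))   # -> 0 (the "add" opcode)
```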
## IV Reverse Engineering Tool-Chain for FPGA Designs
The proposed tool-chain combines the carry-chain analysis technique with existing state-of-the-art techniques for identifying word-level modules in ASIC netlists, which we adapted to LUT netlists. To detect bitwise operations, counters, and shift registers, we use the techniques proposed by Subramanyan et al. [4]. Section IV-A shows how K-cut analysis is used on FPGA designs to identify bitwise operations. Section IV-B shows how grouping and graph isomorphism techniques are used to identify sequential modules.
### _K-Cut Analysis_
Word-level operators are usually synthesized into bit-sliced designs as it is the most efficient way to perform word-level operations. The functions of these bit-slices are usually derived using a collection of logic gates. Since LUT-level netlists are synthesized using LUTs, we analyze the INIT values of the LUTs to identify the logic functions. Therefore, to identify the logic used in a bit-slice, we need to determine the set of LUTs that perform the logic. Cut enumeration is used to detect bit-slices. In a netlist, a cut of a net \(N\) refers to a set of nodes that are present in the transitive fan-in of \(N\) that can definitively assign a logic value to net \(N\). When the number of nets that are used to define \(N\) is less than or equal to \(k\), it is called a _k-cut_. Most of the operators in FPGA designs are synthesized using carry-chains. However, pure bitwise operations are not synthesized using carry-chains. Therefore, to improve the effectiveness of detecting word-level operations, we include this analysis.
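Cut enumeration itself can be sketched in a few lines of Python. Here `fanin` maps each net to the nets driving its LUT (empty for primary inputs or flip-flop outputs); this dictionary-based netlist model is an assumption made only for the sketch.

```
from itertools import product

def enumerate_cuts(net, fanin, k=6):
    """Return all cuts (frozensets of leaf nets) of size <= k that determine `net`."""
    cuts = {frozenset([net])}                      # the trivial cut
    drivers = fanin.get(net, [])
    if drivers:
        for combo in product(*(enumerate_cuts(d, fanin, k) for d in drivers)):
            merged = frozenset().union(*combo)
            if len(merged) <= k:
                cuts.add(merged)
    return cuts

# Example: out = f(a, b, c) with b = g(a, d)
fanin = {"out": ["a", "b", "c"], "b": ["a", "d"]}
print(enumerate_cuts("out", fanin))   # three cuts: {out}, {a, b, c}, {a, c, d}
```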
For an n-bit bitwise operation with _Out_ as its output and \(A\) and \(B\) as its inputs, _Out[i]_ is only dependent on _A[i]_ and _B[i]_. Thus, it is not possible to identify a word operation only using knowledge about the functionality of the bit-slices. To identify word-level bitwise operations, we analyze the neighborhood information along with the functions of each bit-slice. In real-world designs, these pure bitwise operations usually receive their inputs from a group of flip-flops and the outputs are connected to a group of flip-flops. To differentiate between flip-flops that are inputs and outputs to the bitwise operation from the other flip-flops in the netlist, the register stage identification algorithm provided in DANA by Albartus et al. [9] is used.
### _Detection of Sequential Modules_
When reverse engineering FPGAs, it is essential to apply techniques to detect sequential components formed by flip-flops. We use techniques described in [4] for identifying sequential components. Techniques described in [9] can also be used to identify sequential modules. Counters, shift registers, registers, and RAM modules are some of the commonly used sequential components. Xilinx 7-series FPGAs use embedded memory resources such as Block RAMs to implement designs that require RAM modules [15]. Thus, the goal is to detect counters, shift registers, and registers.
Fig. 3: QBF formulation

To ensure the synchronization of flip-flops in a register, these flip-flops usually have the same clock and control signals. Thus, grouping flip-flops in a netlist based on control signals provides us with potential registers in the netlist. Even though the idea behind this technique is very simple, it is an effective way to detect module boundaries in a netlist.
Due to the nature of the operation of counter and shift register circuits, the data flow between the flip-flops in these sequential components follows a standard structure. By analyzing the data flow between the flip-flops in a register grouping, we can identify counters and shift registers by comparing their Flip-Flop Connectivity Graphs (FFCGs)3. A flip-flop connectivity graph is a graph data structure in which the nodes are flip-flops and the edges are connections between the flip-flops. For constructing the FFCG, any combinational logic element (logic gates in ASICs) encountered in the traversal is replaced by a wire.
Footnote 3: This is similar to the Latch Connection Graph (LCG) mentioned in [4] and Data Flow Graph (DFG) mentioned in [9]
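A minimal sketch of the FFCG construction, assuming hypothetical netlist helpers `fanout_gates` and `is_flip_flop`: a forward traversal from every flip-flop collapses the combinational logic in between into a single edge.

```
from collections import deque

def build_ffcg(flip_flops, netlist):
    edges = set()
    for src in flip_flops:
        queue, seen = deque(netlist.fanout_gates(src)), set()
        while queue:
            gate = queue.popleft()
            if gate in seen:
                continue
            seen.add(gate)
            if netlist.is_flip_flop(gate):
                edges.add((src, gate))           # combinational path collapsed to one edge
            else:
                queue.extend(netlist.fanout_gates(gate))
    return edges
```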
Once the different flip-flop groupings and sequential components in the circuit are identified, a modular boundary is outlined by identifying the gates that are part of the module. This is done by analyzing paths between each pair of flip-flops in a grouping. We perform forward BFS at the output of each flip-flop in the grouping and exit when we encounter another flip-flop. If the destination flip-flop obtained in each traversal is part of the same grouping, we add the combinational elements encountered in these paths as part of the module. If the destination flip-flop is not part of the same grouping, we disregard the combinational elements in this path as they are not part of this module. When this is done on all the paths, a complete modular boundary is obtained.
Counters and shift registers are compared with a reference counter and shifter of the same size. If there is a perfect match in the mapping between all the input and output signals in the reference component and the inferred component, the component is extracted to a high-level description. If the structural mapping fails, we write a behavioral description for each flip-flop in the grouping using the logic expressions of the fan-in cones. The RTL for registers that are not classified as counters or shift registers is also written in a similar manner.
### _Tool-Chain Workflow_
Section III described how word-level operations can be detected using carry-chain analysis. Section IV-A and Section IV-B show how ASIC reverse engineering techniques are adapted to FPGA designs. This section describes how the reverse engineering tool-chain integrates the techniques and produces a top-level module that instantiates the identified word-level modules.
The tool-chain integrates the techniques in a particular order and ultimately provides a high-level description of the gate-level netlist. Figure 4 shows the order in which the techniques are integrated. HAL [16] is used to parse the netlist. Then, the different carry-chains are identified in the netlist, and carry-chain-based operations are mapped as part of a module as shown in blue. The remaining unmapped gates and nets are passed on to the next stage where the flip-flops are identified and grouped based on control signals. Flip-flop connectivity graphs are formed on these groupings and sequential components are detected as shown in green. On the remaining unmapped nets and gates, k-cut analysis is performed and bit-slices are identified. Using bit-slices that represent the same functionality, bitwise operations are detected as shown in red. An RTL description is written for all the identified modules. They are instantiated in the top-level module to form a complete netlist. The obtained high-level RTL is then verified using JasperGold for equivalence checking with the source HDL to confirm functional validity. However, to ensure equivalence in sequential circuits, we reset all flip-flops in the source HDL that do not have an initialization.
## V Experimental Results
The techniques described in this paper were implemented using Python. HAL [16] was used to perform fundamental operations on the netlist such as graph traversal and storing and retrieving data on design elements. All the benchmark designs were first synthesized using Artix 7-series and Zynq 7-series FPGAs using Xilinx Vivado. We set the _flatten hierarchy_ option to _full_, while we kept the other default synthesis and optimization settings. We delete all the original net IDs that may indicate the signal name in the original model and generate unique random IDs for all nets. We use these netlists for our reverse engineering experiments. Yosys [17] was used for the QBF SAT problem. Permutation-independent Boolean matching was performed using the _testnpn_ [13] command in _abc_ [18]. All experiments were done on an AMD Ryzen 7 processor with 16GB RAM.
A diverse set of real-world designs varying between arithmetic cores, DSP cores, and processors were chosen to perform this experiment. These benchmarks were taken from OpenCores [19]. The number of carry modules, LUTs, flip-flops, and memory resources for each design are shown in Table I. Section V-A shows the number of carry-chains detected and the number of operations that were inferred in each benchmark. Section V-B shows the detection accuracy of sequential components. Section V-C shows the number of operations detected in each benchmark and the coverage achieved. Section V-D discusses the execution time of the tool-chain. Section V-E discusses the overall results and provides ways to improve the gate coverage.

Fig. 4: Workflow of the tool-chain
### _Carry-chain analysis_
Table II shows the number of gates and number of carry-chains detected along with the number of known operations detected for each benchmark. The column _Detected Operations (%)_ shows the ratio of number of operations detected to the number of carry-chains in the benchmark. The column _Converted to RTL (%)_ shows the percentage ratio of number of carry-chains that were extracted to RTL with respect to the number of carry-chains in the benchmark. The column _Module Coverage_ % shows the percentage of elements in the netlist that were identified to be part of a module derived from the carry-chain. We include this percentage to highlight the number of gates that are part of a carry-chain-based operation in a netlist. The 128-bit AES core benchmark is not included because it did not contain any carry elements. We found that, on average, 54.19% of the elements in these netlists were identified to be part of a carry-chain-based module. However, only 29.08% of these modules were detected as a known operation. In the **Cordic core**, **Reed Solomon decoder** and **Canny edge detector** a large number of operations were detected. However, in the **Reed Solomon decoder** benchmark, detecting carry-chains was not enough to detect a significant portion of the netlist. Our tool successfully detected the ALU in the **Simple 8-bit microprocessor**. However, in the **M1 core** and **LXP32 processor**, we found that the register-to-register boundary did not match with the ALU structure. Thus, the operations detected were low in these designs. Since ALUs contain significant combinational logic, the coverage % was also low in these designs. However, with the correct module boundary, we were able to identify the functionality of the ALU using the QBF solver technique.
### _Detection accuracy of sequential modules_
Table III shows the number of flip-flops and the number of sequential components detected in each benchmark. The columns _Registers, Counters_, and _Shifters_ show the ratio of correctly identified components with respect to the total number of sequential components identified. The column _Known Seq. coverage (%)_ shows the percentage of LUTs and flip-flops that were identified as part of a sequential module. All the flip-flops in the **128-bit AES core** received the same control signal. Therefore, no sequential components were identified. In the **Cordic core**, most of the flip-flops and LUTs were identified as part of a carry-chain based operation. On average, 27.04% of the LUTs and flip-flops were identified to be part of a known sequential module.
### _Gate coverage_
Table IV shows the gate coverage for each benchmark. The _Module (%)_ column shows the gate coverage as the percentage of gates identified to be part of a module with respect to the total number of gates. The _Known component (%)_ column shows the gate coverage as the percentage of gates identified to be part of a known operation or sequential component with respect to the total number of gates. Due to the high number of operations in the arithmetic and DSP cores, a high number of gates are identified as part of a known operation. In the **128-bit AES core** benchmark, 20 bitwise operations were identified. Due to the absence of carry-chains in this design, k-cut analysis was essential to detect components in this design. On average, 56.12% of the gates in the design were identified to be part of a known operation or sequential component. On average, 75.18% of the gates in the design were identified as part of a module.
### _Execution time_
The maximum time taken for execution of the tool-chain is around 5 minutes for the **128-bit AES core** benchmark. Since the design did not contain any carry-chains and no registers were grouped by shared control signals in this design, k-cut analysis was run on the entire netlist, resulting in a high execution time.
### _Discussion_
The tool-chain produced good quality detection coverage on netlists of DSPs and arithmetic cores. By using state-of-the-art techniques to detect bit-slices and sequential components, we improved the coverage percentage on the UART and processor designs. Operations performed by the ALUs were detected by extending the carry chain detection algorithm assuming that we have the exact modular boundary of the ALU. By adapting select-line detection techniques to detect multiplexers and other grouping techniques, we can provide a comprehensive solution to detect ALUs. We were unable to identify the functionality of some carry-chain modules in these benchmarks as they did not match the structure of our library components. With the known module boundary, we can use the QBF-solving technique to detect the functionality of these modules.
TABLE I: List of Benchmarks

| SL No. | Benchmark | LUTs | FFs | Carrys | Mems | Total gates |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Simple 8-bit processor | 87 | 44 | 2 | 0 | 133 |
| 2 | Hilbert transformer | 232 | 525 | 60 | 0 | 817 |
| 3 | UART | 528 | 360 | 4 | 6 | 898 |
| 4 | Cordic polar2rect | 739 | 688 | 175 | 0 | 1602 |
| 5 | Cordic rect2polar | 1021 | 1051 | 259 | 0 | 2331 |
| 6 | OpenFPU | 1591 | 760 | 115 | 0 | 2466 |
| 7 | LXP32 processor | 1999 | 1017 | 165 | 144 | 3325 |
| 8 | M1 core | 2724 | 1340 | 220 | 0 | 4284 |
| 9 | Reed Solomon decoder | 3700 | 2869 | 123 | 62 | 6754 |
| 10 | AES | 3536 | 3968 | 0 | 172 | 7676 |
| 11 | Canny edge detector | 5262 | 4706 | 639 | 1981 | 12588 |
## VI Conclusions and Future work
This paper discussed an automated tool-chain to detect word-level operations and datapath components in flattened LUT-level netlists extracted from bitstreams and express the result using a high-level model. Carry-chain analysis and k-cut analysis are used for identifying combinational modules. Grouping techniques and existing graph isomorphism techniques are used to identify sequential components. The paper also highlights the importance of analyzing carry-chains for reverse engineering FPGAs. We intend to extend the carry-chain analysis to identify multipliers, dividers and counters. Identification of modular boundary of ALUs in real world designs is another problem that requires a robust solution.
## Acknowledgement
This work was supported in part by the National Science Foundation under IUCRC-1916762, and CHEST (Center for Hardware Embedded System Security and Trust) industry funding.
|
2308.07568 | Symmetry breaking of extremals for the high order
Caffarelli-Kohn-Nirenberg type inequalities | In this paper we give the first result about the precise symmetry and
symmetry breaking regions of extremal functions for weighted second-order
inequalities. Firstly, based on the work of C.-S. Lin [Comm. Partial
Differential Equations, 1986], a new second-order Caffarelli-Kohn-Nirenberg
type inequality will be established, i.e.,
\begin{equation*}
\int_{\mathbb{R}^N}|x|^{-\beta}|\mathrm{div} (|x|^{\alpha}\nabla u)|^2
\mathrm{d}x
\geq \mathcal{S}\left(\int_{\mathbb{R}^N}
|x|^{\beta}|u|^{p^*_{\alpha,\beta}}
\mathrm{d}x\right)^{\frac{2}{p^*_{\alpha,\beta}}},\quad \mbox{for all}\ u\in
C^\infty_0(\mathbb{R}^N),
\end{equation*}
for some constant $\mathcal{S}=\mathcal{S}(N,\alpha,\beta)>0$, where
\begin{align*}
N\geq 5,\quad \alpha>2-N,\quad \alpha-2<\beta\leq \frac{N}{N-2}\alpha,\quad
p^*_{\alpha,\beta}=\frac{2(N+\beta)}{N-4+2\alpha-\beta}.
\end{align*}
We obtain a symmetry breaking conclusion: when $\alpha>0$ and
$\beta_{\mathrm{FS}}(\alpha)<\beta< \frac{N}{N-2}\alpha$ where
$\beta_{\mathrm{FS}}(\alpha):=
-N+\sqrt{N^2+\alpha^2+2(N-2)\alpha}$, then the extremal function for the best
constant $\mathcal{S}$, if it exists, is nonradial. Furthermore, we give a
symmetry result when $\beta=\frac{N}{N-2}\alpha$ and $2-N<\alpha<0$... | Shengbing Deng, Xingliang Tian | 2023-08-15T04:39:53Z | http://arxiv.org/abs/2308.07568v3 | # Symmetry breaking of extremals for the high order Caffarelli-Kohn-Nirenberg type inequalities
###### Abstract.
Based on the work of Lin [21], a new second-order Caffarelli-Kohn-Nirenberg type inequality will be established, namely
\[\int_{\mathbb{R}^{N}}|x|^{-\beta}|\mathrm{div}(|x|^{\alpha}\nabla u )|^{2}\mathrm{d}x\geq\mathcal{S}\left(\int_{\mathbb{R}^{N}}|x|^{\beta}|u|^{p_{ \alpha,\beta}^{*}}\mathrm{d}x\right)^{\frac{2}{p_{\alpha,\beta}^{*}}},\quad \text{for all}\quad u\in C_{0}^{\infty}(\mathbb{R}^{N}),\]
for some constant \(\mathcal{S}>0\), where
\[N\geq 5,\quad\alpha>2-N,\quad\alpha-2<\beta\leq\frac{N}{N-2}\alpha,\quad p_{ \alpha,\beta}^{*}=\frac{2(N+\beta)}{N-4+2\alpha-\beta}.\]
We obtain a symmetry breaking conclusion: when \(\alpha>0\) and \(\beta_{\mathrm{FS}}(\alpha)<\beta<\frac{N}{N-2}\alpha\) where \(\beta_{\mathrm{FS}}(\alpha):=-N+\sqrt{N^{2}+\alpha^{2}+2(N-2)\alpha}\), then the extremal function for the best constant \(\mathcal{S}\), if it exists, is nonradial. This extends the works of Felli and Schneider [18], Ao, DelaTorre and Gonzalez [1] to the second-order case.
Key words and phrases:Caffarelli-Kohn-Nirenberg inequalities; Weighted fourth-order equation; Non-degeneracy; Symmetry breaking.
Felli and Schneider [18] proved the extremal function is non-radial by restricting the problem to the radial space and classifying the solutions of the linearized problem. Finally, in a celebrated paper, Dolbeault, Esteban and Loss [14] proved, by using the so-called _carre du champ_ method, the optimal rigidity result that when \(a<0\) and \(b_{\mathrm{FS}}(a)\leq b<a+1\), the extremal function is symmetric, and we refer to [13] for an overall review about this method. See the previous results shown in Figure 1. For the general case \(1<p<N\), we refer to [3, 7, 20, 20]; however, the optimal symmetry region is not known yet and it is conjectured that \(\frac{(1+a-b)(a_{c}-a)}{(a_{c}-a+b)}\leq\sqrt{\frac{N-1}{N/(1+a-b)-1}}\). We also refer to [1] for the fractional-order (CKN) inequality.
In 1986, C.-S. Lin [21] extended the (CKN) inequality of [4] to the higher-order case; here we only mention the case without the interpolation term:
**Theorem A**.: _[_21_]_ _There exists a constant \(C>0\) such that_
\[\||x|^{-a}D^{m}u\|_{L^{p}}\geq C\||x|^{-b}D^{j}u\|_{L^{r}},\quad\text{for all}\quad u\in C_{0}^{\infty}(\mathbb{R}^{N}), \tag{1.3}\]
_where \(j\geq 0\), \(m>0\) are integers, and_
\[\frac{1}{p}-\frac{a}{N}>0,\quad\frac{1}{r}-\frac{b}{N}>0,\quad a\leq b\leq a+m -j,\quad\frac{1}{r}-\frac{b+j}{N}=\frac{1}{p}-\frac{a+m}{N}.\]
_Here_
\[D^{s}u:=\begin{cases}\nabla(\Delta^{\frac{s-1}{2}}u),\quad\text{if $s$ is odd};\\ \Delta^{\frac{s}{2}}u,\quad\text{if $s$ is even}.\end{cases}\]
_In particular, when \(j=0\), \(m=2\) and \(p=2\), it holds_
\[\int_{\mathbb{R}^{N}}|x|^{-2a}|\Delta u|^{2}\mathrm{d}x\geq C\left(\int_{ \mathbb{R}^{N}}|x|^{-rb}|u|^{r}\mathrm{d}x\right)^{\frac{2}{r}},\quad\text{ for all}\quad u\in C_{0}^{\infty}(\mathbb{R}^{N}), \tag{1.4}\]
_where \(-\infty<a<\frac{N-4}{2}\), \(a\leq b\leq a+2\), \(r=\frac{2N}{N-2(2+a-b)}\)._
Figure 1. For the first order case with \(p=2\). The _Felli-Schneider region_, or symmetry breaking region, appears in dark grey and is defined by \(a<0\) and \(a<b<b_{\mathrm{FS}}(a)\). And symmetry holds in the brown region defined by \(a<0\) and \(b_{\mathrm{FS}}(a)\leq b<a+1\), also \(0\leq a<a_{c}\) and \(a\leq b<a+1\).
Therefore, as in the first-order case, it is natural to ask whether the best constant can be achieved. Moreover, if it is achieved, are the extremal functions radially symmetric? There are some partial results about these problems for the second-order case (1.4): Szulkin and Waliullah [23] proved that when \(a=b>0\) the best constant is achieved; furthermore, Caldiroli and Musina [8, Theorem A.2] proved that when \(a<b<a+2\) the best constant is achieved. Caldiroli and Cora [5] obtained a partial symmetry breaking result when a parameter is sufficiently large; we refer to [6] for the case of cones. Dong [15] obtained the existence of extremal functions for the higher-order (CKN) inequality and found the sharp constants under some suitable assumptions. As mentioned previously, there are no optimal results about the symmetry or symmetry breaking phenomenon.
Recently, in [12] we have considered the following second-order (CKN) type inequality:
\[\int_{\mathbb{R}^{N}}|x|^{\alpha}|\Delta u|^{2}dx\geq S^{rad}(N,\alpha)\left( \int_{\mathbb{R}^{N}}|x|^{-\alpha}|u|^{p^{*}_{\alpha}}dx\right)^{\frac{2}{p^{ *}_{\alpha}}},\quad u\in C_{0}^{\infty}(\mathbb{R}^{N})\text{ and }u\text{ is radial},\]
where \(N\geq 3\), \(4-N<\alpha<2\), \(p^{*}_{\alpha}=\frac{2(N-\alpha)}{N-4+\alpha}\). As in [18] which deals with the first-order (CKN) inequality, we restrict it in radial space, then consider the best constant \(S^{rad}(N,\alpha)\) and its minimizer \(V\) by using the change of variable \(v(s)=u(r)\) where \(r=s^{\frac{2}{2-\alpha}}\) which transfers it into the standard second-order Sobolev inequality
\[\int_{0}^{\infty}\left[v^{\prime\prime}(s)+\frac{M-1}{s}v^{\prime}(s)\right] ^{2}s^{M-1}ds\geq C(M)\left(\int_{0}^{\infty}|v(s)|^{\frac{2M}{M-4}}s^{M-1}ds \right)^{\frac{M-4}{M}},\]
where \(M=\frac{2N-2\alpha}{2-\alpha}>4\), and also classify the solutions of related linearized problem:
\[\Delta(|x|^{\alpha}\Delta v)=(p^{*}_{\alpha}-1)|x|^{-\alpha}V^{p^{*}_{\alpha}- 2}v\quad\text{in}\quad\mathbb{R}^{N},\quad v\in C_{0}^{\infty}(\mathbb{R}^{N}).\]
We have showed that if \(\alpha\) is a negative even integer, there exist new solutions which "replace" the ones due to the translations invariance. Furthermore, in [11] we have also considered another case, by using the change of variable \(v(s)=r^{2-N}u(r)\) where \(r=s^{\frac{2}{2-\alpha}}\),
\[\int_{\mathbb{R}^{N}}|x|^{\alpha}|\Delta u|^{2}dx\geq S^{rad}(N,\alpha)\left( \int_{\mathbb{R}^{N}}|x|^{l}|u|^{q^{*}_{\alpha}}dx\right)^{\frac{2}{q^{*}_{ \alpha}}},\quad u\in C_{0}^{\infty}(\mathbb{R}^{N})\text{ and }u\text{ is radial},\]
where \(N\geq 3\), \(2<\alpha<N\), \(l=\frac{4(\alpha-2)(N-2)}{N-\alpha}-\alpha\) and \(q^{*}_{\alpha}=\frac{2(N+l)}{N-4+\alpha}\). However, it seems difficult to obtain a symmetry breaking result of the Felli-Schneider type [18].
### Problem setup and main results
In the present paper, we do not directly deal with the high order (CKN) inequality (1.4) established by Lin [21], but we will establish a new second-order (CKN) type inequality, namely
\[\int_{\mathbb{R}^{N}}|x|^{-\beta}|\text{div}(|x|^{\alpha}\nabla u)|^{2} \mathrm{d}x\geq\mathcal{S}\left(\int_{\mathbb{R}^{N}}|x|^{\beta}|u|^{p^{*}_{ \alpha,\beta}}\mathrm{d}x\right)^{\frac{2}{p^{*}_{\alpha,\beta}}},\quad\text {for all}\quad u\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N}), \tag{1.5}\]
for some constant \(\mathcal{S}>0\), where
\[N\geq 5,\quad\alpha>2-N,\quad\alpha-2<\beta\leq\frac{N}{N-2}\alpha,\quad p^{*}_{ \alpha,\beta}=\frac{2(N+\beta)}{N-4+2\alpha-\beta}. \tag{1.6}\]
Here we denote \(\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\) as the completion of \(C^{\infty}_{0}(\mathbb{R}^{N})\) with the norm
\[\|u\|_{\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})}=\left(\int_{\mathbb{R} ^{N}}|x|^{-\beta}|\mathrm{div}(|x|^{\alpha}\nabla u)|^{2}\mathrm{d}x\right)^{ \frac{1}{2}}. \tag{1.7}\]
Note that \(N\geq 5\), \(\alpha>2-N\) and \(\beta\leq\frac{N}{N-2}\alpha\) imply \(\beta<N-4+2\alpha\), and then \(2<p^{*}_{\alpha,\beta}\leq\frac{2N}{N-4}\) since \(\alpha-2<\beta\).
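Indeed, under the assumptions in (1.6),

\[N-4+2\alpha-\beta\geq N-4+2\alpha-\frac{N}{N-2}\alpha=\frac{(N-4)(N-2+\alpha)}{N-2}>0,\]

since \(\beta\leq\frac{N}{N-2}\alpha\) and \(\alpha>2-N\), while

\[p^{*}_{\alpha,\beta}-2=\frac{4(2+\beta-\alpha)}{N-4+2\alpha-\beta}>0\quad\text{and}\quad\frac{2N}{N-4}-p^{*}_{\alpha,\beta}=\frac{4\left[N\alpha-(N-2)\beta\right]}{(N-4)(N-4+2\alpha-\beta)}\geq 0,\]

by \(\beta>\alpha-2\) and \(\beta\leq\frac{N}{N-2}\alpha\) respectively.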
Taking the same arguments as those in [19, Section 2] (which deals with a bounded domain, but the argument also holds in the whole space), we deduce that for all \(u\in C^{\infty}_{0}(\mathbb{R}^{N})\) the functional
\[\int_{\mathbb{R}^{N}}|x|^{-\beta}|\mathrm{div}(|x|^{\alpha}\nabla u)|^{2} \mathrm{d}x\]
is equivalent to
\[\int_{\mathbb{R}^{N}}|x|^{2\alpha-\beta}|\Delta u|^{2}\mathrm{d}x.\]
Note that the assumption (1.6) implies
\[\frac{2\alpha-\beta}{2}>\frac{4-N}{2},\quad\text{and}\quad\frac{2\alpha-\beta }{2}-2<\frac{\beta}{p^{*}_{\alpha,\beta}}\leq\frac{2\alpha-\beta}{2},\]
then in (1.4), let \(a=-\frac{2\alpha-\beta}{2}\) and \(b=-\frac{\beta}{p^{*}_{\alpha,\beta}}\), we directly obtain the new type second-order (CKN) inequality (1.5). Now, let us rewrite (1.5) as
\[\mathcal{S}=\inf_{u\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\setminus \{0\}}\frac{\int_{\mathbb{R}^{N}}|x|^{-\beta}|\mathrm{div}(|x|^{\alpha}\nabla u )|^{2}\mathrm{d}x}{\big{(}\int_{\mathbb{R}^{N}}|x|^{\beta}|u|^{p^{*}_{\alpha, \beta}}\mathrm{d}x\big{)}^{\frac{2}{p^{*}_{\alpha,\beta}}}}>0. \tag{1.8}\]
We are interested in whether the extremals (if they exist) of the best constant \(\mathcal{S}\) are radially symmetric.
Following the work of Felli and Schneider [18], firstly let us consider the radial case.
**Theorem 1.1**.: _Assume that (1.6) holds. Let us define the best constant in the radial class to be_
\[\mathcal{S}_{r}=\inf_{u\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N}) \setminus\{0\},\ u(x)=u(|x|)}\frac{\int_{\mathbb{R}^{N}}|x|^{-\beta}|\mathrm{ div}(|x|^{\alpha}\nabla u)|^{2}\mathrm{d}x}{\big{(}\int_{\mathbb{R}^{N}}|x|^{ \beta}|u|^{p^{*}_{\alpha,\beta}}\mathrm{d}x\big{)}^{\frac{2}{p^{*}_{\alpha, \beta}}}}. \tag{1.9}\]
_Then the explicit form of the best constant \(\mathcal{S}_{r}\) is_
\[\mathcal{S}_{r}=\left(\frac{2+\beta-\alpha}{2}\right)^{4-\frac{2(2+\beta- \alpha)}{N+\beta}}\left(\frac{2\pi^{\frac{N}{2}}}{\Gamma(\frac{N}{2})}\right)^ {\frac{2(2+\beta-\alpha)}{N+\beta}}C\left(\frac{2(N+\beta)}{2+\beta-\alpha} \right),\]
_where \(C(M)=(M-4)(M-2)M(M+2)\left[\Gamma^{2}(\frac{M}{2})/(2\Gamma(M))\right]^{\frac{ 4}{M}}\) and \(\Gamma\) is the Gamma function. Moreover the extremal radial functions which achieve \(\mathcal{S}_{r}\) in (1.9) are given by_
\[V_{\lambda}(x)=\frac{A\lambda^{\frac{N-4+2\alpha-\beta}{2}}}{(1+\lambda^{2+ \beta-\alpha}|x|^{2+\beta-\alpha})^{\frac{N-4+2\alpha-\beta}{2+\beta-\alpha}}}\]
_for any \(A\in\mathbb{R}\) and \(\lambda>0\)._
The Euler-Lagrange equation of (1.5), up to scaling, is given by
\[\operatorname{div}(|x|^{\alpha}\nabla(|x|^{-\beta}\mathrm{div}(|x|^{\alpha} \nabla u)))=|x|^{\beta}|u|^{p^{*}_{\alpha,\beta}-2}u\quad\text{in}\quad\mathbb{ R}^{N},\quad u\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N}). \tag{1.10}\]
Note that (1.10) is equivalent to the system
\[\left\{\begin{aligned} -\mathrm{div}(|x|^{\alpha}\nabla u)& =|x|^{\beta}v\quad\text{in}\quad\mathbb{R}^{N},\\ -\mathrm{div}(|x|^{\alpha}\nabla v)&=|x|^{\beta}|u |^{p^{*}_{\alpha,\beta}-2}u\quad\text{in}\quad\mathbb{R}^{N},\end{aligned}\right.\]
which is a special weighted Lane-Emden system. Therefore, as a direct consequence of Theorem 1.1, we obtain
**Corollary 1.2**.: _Assume that (1.6) holds. Then problem (1.10) has a unique (up to scalings) positive radial solution of the form \(U_{\lambda}(x)=\lambda^{\frac{N-4+2\alpha-\beta}{2}}U(x)\) with \(\lambda>0\), where_
\[U(x)=\frac{C_{N,\alpha,\beta}}{(1+|x|^{2+\beta-\alpha})^{\frac{N-4+2\alpha- \beta}{2+\beta-\alpha}}}, \tag{1.11}\]
_and_
\[C_{N,\alpha,\beta}=[(N-4+2\alpha-\beta)(N-2+\alpha)(N+\beta)(N+2-\alpha+2 \beta)]^{\frac{N-4+2\alpha-\beta}{4(2+\beta-\alpha)}}\,.\]
Next we consider the linearized problem related to (1.10) at the function \(U\). This leads to studying the problem
\[\operatorname{div}(|x|^{\alpha}\nabla(|x|^{-\beta}\mathrm{div}(|x|^{\alpha} \nabla v)))=(p^{*}_{\alpha,\beta}-1)|x|^{\beta}U^{p^{*}_{\alpha,\beta}-2}v \quad\text{in}\quad\mathbb{R}^{N},\quad v\in\mathcal{D}^{2,2}_{\alpha,\beta}( \mathbb{R}^{N}). \tag{1.12}\]
Next theorem characterizes all the solutions to (1.12).
**Theorem 1.3**.: _Assume that (1.6) holds. If_
\[\beta=-N+\sqrt{N^{2}+\alpha^{2}+2(N-2)\alpha+4[k(N-2+k)-(N-1)]}, \tag{1.13}\]
_for some \(k\in\mathbb{N}^{+}\), then the space of solutions of (1.12) has dimension \(1+\frac{(N+2k-2)(N+k-3)!}{(N-2)!k!}\) and is spanned by_
\[Z_{0}(x)=\frac{1-|x|^{2+\beta-\alpha}}{(1+|x|^{2+\beta-\alpha})^{\frac{N-2+ \alpha}{2+\beta-\alpha}}},\quad Z_{k,i}(x)=\frac{|x|^{\frac{2+\beta-\alpha}{2 }}\Psi_{k,i}(x)}{(1+|x|^{2+\beta-\alpha})^{\frac{N-2+\alpha}{2+\beta-\alpha}}}, \tag{1.14}\]
_where \(\{\Psi_{k,i}\}\), \(i=1,\ldots,\frac{(N+2k-2)(N+k-3)!}{(N-2)!k!}\), form a basis of \(\mathbb{Y}_{k}(\mathbb{R}^{N})\), the space of all homogeneous harmonic polynomials of degree \(k\) in \(\mathbb{R}^{N}\). Otherwise, the space of solutions of (1.12) has dimension one and is spanned by \(Z_{0}\thicksim\frac{\partial U_{\lambda}}{\partial\lambda}|_{\lambda=1}\), and in this case we say the solution \(U\) of equation (1.10) is non-degenerate._
Note that when (1.13) holds for \(k=1\), we have
\[\beta=\beta_{\mathrm{FS}}(\alpha):=-N+\sqrt{N^{2}+\alpha^{2}+2(N-2)\alpha}. \tag{1.15}\]
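As \(\alpha\to+\infty\), expanding the square root gives

\[\beta_{\mathrm{FS}}(\alpha)=-N+\alpha\sqrt{1+\frac{2(N-2)}{\alpha}+\frac{N^{2}}{\alpha^{2}}}=-N+\alpha+(N-2)+O\!\left(\frac{1}{\alpha}\right)=\alpha-2+O\!\left(\frac{1}{\alpha}\right).\]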
Note that \(\lim\limits_{\alpha\to+\infty}\frac{\beta_{\mathrm{FS}}(\alpha)}{\alpha-2}=1\). Now, we are ready to give the main result of this paper.
**Theorem 1.4**.: _Let \(N\geq 5\), \(\alpha>0\) and \(\beta_{\mathrm{FS}}(\alpha)<\beta<\frac{N}{N-2}\alpha\). Then the extremal function for the best constant \(\mathcal{S}\) which is defined in (1.8), if it exists, is nonradial._
Comparing our proof to the ones in [18] and [14] we give the following conjecture:
**Conjecture:**_the extremal functions of \(\mathcal{S}\) are radially symmetric when either \(2-N<\alpha\leq 0\) and \(\alpha-2<\beta\leq\frac{N}{N-2}\alpha\), or \(\alpha>0\) and \(\alpha-2<\beta\leq\beta_{\mathrm{FS}}(\alpha)\)._ See Figure 2.
The paper is organized as follows: In Section 2 we deduce the optimizers of \(\mathcal{S}_{r}\) and give the uniqueness of positive radial solutions of Euler-Lagrange equation (1.10). Section 3 is devoted to characterizing all solutions to the linearized problem (1.12), then by using the result we will give the proof of symmetry breaking conclusion of Theorem 1.4.
## 2. **Optimizers of radial case**
In this section, we first use a suitable transform, namely the change of variable \(r\mapsto r^{\frac{2}{2+\beta-\alpha}}\) which relates the problem to the Sobolev inequality, to investigate the sharp constant and the optimizers of the best constant \(\mathcal{S}_{r}\) in (1.9).
### Proof of Theorem 1.1
Let \(u\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\) be radial. Making the changes that \(v(s)=u(r)\), \(|x|=r=s^{t}\) where \(t>0\) will be given later, then we have
\[\int_{\mathbb{R}^{N}}|x|^{-\beta}|\mathrm{div}(|x|^{\alpha}\nabla u )|^{2}\mathrm{d}x= \omega_{N-1}\int_{0}^{\infty}\left[u^{\prime\prime}(r)+\frac{N+ \alpha-1}{r}u^{\prime}(r)\right]^{2}r^{N+2\alpha-\beta-1}\mathrm{d}r\] \[= \omega_{N-1}t^{-3}\int_{0}^{\infty}\left[v^{\prime\prime}(s)+ \frac{t(N-2+\alpha)+1}{s}v^{\prime}(s)\right]^{2}s^{t(N-4+2\alpha-\beta)+3} \mathrm{d}s,\]
Figure 2. The new second-order case. The _Felli-Schneider region_, or symmetry breaking region, appears in dark grey and is defined by \(\alpha>0\) and \(\beta_{\mathrm{FS}}(\alpha)<\beta<\frac{N}{N-2}\alpha\). We conjecture that symmetry holds in the brown region defined by \(2-N<\alpha\leq 0\) and \(\alpha-2<\beta\leq\frac{N}{N-2}\alpha\), and also by \(\alpha>0\) and \(\alpha-2<\beta\leq\beta_{\mathrm{FS}}(\alpha)\).

where \(\omega_{N-1}\) is the surface area of the unit sphere in \(\mathbb{R}^{N}\). In order to make use of the Sobolev inequality, we need \(t(N-2+\alpha)+1=t(N-4+2\alpha-\beta)+3\), which requires
\[t=\frac{2}{2+\beta-\alpha}. \tag{2.1}\]
Now, we set
\[M:=t(N-2+\alpha)+2=\frac{2(N+\beta)}{2+\beta-\alpha}>4, \tag{2.2}\]
which implies
\[\int_{0}^{\infty}\left[u^{\prime\prime}(r)+\frac{N+\alpha-1}{r}u^ {\prime}(r)\right]^{2}r^{N+2\alpha-\beta-1}\mathrm{d}r= t^{-3}\int_{0}^{\infty}\left[v^{\prime\prime}(s)+\frac{M-1}{s}v^{ \prime}(s)\right]^{2}s^{M-1}\mathrm{d}s.\]
When \(M\) is an integer we can use the classical Sobolev inequality (see [24]) and we get
\[\int_{0}^{\infty}\left[v^{\prime\prime}(s)+\frac{M-1}{s}v^{ \prime}(s)\right]^{2}s^{M-1}\mathrm{d}s\geq C(M)\left(\int_{0}^{\infty}|v(s)|^{\frac{2M}{M-4}}s^{M-1} \mathrm{d}s\right)^{\frac{M-4}{M}}\] \[= t^{\frac{4-M}{M}}C(M)\left(\int_{0}^{\infty}|u(r)|^{\frac{2M}{M-4 }}r^{\frac{M}{t}-1}\mathrm{d}r\right)^{\frac{M-4}{M}},\]
where \(C(M)=\pi^{2}(M+2)M(M-2)(M-4)\left(\Gamma(M/2)/\Gamma(M)\right)^{\frac{4}{M}} \left(2\pi^{M/2}/\Gamma(M/2)\right)^{-\frac{4}{M}}\) (see [24, (1.4)]). Moreover, since we restrict this problem in radial space, even if \(M\) is not an integer we readily see that the above inequality remains true, see [16].
From (2.2), we deduce that
\[\frac{2M}{M-4}=\frac{2(N+\beta)}{N-4+2\alpha-\beta}=p_{\alpha,\beta}^{*}, \quad\frac{M}{t}=N+\beta.\]
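Indeed, substituting \(M\) from (2.2) and \(t\) from (2.1),

\[\frac{2M}{M-4}=\frac{\frac{4(N+\beta)}{2+\beta-\alpha}}{\frac{2(N+\beta)-4(2+\beta-\alpha)}{2+\beta-\alpha}}=\frac{4(N+\beta)}{2(N-4+2\alpha-\beta)}=\frac{2(N+\beta)}{N-4+2\alpha-\beta},\qquad\frac{M}{t}=\frac{2(N+\beta)}{2+\beta-\alpha}\cdot\frac{2+\beta-\alpha}{2}=N+\beta.\]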
So we get
\[\int_{0}^{\infty}\left[u^{\prime\prime}(r)+\frac{N+\alpha-1}{r}u^ {\prime}(r)\right]^{2}r^{N+2\alpha-\beta-1}\mathrm{d}r\geq t^{\frac{4-M}{M}-3}C(M)\left(\int_{0}^{\infty}|u(r)|^{p_{\alpha, \beta}^{*}}r^{N+\beta-1}\mathrm{d}r\right)^{\frac{2}{p_{\alpha,\beta}^{*}}},\]
which proves the claim with
\[\mathcal{S}_{r}= t^{\frac{4-M}{M}-3}\omega_{N-1}^{1-\frac{2}{p_{\alpha,\beta}^{*} }}C(M)=\left(\frac{2+\beta-\alpha}{2}\right)^{4-\frac{2(2+\beta-\alpha)}{N+ \beta}}\left(\frac{2\pi^{\frac{N}{2}}}{\Gamma(\frac{N}{2})}\right)^{\frac{2(2 +\beta-\alpha)}{N+\beta}}C\left(\frac{2(N+\beta)}{2+\beta-\alpha}\right).\]
Moreover, from the previous inequalities, we also get that the extremal functions \(v_{\nu}\) are exactly those achieving equality:
\[\int_{0}^{\infty}\left[v_{\nu}^{\prime\prime}(s)+\frac{M-1}{s}v_{\nu}^{\prime }(s)\right]^{2}s^{M-1}\mathrm{d}s= C(M)\left(\int_{0}^{\infty}|v_{\nu}(s)|^{\frac{2M}{M-4}}s^{M- 1}\mathrm{d}s\right)^{\frac{M-4}{M}}.\]
It is well known that
\[v_{\nu}(s)=A\nu^{\frac{M-4}{2}}(1+\nu^{2}s^{2})^{-\frac{M-4}{2}}\]
for all \(A\in\mathbb{R}\) and \(\nu\in\mathbb{R}^{+}\), see [16, Theorem 2.1]. Setting \(\nu=\lambda^{1/t}\) and \(s=|x|^{1/t}\), then we obtain that all the radial extremal functions of \(\mathcal{S}_{r}\) have the form
\[V_{\lambda}(x)=\frac{A\lambda^{\frac{N-4+2\alpha-\beta}{2}}}{(1+\lambda^{2+ \beta-\alpha}|x|^{2+\beta-\alpha})^{\frac{N-4+2\alpha-\beta}{2+\beta-\alpha}}},\]
for all \(A\in\mathbb{R}\backslash\{0\}\) and \(\lambda>0\). The proof of Theorem 1.1 is now completed.
### Proof of Corollary 1.2
Let \(u\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\) be a positive radial solution of equation (1.10). Set
\[-\mathrm{div}(|x|^{\alpha}\nabla u)=|x|^{\beta}v,\]
and \(r=|x|\), then equation (1.10) is equivalent to the following system:
\[\left\{\begin{aligned} & u^{\prime\prime}(r)+\frac{N-1+\alpha}{r}u^{ \prime}(r)+\frac{v(r)}{r^{\alpha-\beta}}=0\quad\text{in}\quad r\in(0,\infty), \\ & v^{\prime\prime}(r)+\frac{N-1+\alpha}{r}v^{\prime}(r)+r^{\beta} u^{p^{*}_{\alpha,\beta}-1}=0\quad\text{in}\quad r\in(0,\infty).\end{aligned}\right. \tag{2.3}\]
Moreover, set \(r=s^{t}\) where \(t=2/(2+\beta-\alpha)\) and let
\[X(s)=u(r),\quad Y(s)=t^{2}v(r),\]
that transforms (2.3) into the system
\[\left\{\begin{aligned} & X^{\prime\prime}(s)+\frac{M-1}{s}X^{ \prime}(s)+Y(s)=0\quad\text{in}\quad s\in(0,\infty),\\ & Y^{\prime\prime}(s)+\frac{M-1}{s}Y^{\prime}(s)+t^{4}X^{p^{*}_{ \alpha,\beta}-1}=0\quad\text{in}\quad s\in(0,\infty),\end{aligned}\right.\]
where \(M=\frac{2(N+\beta)}{2+\beta-\alpha}>4\), which is equivalent to
\[X^{\prime\prime\prime\prime}(s)+\frac{2(M-1)}{s}X^{\prime\prime\prime}(s)+ \frac{(M-1)(M-3)}{s^{2}}X^{\prime\prime}(s)-\frac{(M-1)(M-3)}{s^{3}}X^{\prime} (s)=t^{4}X^{\frac{M+4}{M-4}}, \tag{2.4}\]
in \(s\in(0,\infty)\), since \(p^{*}_{\alpha,\beta}=\frac{2M}{M-4}\). Then from [16, Lemma 2.2], we know that equation (2.4) has a unique (up to scalings) positive solution of the form
\[X(s)=\frac{C_{M,t}\nu^{\frac{M-4}{2}}}{(1+\nu^{2}s^{2})^{\frac{M-4}{2}}},\]
for some constant \(\nu>0\), where \(C_{M,t}=[t^{-4}(M-4)(M-2)M(M+2)]^{\frac{M-4}{8}}\). That is, equation (1.10) has a unique (up to scalings) radial positive solution of the form
\[u(x)=\frac{C_{N,\alpha,\beta}\lambda^{\frac{N-4+2\alpha-\beta}{2}}}{(1+\lambda ^{2+\beta-\alpha}|x|^{2+\beta-\alpha})^{\frac{N-4+2\alpha-\beta}{2+\beta- \alpha}}}\]
for some \(\lambda>0\), where
\[C_{N,\alpha,\beta}=[(N-4+2\alpha-\beta)(N-2+\alpha)(N+\beta)(N+2-\alpha+2 \beta)]^{\frac{N-4+2\alpha-\beta}{4(2+\beta-\alpha)}}\,.\]
Now, the proof of Corollary 1.2 is completed.
## 3. **Symmetry breaking phenomenon**
Firstly, by using the standard spherical decomposition and taking the change of variable \(v(s)=u(r)\) with \(r=s^{\frac{2}{2+\beta-\alpha}}\), we can characterize all solutions to the linearized problem (1.12).
### Proof of Theorem 1.3
We follow the same arguments as those in [22]. Let us decompose the fourth-order equation (1.12) into a system of two second-order equations. Set
\[-\mathrm{div}(|x|^{\alpha}\nabla v)=|x|^{\beta}w, \tag{3.1}\]
then problem (1.12) is equivalent to the following system:
\[\left\{\begin{aligned} -|x|^{\alpha}\Delta v-\alpha|x|^{\alpha-2}(x \cdot\nabla v)=|x|^{\beta}w\quad\text{in }\mathbb{R}^{N},\\ -|x|^{\alpha}\Delta w-\alpha|x|^{\alpha-2}(x\cdot\nabla w)= \frac{(p^{*}_{\alpha,\beta}-1)C^{p^{*}_{\alpha,\beta}-2}_{N,\alpha,\beta}}{(1 +|x|^{2+\beta-\alpha})^{4}}|x|^{\beta}v\quad\text{in }\mathbb{R}^{N},\end{aligned}\right. \tag{3.2}\]
in \(v\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\).
Firstly, we decompose \(v\) and \(w\) as follows:
\[v(r,\theta)=\sum_{k=0}^{\infty}\sum_{i=1}^{l_{k}}\phi_{k,i}(r)\Psi_{k,i}( \theta),\quad w(r,\theta)=\sum_{k=0}^{\infty}\sum_{i=1}^{l_{k}}\psi_{k,i}(r) \Psi_{k,i}(\theta), \tag{3.3}\]
where \(r=|x|\), \(\theta=x/|x|\in\mathbb{S}^{N-1}\), and
\[\phi_{k,i}(r)=\int_{\mathbb{S}^{N-1}}v(r,\theta)\Psi_{k,i}(\theta)\mathrm{d} \theta,\quad\psi_{k,i}(r)=\int_{\mathbb{S}^{N-1}}w(r,\theta)\Psi_{k,i}(\theta )\mathrm{d}\theta.\]
Here \(\Psi_{k,i}(\theta)\) denotes the \(k\)-th spherical harmonic, i.e., it satisfies
\[-\Delta_{\mathbb{S}^{N-1}}\Psi_{k,i}=\lambda_{k}\Psi_{k,i}, \tag{3.4}\]
where \(\Delta_{\mathbb{S}^{N-1}}\) is the Laplace-Beltrami operator on \(\mathbb{S}^{N-1}\) with the standard metric and \(\lambda_{k}\) is the \(k\)-th eigenvalue of \(-\Delta_{\mathbb{S}^{N-1}}\). It is well known that
\[\lambda_{k}=k(N-2+k),\quad k=0,1,2,\dots, \tag{3.5}\]
whose multiplicity is
\[l_{k}:=\frac{(N+2k-2)(N+k-3)!}{(N-2)!k!} \tag{3.6}\]
(note that \(l_{0}=1\)) and that
\[\mathrm{Ker}(\Delta_{\mathbb{S}^{N-1}}+\lambda_{k})=\mathbb{Y}_{k}(\mathbb{R} ^{N})|_{\mathbb{S}^{N-1}},\]
where \(\mathbb{Y}_{k}(\mathbb{R}^{N})\) is the space of all homogeneous harmonic polynomials of degree \(k\) in \(\mathbb{R}^{N}\). It is standard that \(\lambda_{0}=0\) and the corresponding eigenfunction of (3.4) is the constant function that is \(\Psi_{0,1}=c\in\mathbb{R}\setminus\{0\}\). The second eigenvalue \(\lambda_{1}=N-1\) and the corresponding eigenfunctions of (3.4) are \(\Psi_{1,i}=x_{i}/|x|\), \(i=1,\dots,N\).
It is known that
\[\Delta(\varphi_{k,i}(r)\Psi_{k,i}(\theta))= \Psi_{k,i}\left(\varphi^{\prime\prime}_{k,i}+\frac{N-1}{r}\varphi ^{\prime}_{k,i}\right)+\frac{\varphi_{k,i}}{r^{2}}\Delta_{\mathbb{S}^{N-1}} \Psi_{k,i}\]
\[= \Psi_{k,i}\left(\varphi_{k,i}^{\prime\prime}+\frac{N-1}{r}\varphi_{k,i}^{\prime}-\frac{\lambda_{k}}{r^{2}}\varphi_{k,i}\right). \tag{3.7}\]
Furthermore, it is easy to verify that
\[\frac{\partial(\varphi_{k,i}(r)\Psi_{k,i}(\theta))}{\partial x_{j}}=\varphi_{k,i}^{\prime}\frac{x_{j}}{r}\Psi_{k,i}+\varphi_{k,i}\frac{\partial\Psi_{k,i}}{ \partial\theta_{l}}\frac{\partial\theta_{l}}{\partial x_{j}},\quad\text{for all}\quad l=1,\dots,N-1,\]
and
\[\sum_{j=1}^{N}\frac{\partial\theta_{l}}{\partial x_{j}}x_{j}=0,\quad\text{for all}\quad l=1,\dots,N-1,\]
hence
\[x\cdot\nabla(\varphi_{k,i}(r)\Psi_{k,i}(\theta))=\sum_{j=1}^{N}x_{j}\frac{ \partial(\varphi_{k,i}(r)\Psi_{k,i}(\theta))}{\partial x_{j}}=\varphi_{k,i}^{ \prime}r\Psi_{k,i}. \tag{3.8}\]
Therefore, by standard regularity theory, putting together (3.7) and (3.8) into (3.2), the function \((v,w)\) is a solution of (3.2) if and only if \((\phi_{k,i},\psi_{k,i})\in\mathcal{C}\times\mathcal{C}\) is a classical solution of the system
\[\left\{\begin{aligned} &\phi_{k,i}^{\prime\prime}+\frac{N-1+ \alpha}{r}\phi_{k,i}^{\prime}-\frac{\lambda_{k}}{r^{2}}\phi_{k,i}+\frac{\psi_{ k,i}}{r^{\alpha-\beta}}=0\quad\text{in}\quad r\in(0,\infty),\\ &\psi_{k,i}^{\prime\prime}+\frac{N-1+\alpha}{r}\psi_{k,i}^{\prime }-\frac{\lambda_{k}}{r^{2}}\psi_{k,i}+\frac{(p_{\alpha,\beta}^{*}-1)C_{N, \alpha,\beta}^{P_{\alpha,\beta}^{*}-2}}{r^{\alpha-\beta}(1+r^{2+\beta-\alpha })^{4}}\phi_{k,i}=0\quad\text{in}\quad r\in(0,\infty),\\ &\phi_{k,i}^{\prime}(0)=\psi_{k,i}^{\prime}(0)=0\quad\text{if} \quad k=0,\quad\text{and}\quad\phi_{k,i}(0)=\psi_{k,i}(0)=0\quad\text{if} \quad k\geq 1,\end{aligned}\right. \tag{3.9}\]
for all \(i=1,\dots,l_{k}\), where \(\mathcal{C}\) is defined by
\[\mathcal{C}:=\left\{\omega\in C^{2}([0,\infty))|\int_{0}^{\infty}\left[\omega ^{\prime\prime}(r)+\frac{N+\alpha-1}{r}\omega^{\prime}(r)\right]^{2}r^{N+2 \alpha-\beta-1}\mathrm{d}r<\infty\right\}.\]
Take the same change of variable as in the proof of Theorem 1.1, \(|x|=r=s^{t}\) where \(t=2/(2+\beta-\alpha)\) and let
\[X_{k,i}(s)=\phi_{k,i}(r),\quad Y_{k,i}(s)=t^{2}\psi_{k,i}(r), \tag{3.10}\]
that transforms (3.9) into the system
\[\left\{\begin{aligned} & X_{k,i}^{\prime\prime}+\frac{M-1}{s}X_{k,i}^{ \prime}-\frac{t^{2}\lambda_{k}}{s^{2}}X_{k,i}+Y_{k,i}=0\quad\text{in}\quad s\in( 0,\infty),\\ & Y_{k,i}^{\prime\prime}+\frac{M-1}{s}Y_{k,i}^{\prime}-\frac{t^{2} \lambda_{k}}{s^{2}}Y_{k,i}+\frac{(M+4)(M-2)M(M+2)}{(1+s^{2})^{4}}X_{k,i}=0 \quad\text{in}\quad s\in(0,\infty),\\ & X_{k,i}^{\prime}(0)=Y_{k,i}^{\prime}(0)=0\quad\text{if}\quad k =0,\quad\text{and}\quad X_{k,i}(0)=Y_{k,i}(0)=0\quad\text{if}\quad k\geq 1,\end{aligned}\right. \tag{3.11}\]
for all \(i=1,\dots,l_{k}\), in \((X_{k},Y_{k})\in\widetilde{\mathcal{C}}\times\widetilde{\mathcal{C}}\), where
\[\widetilde{\mathcal{C}}:=\left\{\omega\in C^{2}([0,\infty))|\int_{0}^{\infty} \left[\omega^{\prime\prime}(s)+\frac{M-1}{s}\omega^{\prime}(s)\right]^{2}s^{ M-1}\mathrm{d}s<\infty\right\},\]
and
\[M=\frac{2(N+\beta)}{2+\beta-\alpha}>4.\]
Here we have used the fact
\[t^{4}(p^{*}_{\alpha,\beta}-1)C^{p^{*}_{\alpha,\beta}-2}_{N,\alpha,\beta}=[(M-4)( M-2)M(M+2)]\left[\frac{2M}{M-4}-1\right]=(M+4)(M-2)M(M+2).\]
Fix \(M\) let us now consider the following eigenvalue problem
\[\left\{\begin{aligned} & X^{\prime\prime}+\frac{M-1}{s}X^{\prime}- \frac{\mu}{s^{2}}X+Y=0\quad\text{in}\quad s\in(0,\infty),\\ & Y^{\prime\prime}+\frac{M-1}{s}Y^{\prime}-\frac{\mu}{s^{2}}Y+ \frac{(M+4)(M-2)M(M+2)}{(1+s^{2})^{4}}X=0\quad\text{in}\quad s\in(0,\infty). \end{aligned}\right. \tag{3.12}\]
When \(M\) is an integer we can study (3.12) as the linearized operator of the equation
\[\Delta^{2}z=(M-4)(M-2)M(M+2)z^{\frac{M+4}{M-4}},\quad z>0\quad\text{in}\quad \mathbb{R}^{M},\]
around the standard solution \(z(x)=(1+|x|^{2})^{-\frac{M-4}{2}}\) (note that we always have \(M>4\)). In this case, as in [17] (see also [22]), we have that
\[\mu_{0}=0;\quad\mu_{1}=M-1\quad\text{and}\quad X_{0}(s)=\frac{1-s^{2}}{(1+s^{2 })^{\frac{M-2}{2}}};\quad X_{1}(s)=\frac{s}{(1+s^{2})^{\frac{M-2}{2}}}. \tag{3.13}\]
Moreover, even if \(M\) is not an integer we readily see that (3.13) remains true since (3.12) is an ODE system and the linearized operator admits at most one positive eigenvalue, see [2, Proposition 2.4]. Therefore, we conclude that (3.11) has nontrivial solutions if and only if
\[t^{2}\lambda_{k}\in\{0,M-1\},\]
where \(\lambda_{k}=k(N-2+k)\), \(k\in\mathbb{N}\). If \(t^{2}\lambda_{k}=0\) then \(k=0\). Moreover, if \(t^{2}\lambda_{k}=M-1\), then
\[\left(\frac{2+\beta-\alpha}{2}\right)^{2}\left(\frac{2(N+\beta)}{2+\beta- \alpha}-1\right)=k(N-2+k),\]
that is,
\[\beta=-N+\sqrt{N^{2}+\alpha^{2}+2(N-2)\alpha+4[k(N-2+k)-(N-1)]}, \tag{3.14}\]
for some \(k\in\mathbb{N}^{+}\). Turning back to (3.9) we obtain the solutions that, if (3.14) holds for some \(k\in\mathbb{N}^{+}\), then
\[\phi_{0}(r)=\frac{1-r^{2+\beta-\alpha}}{(1+r^{2+\beta-\alpha})^{\frac{N-2+ \alpha}{2+\beta-\alpha}}},\quad\phi_{k}(r)=\frac{r^{\frac{2+\beta-\alpha}{2}}} {(1+r^{2+\beta-\alpha})^{\frac{N-2+\alpha}{2+\beta-\alpha}}},\]
otherwise, if (3.14) does not hold for any \(k\in\mathbb{N}^{+}\), then
\[\phi_{0}(r)=\frac{1-r^{2+\beta-\alpha}}{(1+r^{2+\beta-\alpha})^{\frac{N-2+ \alpha}{2+\beta-\alpha}}}.\]
That is, if (3.14) holds for some \(k\in\mathbb{N}^{+}\), then the space of solutions of (3.2) has dimension \(1+\frac{(N+2k-2)(N+k-3)!}{(N-2)!k!}\) and is spanned by
\[Z_{0}(x)=\frac{1-|x|^{2+\beta-\alpha}}{(1+|x|^{2+\beta-\alpha})^{\frac{N-2+ \alpha}{2+\beta-\alpha}}},\quad Z_{k,i}(x)=\frac{|x|^{\frac{2+\beta-\alpha}{2} }\Psi_{k,i}(x)}{(1+|x|^{2+\beta-\alpha})^{\frac{N-2+\alpha}{2+\beta-\alpha}}},\]
where \(\{\Psi_{k,i}\}\), \(i=1,\ldots,\frac{(N+2k-2)(N+k-3)!}{(N-2)!k!}\), form a basis of \(\mathbb{Y}_{k}(\mathbb{R}^{N})\), the space of all homogeneous harmonic polynomials of degree \(k\) in \(\mathbb{R}^{N}\). Otherwise, the space of solutions of (3.2) has dimension one and is spanned by \(Z_{0}\); note that \(Z_{0}\thicksim\frac{\partial U_{\lambda}}{\partial\lambda}|_{\lambda=1}\), and in this case we say that \(U\) is non-degenerate. The proof of Theorem 1.3 is now complete.
Now, based on the proof of Theorem 1.3, we are ready to prove our main result. In order to shorten formulas, for each \(u\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\) we denote
\[\|u\|:=\left(\int_{\mathbb{R}^{N}}|x|^{-\beta}|\mathrm{div}(|x|^{\alpha}\nabla u )|^{2}\mathrm{d}x\right)^{\frac{1}{2}},\quad\|u\|_{*}:=\left(\int_{\mathbb{R} ^{N}}|x|^{\beta}|u|^{p^{*}_{\alpha,\beta}}\mathrm{d}x\right)^{\frac{1}{p^{*}_ {\alpha,\beta}}}. \tag{3.15}\]
### Proof of Theorem 1.4
We follow the arguments in the proof of [18, Corollary 1.2]. We define the functional \(\mathcal{I}\) on \(\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\) by the right hand side of (1.8), i.e.,
\[\mathcal{I}(u):=\frac{\|u\|^{2}}{\|u\|_{*}^{2}},\quad u\in\mathcal{D}^{2,2}_{ \alpha,\beta}(\mathbb{R}^{N})\setminus\{0\}. \tag{3.16}\]
Define also the energy functional of equation (1.10) as
\[\mathcal{J}(u):=\frac{1}{2}\|u\|^{2}-\frac{1}{p^{*}_{\alpha,\beta}}\|u\|_{*}^{ p^{*}_{\alpha,\beta}},\quad u\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N}).\]
From Corollary 1.2 we know that \(U\) given in (1.11) is a positive critical point of \(\mathcal{J}\); thus \(\langle\mathcal{J}^{\prime}(U),\varphi\rangle=0\) for all \(\varphi\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\) and \(\|U\|^{2}=\|U\|_{*}^{p^{*}_{\alpha,\beta}}\). Then, for all \(\varphi_{1},\varphi_{2}\in\mathcal{D}^{2,2}_{\alpha,\beta}(\mathbb{R}^{N})\), it holds that
\[\langle\mathcal{I}^{\prime}(U),\varphi_{1}\rangle= \frac{2}{\|U\|_{*}^{2}}\langle\mathcal{J}^{\prime}(U),\varphi_{1} \rangle=0,\]
and
\[\langle\mathcal{I}^{\prime\prime}(U)\varphi_{1},\varphi_{2}\rangle= \frac{2}{\|U\|_{*}^{2}}\langle J^{\prime\prime}(U)\varphi_{1}, \varphi_{2}\rangle\] \[-\frac{4}{\|U\|_{*}^{p^{*}_{\alpha,\beta}+2}}\int_{\mathbb{R}^{N} }|x|^{-\beta}\mathrm{div}(|x|^{\alpha}\nabla U)\mathrm{div}(|x|^{\alpha}\nabla \varphi_{1})\mathrm{d}x\int_{\mathbb{R}^{N}}|x|^{\beta}U^{p^{*}_{\alpha,\beta} -1}\varphi_{2}\mathrm{d}x\] \[-\frac{4}{\|U\|_{*}^{p^{*}_{\alpha,\beta}+2}}\int_{\mathbb{R}^{N} }|x|^{-\beta}\mathrm{div}(|x|^{\alpha}\nabla U)\mathrm{div}(|x|^{\alpha} \nabla\varphi_{2})\mathrm{d}x\int_{\mathbb{R}^{N}}|x|^{\beta}U^{p^{*}_{\alpha, \beta}-1}\varphi_{1}\mathrm{d}x\] \[+\frac{2(p^{*}_{\alpha,\beta}-2)}{\|U\|_{*}^{p^{*}_{\alpha,\beta} +2}}\int_{\mathbb{R}^{N}}|x|^{\beta}U^{p^{*}_{\alpha,\beta}-1}\varphi_{1} \mathrm{d}x\int_{\mathbb{R}^{N}}|x|^{\beta}U^{p^{*}_{\alpha,\beta}-1}\varphi_{2 }\mathrm{d}x.\]
From the proof of Theorem 1.3, we know that \(X_{1}(s)=\frac{s}{(1+|s|^{2})^{\frac{M-2}{2}}}\) with \(M=\frac{2(N+\beta)}{2+\beta-\alpha}>4\) is a solution of
\[\begin{cases}X_{1}^{\prime\prime}+\frac{M-1}{s}X_{1}^{\prime}-\frac{M-1}{s^{2} }X_{1}+Y=0\quad\text{in}\quad s\in(0,\infty),\\ Y^{\prime\prime}+\frac{M-1}{s}Y^{\prime}-\frac{M-1}{s^{2}}Y+\frac{(M+4)(M-2)M(M +2)}{(1+s^{2})^{4}}X_{1}=0\quad\text{in}\quad s\in(0,\infty).\end{cases}\]
Therefore, when \(\alpha>0\) and \(\beta_{\mathrm{FS}}(\alpha)<\beta<\frac{N}{N-2}\alpha\), we deduce that
\[\langle\mathcal{J}^{\prime\prime}(U)Z_{1,i},Z_{1,i}\rangle= \frac{\omega_{N-1}}{N}\left(\frac{2+\beta-\alpha}{2}\right)^{3}[ \mu-(M-1)]\bigg{\{}(M-3)\int_{0}^{\infty}(X^{\prime}(s))^{2}s^{M-4}\mathrm{d}s\] \[\quad+[(M-3)(M-1)+\mu]\int_{0}^{\infty}(X(s))^{2}s^{M-5}\mathrm{ d}s\bigg{\}}<0,\]
where \(\mu=\left(\frac{2}{2+\beta-\alpha}\right)^{2}(N-1)<M-1\), \(Z_{1,i}(x)=X_{1}(|x|)\frac{x_{i}}{|x|}\) for some \(i\in\{1,2,\ldots,N\}\). Then we deduce that
\[\langle\mathcal{I}^{\prime\prime}(U)Z_{1,i},Z_{1,i}\rangle<0,\]
due to \(\int_{\mathbb{R}^{N}}|x|^{\beta}U^{p_{\alpha,\beta}^{*}-1}Z_{1,i}\mathrm{d}x=0\). Consequently, \(\mathcal{S}\) is strictly smaller than \(\mathcal{I}(U)=\mathcal{S}_{r}\), and hence no minimizer of \(\mathcal{S}\) is radial. The proof of Theorem 1.4 is now complete.
**Acknowledgements**
The research has been supported by National Natural Science Foundation of China (No. 11971392).
|
2302.05324 | SOCRATES: Text-based Human Search and Approach using a Robot Dog | In this paper, we propose a SOCratic model for Robots Approaching humans
based on TExt System (SOCRATES) focusing on the human search and approach based
on free-form textual description; the robot first searches for the target user,
then the robot proceeds to approach in a human-friendly manner. In particular,
textual descriptions are composed of appearance (e.g., wearing white shirts
with black hair) and location clues (e.g., is a student who works with robots).
We initially present a Human Search Socratic Model that connects large
pre-trained models in the language domain to solve the downstream task, which
is searching for the target person based on textual descriptions. Then, we
propose a hybrid learning-based framework for generating target-cordial robotic
motion to approach a person, consisting of a learning-from-demonstration module
and a knowledge distillation module. We validate the proposed searching module
via simulation using a virtual mobile robot as well as through real-world
experiments involving participants and the Boston Dynamics Spot robot.
Furthermore, we analyze the properties of the proposed approaching framework
with human participants based on the Robotic Social Attributes Scale (RoSAS) | Jeongeun Park, Jefferson Silveria, Matthew Pan, Sungjoon Choi | 2023-02-10T15:35:24Z | http://arxiv.org/abs/2302.05324v2 | # Towards Text-based Human Search and Approach with an Intelligent Robot Dog
###### Abstract
In this paper, we propose a SOCratic model for Robots Approaching humans based on TExt System (SOCRATES) focusing on the _human search and approach_ based on free-form textual description; the robot first searches for the target user, then the robot proceeds to approach in a human-friendly manner. In particular, textual descriptions are composed of appearance (e.g., _wearing white shirts with black hair_) and location clues (e.g., _is a student who works with robots_). We initially present a Human Search Socratic Model that connects large pre-trained models in the language domain to solve the downstream task, which is searching for the target person based on textual descriptions. Then, we propose a hybrid learning-based framework for generating target-cordial robotic motion to approach a person, consisting of a learning-from-demonstration module and a knowledge distillation module. We validate the proposed searching module via simulation using a virtual mobile robot as well as through real-world experiments involving participants and the Boston Dynamics Spot robot. Furthermore, we analyze the properties of the proposed approaching framework with human participants based on the Robotic Social Attributes Scale (RoSAS) [6].
## I Introduction
In recent years, the coexistence of humans and robots has become increasingly common in daily life. Assistive robots are widely used in everyday commercial tasks within human environments such as delivering luggage at the airport or transporting packages such as food within an office. To ensure efficient human-robot interactions on such tasks, the ability for robots to interact with humans based on natural language and intuitive motions appears to be an important feature. In this paper, we propose a SOCratic model for Robots Approaching humans based on TExt System (SOCRATES) which focuses on the problem of _human search and approach_. In this system, the target person is described via a free-form textual description; the robot first searches for a target person, then proceeds to approach in a human-friendly manner.
Recent success on large pre-trained models, e.g., GPT [5], CLIP [26], has opened a new paradigm for robots to understand human languages, such as instruction following [2, 13], vision language-based navigation [34, 28], and zero-shot object search [9, 16, 21, 24]. In this paper, we aim to build a robotic system that searches and approaches a target person based on a textual description of their appearance (e.g., _person with black hair wearing a white shirt_) and location clues (e.g., _is a student who works with robots_). Our task is similar to zero-shot object search [9, 16, 21, 24], which aims to search for objects based on descriptions provided in free-form text, but differs in that our goal is to search for target _people_ rather than objects. Because our objective is to enable robots to search for people, we must also focus on correctly identifying them based on free-form user-entered text, as well as having a robot competently and appropriately interact with targets.
Building on the success of large pre-trained models, Zeng et al. [33] proposed a modular framework called Socratic Model (SM). This framework leverages pre-trained models, which allows for an efficient and effective task-specific performance without fine-tuning on downstream task data. The SM framework formulates downstream tasks in a shared language domain between the pre-trained models and tackles the problem by building bridges among models from different modalities to facilitate communication. This approach has the advantage of leveraging pre-trained models trained on a large external database, which can avoid the need for task-specific data to train a single model. Inspired by the success of the SM framework, we propose to apply this approach to the task of _human search and approach_. Our hypothesis is that the capabilities of large pre-trained models can be effectively leveraged to solve this task.
To this end, we propose a SOCratic model for Robots Approaching humans based on TExt System (SOCRATES), tackling the problem of _human search and approach_ where the target is given as a textual description. SOCRATES consists of two serial modules: search and approach. The search module of SOCRATES, also referred to as a Human Search Socratic Model, connects large pre-trained models on a shared language domain to solve search downstream tasks without fine-tuning. In particular, it is composed of a Large Language Model (LLM), Vision and Language Model (VLM), and a waypoint generator. The LLM infers the searching prior based on the location clue of the target person, the VLM localizes the human with an appearance description, and the waypoint generator estimates the next waypoints for the robot to take based on the prior knowledge by LLM. The approach module is used after the search phase: the robot has to approach
the target user in a comfortable and cordial manner, without being deemed threatening. To perform these functions, we propose a hybrid learning-based framework to estimate the cost function of the approach motion, composed of a learning from demonstration (LfD) step and knowledge distillation step from the large-language model.
The main contributions of the paper are as follows:
1. We propose SOCRATES, which efficiently searches and approaches a target person where its description is given as _free-form text_.
2. We present a Human Search Socratic Model, combining pre-trained models in the language domain to search for people based on textual descriptions.
3. We propose a hybrid learning-based framework to generate a target-friendly approach motion composed of the learning from demonstrations (LfD) module and the knowledge distillation module from large language models.
## II Related Work
In this section, we provide existing work related to applying the SM to searching for objects and humans and approaching a person in a socially acceptable way. Socratic Model (SM) [33] is a framework that enables composing pre-trained models to "talk to each other" so that the knowledge learned from a set of surrogate tasks can be applied to a new downstream target task. Zeng et al. [33] proposed various applications of SM, including robotic perception and planning, open-ended reasoning on egocentric video, and multimodal assistive dialogue. SM has been widely used in robotics to solve language-based problems such as task and motion planning or visual navigation. Huang et al. [12] proposed Inner Monologue for instruction-based task and motion planning, chaining Large-Language-Model (LLM) together with Vision-Language Models (VLM), robotic skills, affordance models, and human feedback in a shared language prompt. Shah et al. [28] proposed LM-Nav for vision-language navigation tasks, which executes language instructions in a self-supervised manner, combining three different pre-trained models (i.e., LLM, VLM, and Vision-Language-Navigation).
There have been recent attempts to search for an object described through free-form text, referred to as zero-shot object goal navigation [9, 16, 21, 24]. Gadre et al. [9] has proposed a framework called 'CLIP on the Wheels', which leverages Gradient-weighted Class Activation Mapping (Grad-CAM) [27] of CLIP [26] for localizing novel target objects and frontier based explorations [31] for exploration. Majumdar et al. [21] proposed the method for this task, projecting navigation goals into a common semantic embedding space using CLIP [26] to guide the agent to understand the image goal in language form. While some work has trained whole uni-model networks for text-based object search in simulated environments [21, 16], their lack of application in real-world settings serves as a limitation. Our task is similar to zero-shot object goal navigation as it is also based on text inputs; however, it differs in the fact that it requires adapting to the changing characteristics of the target person, as well as processing longer descriptions.
Approaching humans in a user-friendly manner is another key component in _human search and approach_. Several papers [1, 17, 7] have covered the socially aware navigation problem with learning-based frameworks, as it is difficult to model complex social interactions explicitly. Ahn et al. [1] presented a method to comfortably approach a user by considering each user's personal comfort field, measured by existing theories in anthropology, and learning personalized comfort fields from users. Konar et al. [17] proposed a sampling-based approximation to enable model-free inverse reinforcement learning for socially compliant navigation. In this paper, we adopt a method based on a non-parametric inverse-reinforcement learning framework; this learning-based method has its strength in robustly learning the reward function from a small amount of noisy expert data.
## III Human Search Socratic Model
In this paper, we propose SOCRATES, which tackles the problem of text-based _human search and approach_. The problem is separated into search and approach phases. Here, we present the Human Search Socratic Model, which connects pre-trained models from different modalities to shared language domains as a method to solve the downstream task of searching for a target person based on textual descriptions.
The Human Search Socratic Model is composed of three key components: a large-language model, a vision-language model, and a waypoint generator. The large-language model [5] takes the description of a target person and estimates the search prior, whereas the vision-language model [30] processes images and descriptions to localize the target person. Lastly, the waypoint generator commands the robotic actions with the search prior and the detection results. An overview of the Socratic-based searching model is shown in Figure 1.
### _Problem Formulation_
In this framework, we tackle the problem of robotic human search based on textual descriptions \(t\). Such descriptions are composed of appearance \(t_{1}\) and location clues \(t_{2}\) (e.g., _wearing a white shirt with black hair_ and _is a student who works with robots_). We assume that an annotated map and the location of the robot are accessible. The annotated map consists of area categories \(\{a_{i}\}_{i=1}^{K}\) and the label of each area \(a_{i}\). An example of the annotated map and the GUI used for textual description input of target people is illustrated in Figure 2. The output of the Human Search Socratic Model is a waypoint \(\mathbf{p}=(x,y,z,\theta)\), which directs the robot to a search location.
### _General Procedure_
Using large-language models, we estimate the search prior by computing the likelihood of the target person to occur in label \(a_{i}\) of the annotated map shown in Figure 2-(a). Based on this search prior, the waypoint generator calculates the cost of each label and generates the waypoint whose label has the lowest cost. Then, the waypoint generator conducts
a local search of the area belonging to the chosen label using the detection results from the vision-language model. As the ambiguity between language and vision is inevitable, there is a possibility of obtaining multiple answers (potential target detections). As such, the system asks for user feedback on whether a detected person fitting the textual description provided is the target person. If not, the system continues the local search. When the waypoint generator finishes the local search, the system recalculates the cost of the map's label, moves to the next label, and starts another local search. The following subsections explain the details of each process.
### _Large Language Model_
We leverage a large language model, i.e., GPT3 [5], in order to provide a search clue to the robot. We gain commonsense knowledge of where the target person may be located on the annotated map. This knowledge guides the robot to search for the areas that have a high likelihood of the target person's placing. The prompt given to GPT3 is as follows:
The lab is composed of [floorplan categories (\(\{a_{i},\cdots,a_{K}\}\))]. X is a [location clue (\(t_{2}\))] but not always. Where can I find X in the lab?
We denote a set of \(M\) generated sentences \(\{s_{i}\}_{i=1}^{M}\) as follows:
\[\{s_{i}\}_{i=1}^{M}=f_{1}(t_{2},a_{1},\cdots,a_{K}) \tag{1}\]
where \(f_{1}\) is GPT3.
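For illustration, a minimal sketch of how this prompt could be assembled and sampled; the `query_gpt3` callable is a hypothetical placeholder for an actual GPT3 completion call, and the number of samples \(M\) is an assumption:

```python
from typing import Callable, List

def build_search_prompt(location_clue: str, area_labels: List[str]) -> str:
    # Formats the prompt template quoted above from the location clue t2
    # and the annotated-map categories {a_1, ..., a_K}.
    categories = ", ".join(area_labels)
    return (f"The lab is composed of {categories}. "
            f"X is a {location_clue} but not always. "
            f"Where can I find X in the lab?")

def search_prior_sentences(location_clue: str, area_labels: List[str],
                           query_gpt3: Callable[[str], str],
                           num_samples: int = 5) -> List[str]:
    # Returns the set {s_i}_{i=1..M} of Eq. (1), one generated sentence per sample.
    prompt = build_search_prompt(location_clue, area_labels)
    return [query_gpt3(prompt) for _ in range(num_samples)]
```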
### _Vision-Language Model_
For the successful localization of the target person based on the description of appearance \(t_{1}\), we utilize the vision-language model. In particular, we leverage PNP-VQA: a specific vision-question answering model proposed in [30]. The input of the vision-language model is an image \(I\) and a question, where the question is prompted as follows:
Is a person [appearance (\(t_{1}\))]?
As the vision-question-answering model provides an answer to a question, we process the answer (yes or no) as a binary label. Additionally, we leverage the Grad-CAM [27] of the VLM to localize the target person. Following the same approach in PNP-VQA [30], we compute the relevance between the text and image features to obtain an activation map. Based on un-normalized Grad-CAM on the text and image similarity, we obtain the bounding box \(\mathbf{b}=\{x_{1},y_{1},x_{2},y_{2}\}\) of the image on the area where the Grad-CAM score is higher than the threshold.
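As an illustration of the last step, the thresholded activation map can be reduced to a bounding box as in the short sketch below; the thresholding convention and axis ordering are assumptions for illustration rather than the exact PNP-VQA implementation.

```python
import numpy as np

def bbox_from_activation(act_map: np.ndarray, threshold: float):
    """Bounding box (x1, y1, x2, y2) over pixels whose Grad-CAM score exceeds
    the threshold; returns None if nothing is activated."""
    ys, xs = np.where(act_map > threshold)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```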
### _Waypoint Generator_
The waypoint generator operates in two phases; the global search phase and the local search phase. The first phase determines which label of the annotated map to visit. The
Fig. 1: Overview of the proposed Human Search Socratic Model, red: input, blue: output
Fig. 2: Input of the system.
second module generates waypoints to search for the target person in the specific annotated area.
_Global Search:_ In the global search phase, the module estimates the likelihood of occurrence of the target person in a specific label based on the prior knowledge in Section III-C. It then generates the waypoint to reach an area belonging to a specific label. The first step is to measure the occurrence score of label \(a_{i}\) in the annotated map. We calculate the score with the maximum word similarity [22] between each word from the generated sentences in equation 1 and the label name. The occurrence score becomes the average of the scores calculated over the generated sentences \(\{s_{i}\}_{i=1}^{M}\).
\[p(a_{i}|t_{2})=\frac{1}{M}\sum_{k=1}^{M}\max_{l}\ell(w(a_{i}),w(s_{k}^{l})) \tag{2}\]
where \(s_{k}^{l}\) is a \(l\)-th word in sentence \(s_{k}\), \(\ell\) is a cosine similarity and \(w\) is a word to vector embedding [22]. Based on the occurrence score, we compute the cost of visiting label \(a_{i}\) for each label in the annotated map. The cost function is as follows:
\[c_{l}(a_{i})=||(\mathbf{p}_{r}-\mathbf{p}_{a_{i}})||_{2}+w_{e}(1-p(a_{i}|t_{2})) \tag{3}\]
where \(\mathbf{p}_{r}\) is a current robot pose, \(\mathbf{p}_{a_{i}}\) is the closest reachable position in the annotated area where its label is \(a_{i}\), and \(w_{e}\) is a weight of the occurrence score. Then the robot navigates to \(\mathbf{p}_{a_{i}}\) that has the lowest cost among unvisited labels.
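A minimal sketch of the occurrence score of equation 2 and the label cost of equation 3 is given below; the `embed` function stands in for the word-to-vector embedding of [22], and the weight \(w_{e}\) is a placeholder value.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def occurrence_score(label: str, sentences, embed) -> float:
    """Eq. (2): mean over sentences of the max word-level cosine similarity."""
    label_vec = embed(label)
    scores = []
    for s in sentences:
        scores.append(max(cosine(label_vec, embed(w)) for w in s.split()))
    return float(np.mean(scores))

def label_cost(robot_pos, area_pos, label, sentences, embed, w_e=5.0):
    """Eq. (3): travel distance plus weighted (1 - occurrence score)."""
    dist = np.linalg.norm(np.asarray(robot_pos) - np.asarray(area_pos))
    return dist + w_e * (1.0 - occurrence_score(label, sentences, embed))
```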
_Local Search:_ In the local search phase, the robot aims to search for a target person in the area annotated with the chosen label. After navigating to the area chosen as described in the global search phase, the robot conducts the local search. As detection based on a language description is less reliable and also dependent on the specific viewpoint, we adopt an _indirect search_ approach. The robot searches for general humans first, obtains better viewpoints, and then detects the target based on the language description. We leverage the YOLOv5 [15] model pre-trained on the MS-COCO [19] dataset to detect general humans. With the indirect search approach, we can prevent missed or false detections due to poor viewpoints and reduce the amount of feedback asked of the user.
The steps of the local indirect search are as follows. The robot first detects a general person; if any person is detected, the robot approaches or moves back to keep a 5 m distance and centers the human to obtain a proper image. Then the vision-language model detects the person based on the description. If a potential target is detected, the system asks for user feedback. If no general person or text-based person is detected, the robot conducts frontier-based exploration [31] based on the map. We also use the history of the visited waypoints \(\{\mathbf{p}_{i}\}_{i=1}^{n}\) to reject waypoints that have already been visited. The robot visits a candidate waypoint only if the minimum \(l_{2}\)-norm between the past waypoints and the candidate waypoint is larger than a certain threshold. The details are in the supplementary materials.
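A compact sketch of one iteration of this indirect search loop is given below; the robot and detector interfaces (`detect_person`, `detect_target`, `explore_frontier`, and the `robot` methods) are hypothetical placeholders used only to illustrate the control flow.

```python
import numpy as np

def local_indirect_search(robot, detect_person, detect_target, explore_frontier,
                          visited, revisit_threshold=1.0):
    """One illustrative iteration of the indirect local search loop."""
    person = detect_person(robot.image())          # generic 'person' detector (YOLOv5)
    if person is not None:
        robot.keep_distance(person, meters=5.0)    # approach/back off and center the human
        if detect_target(robot.image()):           # VLM check against the description
            return "ask_user_feedback"
        return "continue"
    # No person in view: pick an unexplored frontier waypoint that is far enough
    # from previously visited waypoints.
    for waypoint in explore_frontier(robot.map()):
        dists = [np.linalg.norm(np.asarray(waypoint) - np.asarray(p)) for p in visited]
        if not dists or min(dists) > revisit_threshold:
            visited.append(waypoint)
            robot.go_to(waypoint)
            return "continue"
    return "area_exhausted"
```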
## IV Hybrid Learning-based Human Approach
In this section, we propose a hybrid learning-based framework for generating a cordial approach motion to a target person. Target-friendly approach motions are an essential part of the _human search and approach_ task since the robot has to move to a reachable position within the range of the target person without appearing threatening or distracting him/her. The proposed hybrid learning-based framework consists of two different modules: learning from demonstration (LfD) and knowledge distillation. Both modules estimate the reward function of the state space for a confiding approach trajectory and are combined to a final reward function. Next, the path planner conducts cost-aware planning based on the estimated reward function and sends actions (velocity commands) to the robot. The overview of the proposed hybrid learning-based framework is shown in Figure 3.
### _Problem Formulation_
The objective of the human approach is to generate a target-friendly approach motion. The inputs are estimated human pose \(\mathbf{p}_{h}\), robot pose \(\mathbf{p}_{r}\), captions \(\{\mathbf{c}_{i}\}_{i=1}^{N}\), and gaze parameter \(g\) which identifies whether the person is looking at the robot or not. The proposed framework first estimates the reward function \(R\) of the state space, in which the frame of the reward function is defined as the robot pose frame with respect to the human pose frame. We define the state space as a combination of the two-dimensional robot poses relative to the human pose, gaze parameter, and the robot's speed; \(\mathbf{x}=\{x_{r}^{h},y_{r}^{h},\theta_{r}^{h},g,v\}\)1. With the reward function \(R\), the path planner plans the trajectory and controls the robot by velocity \((v_{x},v_{y},v_{\theta})\) commands. We assume the local egocentric map is accessible for cost-aware planning without collision.
Footnote 1: We arranged as \(x_{r}^{h}\in\{-6,-5.5,\cdots,6\}\), \(y_{r}^{h}\in\{-3,-2.5,\cdots,3\}\)\(\theta_{r}^{h}\in\{-\pi,-3/4\pi,\cdots,3/4\pi\}\)\(v\in\{0.15,0.4,0.65\}\) with a total of \(14976\) points and \(7488\) inducing points.
### _Input Processing_
The inputs of the human approach system are the pose of the robot relative to the pose of the person, gaze parameter, and image captions. As the detection result in section III-D can only estimate its position with depth information, we utilize a pre-trained human face mesh estimation network proposed in [20] to estimate the orientation of a target person. The mesh detection is conducted on a cropped image of the target's head based on the detection result. If the absolute orientation of the human face is less than 40 degrees, the gaze parameter becomes \(1\); otherwise, \(0\). Furthermore, to process the image in the format of the input of the language model, i.e., text, we estimate the caption by the image-captioning branch (BLIP [18]) of PNP-VQA [30].
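A minimal sketch of the gaze parameter computation described above (the 40-degree threshold follows the text; the input is the estimated face orientation from the mesh network):

```python
def gaze_parameter(face_yaw_deg: float, threshold_deg: float = 40.0) -> int:
    """Binary gaze flag: 1 if the person is (roughly) facing the robot."""
    return 1 if abs(face_yaw_deg) < threshold_deg else 0
```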
### _Learning from Demonstration_
As it is hard to explicitly model complex social behaviour for the robot to create appropriate and non-threatening approaching motions, we adopt a learning from demonstration
(LfD) paradigm. In particular, we leverage the Kernel Density Matching Reward Learning [8] (KDMRL) framework to estimate the reward function of the approaching motion. It optimizes the reward function that matches the state-action distribution of expert demonstrations.
In the density matching reward estimation framework, we assume that the demonstrations are sampled from stationary state distribution \(\mu(\mathbf{x})\). Following the objective of the inverse reinforcement learning formulation, the objective of density matching reward learning (DMRL) is to maximize the dot product of the density of the state space \(\hat{\mu}\) and the reward \(R\) subject to \(||R||_{2}\leq 1\). Then, the problem can be relaxed by solving the following optimization problem:
\[\underset{R}{\text{maximize }}\tilde{V}(R)=\sum_{\forall\mathbf{x}\in U}\hat{ \mu}(\mathbf{x})\tilde{R}(\mathbf{x})-\frac{\lambda}{2}||\tilde{R}||_{\mathcal{ H}}^{2} \tag{4}\]
where \(\tilde{R}\) is the optimization variable, \(\mathbf{x}\) is a state, \(\lambda\) is a smoothness parameter, \(U=\{\mathbf{x}_{i}^{U}\}_{i=1}^{N_{U}}\) is a set of \(N_{U}\) inducing points, and \(||\tilde{R}||_{\mathcal{H}}^{2}\) is the squared Hilbert norm. Then the reward function with inducing points is as follows:
\[\tilde{R}(\mathbf{x})=\sum_{i=1}^{N_{U}}\alpha_{i}k(\mathbf{x},\mathbf{x}_{i}^ {U}) \tag{5}\]
where \(\mathbf{x}\in\mathcal{X}\), \(\mathcal{X}\) is the state space, \(\mathcal{U}\subset\mathcal{X}\) is a set of predefined inducing points, \(\alpha\in\mathbb{R}^{N_{U}}\) is a parameter determining the shape of the reward function, and \(k(\cdot,\cdot)\) is a positive semi-definite kernel function. Kernel density estimation is used to estimate the probability density function of \(\mu\),
\[\hat{\mu}(\mathbf{x})=\frac{1}{Z}\sum_{k=1}^{N_{D}}\cos(\frac{\pi}{2}(1-\gamma_ {k}))k_{\mu}(\mathbf{x},\mathbf{x}_{k}^{D}) \tag{6}\]
where \(N_{D}\) is the number of training samples and \(\mathbf{x}_{k}^{D}\) is the \(k\)-th training data, \(\gamma_{k}\) is a leverage value, and \(k_{\mu}(\cdot,\mathbf{x}^{D})\) is a kernel function whose integral is one. The leverage value \(\gamma_{k}=\delta^{T-t}\) indicates the weight of \(k\)-th data with \(\delta\in[0,1]\), \(T\) and \(t\) is the length and time-step of the trajectory of the data and \(Z\) is a normalization constant.
With equation 6, 5, and norm regularization \(\frac{\beta}{2}||\alpha||_{2}^{2}\), we can formulate equation 4 as matrix form as follows
\[\max_{a}\tilde{V}=\frac{1}{Z}\alpha^{T}K_{U}K_{D}\mathbf{g}-\frac{\lambda}{2} \alpha^{T}K_{U}\alpha-\frac{\beta}{2}\alpha^{T}\alpha \tag{7}\]
where \([K_{U}]_{ij}=k(\mathbf{x}_{i}^{U},\mathbf{x}_{j}^{U})\), \([K_{D}]_{ij}=k_{\mu}(\mathbf{x}_{i}^{U},\mathbf{x}_{j}^{D})\), and \(\mathbf{g}=\{\cos(\frac{\pi}{2}(1-\gamma_{k}))\}_{k=1}^{N_{D}}\) is a vector of leverage values. The analytic solution of equation 7 with respect to \(\alpha\) is as follows:
\[\hat{\alpha}=\frac{1}{Z}(\beta K_{U}+\lambda I)^{-1}K_{U}K_{D}\mathbf{g} \tag{8}\]
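A minimal numerical sketch of this closed-form solution is shown below; the RBF kernel, the hyperparameter values, and the normalization constant \(Z\) are illustrative assumptions rather than the tuned settings used in the experiments.

```python
import numpy as np

def rbf_kernel(A: np.ndarray, B: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    # Gram matrix of squared-exponential kernel values between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kdmrl_alpha(U, D, traj_lengths, timesteps, lam=1e-2, beta=1e-2, delta=0.9):
    """Closed-form solution of Eq. (8): alpha = (1/Z)(beta*K_U + lam*I)^-1 K_U K_D g.

    U: inducing-point states (N_U, dim); D: demonstration states (N_D, dim);
    traj_lengths, timesteps: per-sample T and t used in the leverage values.
    """
    K_U = rbf_kernel(U, U)                       # inducing-point Gram matrix
    K_D = rbf_kernel(U, D)                       # inducing points vs. demonstrations
    # Leverage-weighted vector g with gamma_k = delta^(T - t), as in Eq. (6).
    gamma_k = delta ** (np.asarray(traj_lengths) - np.asarray(timesteps))
    g = np.cos(0.5 * np.pi * (1.0 - gamma_k))
    Z = g.sum()                                  # illustrative normalization choice
    N_U = U.shape[0]
    return np.linalg.solve(beta * K_U + lam * np.eye(N_U), K_U @ (K_D @ g)) / Z
```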
### _Knowledge Distillation_
Furthermore, we distill the knowledge from the large language model [5] to generate the approaching motion of the robot. We extract commonsense knowledge from GPT3 [5] to estimate the reward function. With gaze parameter \(g\) and captions \(\{\mathbf{c}_{i}\}_{i=1}^{N}\), we construct the prompt for GPT3 as follows:
```
[caption (c\({}_{1}\))] and the robot is [(\(g=1\)) / not (\(g=0\))] looking at a person. What is the trajectory for the robot to gently approach a person?
```
We generate sentences with GPT3, \(\{s_{i}^{t}\}_{i=1}^{n\cdot N}\), \(n\) times for each of the \(N\) captions. Then, we extract the keywords2 that are relevant to the trajectory, i.e., words related to position, velocity, and the negative sign. The keyword extraction function is denoted as \(f_{k}(s_{i}^{t})\) and the extracted keywords as \(\{w_{i}^{l}\}_{l=1}^{L}=f_{k}(s_{i}^{t})\).
Footnote 2: straight, 45, side, behind, curve, front, curved for the position, slow, slowly for velocity, and not for negative sign.
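For illustration, a minimal sketch of what the keyword extraction \(f_{k}\) over this keyword list could look like; the tokenization and the exact keyword handling are assumptions, not the authors' implementation.

```python
TRAJECTORY_KEYWORDS = {"straight", "45", "side", "behind", "curve", "front",
                       "curved", "slow", "slowly", "not"}

def extract_trajectory_keywords(sentence: str):
    """Illustrative f_k: keep only trajectory-relevant words, in order of appearance."""
    tokens = sentence.lower().replace(",", " ").replace(".", " ").split()
    return [t for t in tokens if t in TRAJECTORY_KEYWORDS]
```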
To generate a trajectory based on a set of keywords, we define a word-to-trajectory function \(f_{d}(w)\) that generates a partial path based on a word. The details of the function are explained in the supplementary materials. Based on this function \(f_{d}(w)\), we generate an estimated trajectory by iterating over the extracted keywords and stacking the partial paths from the words. Next, we transfer the generated trajectories to the reward function; the procedure of estimating the reward function is shown in Algorithm 1. Following reward estimation, we smooth the
Fig. 3: Overview of the proposed hybrid learning-based approach.
reward function with the radial basis kernel function \(k_{r}(\cdot,\cdot)\). The knowledge-distillation-based reward function becomes \(R=k_{r}(\mathbf{x},\mathbf{x})\cdot R\).
```
\(R=[0]_{s}\) where \(s\) is the dimension of the state space
for \(i=0\) to \(n\cdot N\) do
    \(\{w_{i}^{l}\}_{l=1}^{L}=f_{k}(s_{i}^{t})\), \(x=0\), \(y=0\)
    for \(l=L\) to \(0\) do
        \(\{d\mathbf{x}^{j},d\mathbf{y}^{j}\}_{j=1}^{N_{L}}=f_{d}(w_{i}^{l})\) // extract keyword
        \(v=0.15\) if 'slow' or 'slowly' \(\in\{w_{i}^{l}\}_{l=1}^{L}\), otherwise \(v=0.6\) // set velocity
        for \(j=0\) to \(N_{L}\) do
            \(\theta=\arctan(d\mathbf{y}^{j}[-1],d\mathbf{x}^{j}[-1])\)
            \(R(x+d\mathbf{x}^{j},y+d\mathbf{y}^{j},[\theta]_{d},\cdot,[v]_{d})+=1\) // update reward function
            \(x\gets x+d\mathbf{x}^{j}[-1]\), \(y\gets y+d\mathbf{y}^{j}[-1]\) // update last position
        end for
    end for
end for
\(R\gets R/(N\cdot n)\)
```
**Algorithm 1** Reward Estimation for Knowledge Distillation Based Human Approach
### _Path Planner_
In the path planning step, the main goal is to generate paths based on the reward function for any initial configuration of the robot. To achieve this, we apply the FMT* path planning algorithm [14] made available in the Open Motion Planning Library (OMPL) [29] in an environment where the cost of each state is a function of the rewards obtained through the hybrid learning-based reward estimation.
With reward functions from the learning-based framework \(R_{I}\) and knowledge distillation-based reward function \(R_{L}\), we estimate the combined reward function by weighted summation of the two parts, \(R_{T}=w_{r}R_{I}+(1-w_{r})R_{L}\) such that \(R_{T}\in[0,1]\). In this system, we set \(w_{r}=0.2\). The resulting function is converted into a cost \(1-R_{T}\) that is used by FMT* to penalize undesirable states. We set the cost function to take into account the reward \(R_{T}\) and the path length of the arriving state. The cost function is as follows:
\[c_{m}(\mathbf{\bar{x}}_{1},\mathbf{\bar{x}}_{2})=\zeta(1-R_{T}(\mathbf{\bar{x }}_{2}))\text{dist}(\mathbf{\bar{x}}_{1},\mathbf{\bar{x}}_{2}), \tag{9}\]
\[\text{dist}(\mathbf{\bar{x}}_{1},\mathbf{\bar{x}}_{2})=w_{p}||\mathbf{q}_{2}- \mathbf{q}_{1}||+w_{o}|\theta_{2}-\theta_{1}|, \tag{10}\]
where \(\mathbf{\bar{x}}_{i}=\{\mathbf{q},\theta\}\) is the \(i\)-th configuration of the robot such that \(\mathbf{q}\in\mathbb{R}^{2}\) and \(\theta\in SO(2)\). The weights \(w_{p}\) and \(w_{o}\) are used in the distance function to give more or less importance to each component of the state. \(\zeta\) is a parameter that is used to fine-tune whether the solution converges towards the shortest path or to one that closely matches the reward function according to its magnitude.
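A minimal sketch of the combined reward and the edge cost of equations (9)-(10) is given below; in the actual system this cost is supplied to FMT* through OMPL's optimization-objective interface, and the weights other than \(w_{r}=0.2\) are placeholder values.

```python
import numpy as np

def combined_reward(R_I, R_L, w_r=0.2):
    """R_T = w_r * R_I + (1 - w_r) * R_L, kept in [0, 1]."""
    def R_T(x):
        return float(np.clip(w_r * R_I(x) + (1.0 - w_r) * R_L(x), 0.0, 1.0))
    return R_T

def motion_cost(x1, x2, reward_fn, w_p=1.0, w_o=0.3, zeta=5.0):
    """Edge cost of Eqs. (9)-(10): reward-weighted pose distance.

    x1, x2 are (x, y, theta) configurations; reward_fn returns R_T in [0, 1];
    w_p, w_o and zeta are illustrative values, not the tuned parameters.
    """
    q1, th1 = np.asarray(x1[:2]), x1[2]
    q2, th2 = np.asarray(x2[:2]), x2[2]
    dist = w_p * np.linalg.norm(q2 - q1) + w_o * abs(th2 - th1)
    return zeta * (1.0 - reward_fn(x2)) * dist
```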
## V Experiments
In this section, we analyze the properties of the proposed method and discuss the experimental results. We validate our search module in the real world with human participants while also conducting an additional experiment on simulated environments. In addition, we conduct human-subject pilot studies on the proposed approaching motion of our system. The details of hyperparameters and the environments are explained in the supplementary materials.
### _Real-world Search Experiments_
In this section, we will discuss the experimental results of the proposed Human Search Socratic Model in the real-world environment. We use a Spot robot mounted with a Luxonis OAK-D-Pro camera [23], as shown in Figure 4-(a). The real-world environment is a robotics lab shown in Figure 4-(b). Five different annotations of the floorplan are used to distinguish locations in the lab. We conducted our real-world experiments using ten different locations, with five different people and two unique initial positions for each person. The performance of the system is evaluated using three metrics: success rate (SR), success rate per path length (SPL), and the number of false detections (# FD). SR and SPL are widely used in the evaluation of visual search tasks, as reported in previous studies such as [4]. The # FD metric is the average number of false detections, equivalent to the
Fig. 4: Robot settings and Environmental settings for Search Evaluation.
number of instances of false feedback from the users. Success is defined as the robot successfully localizing the target person within a 5 m radius and receiving positive feedback from the users. A trial is marked as a failure if the system does not meet the success criteria within a total path length of 30 m and with fewer than five false detections.
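The SPL values reported below presumably follow the standard success-weighted-by-path-length convention used in embodied-search evaluation (e.g., [4]); the sketch below shows that convention and is an assumption rather than the authors' exact implementation.

```python
def success_weighted_path_length(successes, shortest_lengths, actual_lengths):
    """SPL: mean over trials of S_i * l_i / max(p_i, l_i).

    successes: 0/1 flags per trial; shortest_lengths: shortest path to the target;
    actual_lengths: path lengths actually travelled by the robot.
    """
    terms = [s * l / max(p, l)
             for s, l, p in zip(successes, shortest_lengths, actual_lengths)]
    return sum(terms) / len(terms)
```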
We compare the proposed system with three different approaches: baseline, without prior knowledge, and direct search. _Direct search_ is the method that solely utilizes the vision-language model without general 'human' detection. If the target is detected, it approaches within 5 m of the target and requests user feedback; otherwise, the robot explores based on FBE [31]. In addition, the method denoted as _without prior_ uses only the distance term in equation 3, without the prior knowledge from language models. The _baseline_ leverages direct search without prior knowledge, which is similar to Gadre et al. [9] with the Grad-CAM [27] of CLIP [26] replaced by BLIP [18].
The performance of the proposed search module is reported in Table I. The system proposed in this work has the best performance on the SPL metric with a gap of \(0.137\). While the direct search has the highest success rate, the gap of \(0.1\)
Fig. 5: Illustration of the Experimental Results. (a) shows the exemplary demonstrations of the proposed search module. The circles represent waypoints. (b) shows how the trajectories vary between methods. Circles represent waypoints and the circles with dashed lines represent the viewpoint that contained the false detection. (c) illustrates the proposed human approaching motion on gaze condition and how trajectories differ from a different method.
is not significant. In addition, the direct search has more false detections compared to the indirect search, with a gap of \(0.6\). As the proposed method obtains a human-centered and well-scaled image, this property leads to a decrease in the number of false detection and an increase in SPL. Compared to methodologies without prior knowledge for searching, methods that utilize prior knowledge have a significantly higher SPL metric, with an average \(0.23\) gap. As such, we would like to posit that prior knowledge plays an essential role in the efficient search by robots.
Figure 5-(a) and (b) shows the trajectory of the robot. Figure 5-(a) illustrates three different demonstrations of the proposed searching method. The robot estimates the area where the target person is likely to be located first and then performs a local search within the likely area. For example, in Figure 5-(a)-(2), the robot infers, according to Section III-C that the target person (a student that works on VR headsets) is likely to be located in the lower-floor student cubic area or upper-floor open-space where the VR headsets are stored. The robot searches for the target within the lower-floor cubicles first, as it is closer to its current position, and then moves to the upper-floor space when the target is not detected. Figure 5-(b) shows the trajectory with different methods. With prior knowledge, the robot searches the upper-floor open-space first and then moves to the workspace with robots. On the other hand, without prior knowledge, the robot searches in the order of open-space area, sofa, and offices, leading to failure. Although the proposed indirect search does not show a significant gap in path length with direct search, the indirect search asks for less feedback from the user.
### _Simulation Search Experiments_
To show that the proposed method extends to scenes other than the lab, we evaluate the system in two more simulation environments, i.e., a house and an office. We utilized a virtual Pioneer3AT robot mounted with an RGB-D camera and Lidar within the Gazebo simulator. We ran 24 experiments, with two different initial positions for each person, four different people in the house setting and eight in the office setting. We assume all people are static in the simulation environment. The comparison methods and metrics used are the same as in Section V-A. When the total path length is more than 15 m or the number of false detections is larger than three, we deem the trial a failure.
The proposed method outperforms the compared methods in all of the metrics. The number of false detections is higher, with a \(0.08\) gap, compared to the direct search method. In addition, the result shows \(0.041\) higher SPL for indirect search compared to direct search. The method with prior knowledge increases the performance by \(0.32\) in SPL and \(0.178\) in SR. The success rate of indirect search is higher than that of direct search in the simulation results, but the gap of \(0.041\) is minor. Unlike the real-world experiments, the number of false detections increases when we do not adopt the prior knowledge. Since the simulation environment is more crowded and contains more overlapping clothes, the longer path length leads to more false detections.
### _Real-world Approach Experiments_
In this section, we conduct a real-world human subject pilot study to analyze the properties of the proposed approach module with the SPOT robot in Figure 4-(a). The robot approaches the person from 5 m away to closer than 0.6 m, which is inside the reachable space for the target person. We evaluate the proposed method based on the Robotic Social Attribute Scale (RoSAS) [6]. We compare the proposed approaching motion with three different methods of approach generation: learning from demonstration (LfD), knowledge distillation (KD), and a baseline condition (having the robot directly move toward the target person based on position control). The LfD-based approach only leverages the reward function described in Section IV-C, and the KD-based approach solely adapts the reward function shown in Section IV-D.
We recruited 16 people (gender: M=10, F=6, ages: 21-47) to participate in the approach experiments. We divided the participants into two groups: an experienced group containing those who have worked with robots previously (n=8) and an inexperienced group (n=8). The participants were asked to rate the robot's approach motion using 18 word descriptors as part of the RoSAS inventory, which measures user perceptions of the robot's behaviour along the dimensions of _competence_, _warmth_, and _discomfort_. Each participant reported how applicable each keyword described their perception of the robot under different robot conditions on 7-point Likert scale that ranged from 1 (not at all) to 7 (very much so). Each participant evaluated eight different trajectories balanced using a Latin
Fig. 6: The illustration of data collection for the Learning-from-Demonstration module and the resulting reward function. The arrow heading indicates the orientation of the robot and the size indicates the speed. We only plot the reward that has a maximum reward on corresponding orientation and speed.
square design: four trajectories with gazing at the robot and four trajectories without gazing at the robot.
Before the experiment, we collected data using the Spot robot's joystick controller to create the LfD module described in Section IV-C, consisting of 18 expert demonstrations from 4 different people. We carefully collected data from experts with feedback from the target person, such as if the person felt safe or threatened according to the motion. The estimated reward function from the LfD is shown in Figure 6. The estimated reward increases when the robot gets closer to a target person, where the coordinate and the angle of the target person are fixed as \((0,0,0)\). On the learned reward function with gaze condition, the robot approaches from the front of the person, and the robot obtains a larger reward by heading to the target person at a decreasing speed. Under the no-gaze condition, the robot approaches from the back, and the reward function contains a higher value for heading diagonally to a target person.
Figure 5-(c) shows the robotic motion of the proposed method and how the trajectories vary between different approaching methods. In Figure 5-(c)-(1), when the target person is facing the robot, the robot moves sideways in a short distance and goes straight to the person at a decreasing speed. When the target person is not looking at the robot, as shown in Figure 5-(c)-(2), it turns toward the direction in which the person is looking and then moves laterally to the target. Figure 5-(c)-(3) and 5-(c)-(4) illustrate the trajectories from different methods conditioned on the gaze parameter. The limitation of the knowledge distillation (KD) framework is that the robot does not slow down near the human. As the LfD-based reward function is trained from various complex expert demonstrations, it models diverse directions to approach motion, leading to frequent changes in heading directions. The proposed method tends to slowly approach from the forward direction on gaze condition and slowly from sideways in no-gaze condition. We believe that the proposed hybrid learning-based approach has the benefit of smoothing the LfD-based trajectories, reducing the number of abrupt changes in the heading of the robot.
We have conducted human-subject pilot studies using the RoSAS. To analyze our results, we conducted a two-way repeated-measures MANOVA on the RoSAS results obtained from the real-world approach experiment. As we have recruited limited subjects in this pilot, this study is not sufficiently powered to assess the effect; the results should be interpreted with caution. We observe that the approaching motion significantly affects the participants on the _warmth_ dimension of the RoSAS (\(F(7)=1.297\), \(p=0.047\), \(\eta^{2}=0.132\) with \(\alpha=0.05\)). No other RoSAS dimension exhibits any significant results. Figure 7 reports the mean of each rating with different methods. Post hoc pairwise comparisons with Bonferroni correction were not able to detect any significant effects, likely due to a large number of contrasts (seven) as well as reduced statistical power. However, on the _warmth_ scale, the proposed method appears to outperform on the gaze condition with a gap of \(0.62\) and is comparable in no-gaze condition with a minor gap of \(0.22\). The other attributes, _competent_ and _discomfort_ follow the same tendency. With curved motion and decrement in speed, the participants observed motion to be more competent and warm with less discomfort compared to straight and fast-approaching motion. As the standard deviation of _warmth_ scale on the proposed method on no-gaze condition \(0.347\) is higher than gaze condition \(0.291\), we would like to hypothesize that the preference of the participants relies more on the no-gaze condition. This result, however, would require a larger participant pool for confirmation.
Furthermore, we measure the difference between the robot-experienced group and the robot-inexperienced group. The experienced group reported \(0.46\) and \(0.23\) higher _competence_ and _warmth_ respectively on the proposed method compared to the baseline in the no-gaze condition, while the participants from the inexperienced group rated \(0.50\) and \(0.35\) lower. From the survey comments, the lateral motion from the proposed method made it hard for the inexperienced group to predict the direction of the robot while the experienced group observed that the motion is more dog-like.
### _Limitations_
Although the human-subject studies of the approaching motion did not appear to show significant effects, we would like to note that the users showed a large variance in their perspectives. As the data for the LfD framework are collected from robot-experienced experts, the proposed method reflects the preferences of the experts more than those of the non-expert group. We believe that the proposed method can be further improved by considering the preference of each user.
## VI Conclusion
In this paper, we have introduced SOCRATES, which tackles the problem of _human search and approach_ based on
Fig. 7: Average rate of the RoSAS attributes from different groups with 95% confidence bound. |
2310.06414 | Plane Constraints Aided Multi-Vehicle Cooperative Positioning Using
Factor Graph Optimization | The development of vehicle-to-vehicle (V2V) communication facilitates the
study of cooperative positioning (CP) techniques for vehicular applications.
The CP methods can improve the positioning availability and accuracy by
inter-vehicle ranging and data exchange between vehicles. However, the
inter-vehicle ranging can be easily interrupted due to many factors such as
obstacles in-between two cars. Without inter-vehicle ranging, the other
cooperative data such as vehicle positions will be wasted, leading to
performance degradation of range-based CP methods. To fully utilize the
cooperative data and mitigate the impact of inter-vehicle ranging loss, a novel
cooperative positioning method aided by plane constraints is proposed in this
paper. The positioning results received from cooperative vehicles are used to
construct the road plane for each vehicle. The plane parameters are then
introduced into CP scheme to impose constraints on positioning solutions. The
state-of-art factor graph optimization (FGO) algorithm is employed to
integrate the plane constraints with raw data of Global Navigation Satellite
Systems (GNSS) as well as inter-vehicle ranging measurements. The proposed CP
method has the ability to resist the interruptions of inter-vehicle ranging
since the plane constraints are computed by just using position-related data. A
vehicle can still benefit from the position data of cooperative vehicles even
if the inter-vehicle ranging is unavailable. The experimental results indicate
the superiority of the proposed CP method in positioning performance over the
existing methods, especially when the inter-ranging interruptions occur. | Chen Zhuang, Hongbo Zhao | 2023-10-10T08:31:33Z | http://arxiv.org/abs/2310.06414v1 | # Plane Constraints Aided Multi-Vehicle Cooperative Positioning Using Factor Graph Optimization
###### Abstract
The development of vehicle-to-vehicle (V2V) communication facilitates the study of cooperative positioning (CP) techniques for vehicular applications. The CP methods can improve the positioning availability and accuracy by inter-vehicle ranging and data exchange between vehicles. However, the inter-vehicle ranging can be easily interrupted due to many factors such as obstacles in-between two cars. Without inter-vehicle ranging, the other cooperative data such as vehicle positions will be wasted, leading to performance degradation of range-based CP methods. To fully utilize the cooperative data and mitigate the impact of inter-vehicle ranging loss, a novel cooperative positioning method aided by plane constraints is proposed in this paper. The positioning results received from cooperative vehicles are used to construct the road plane for each vehicle. The plane parameters are then introduced into CP scheme to impose constraints on positioning solutions. The state-of-art factor graph optimization (FGO) algorithm is employed to integrate the plane constraints with raw data of Global Navigation Satellite Systems (GNSS) as well as inter-vehicle ranging measurements. The proposed CP method has the ability to resist the interruptions of inter-vehicle ranging since the plane constraints are computed by just using position-related data. A vehicle can still benefit from the position data of cooperative vehicles even if the inter-vehicle ranging is unavailable. The experimental results indicate the superiority of the proposed CP method in positioning performance over the existing methods, especially when the inter-ranging interruptions occur.
Cooperative positioning (CP), inter-vehicle ranging, plane constraints, factor graph optimization (FGO).
## I Introduction
Providing accurate and reliable positioning service is essential for vehicular applications in intelligent transportation systems (ITS) [1], such as lane-level guidance, flexible lane management, formation driving and self-driving. The Global Navigation Satellite Systems (GNSS), which have become the core enabler of the vehicular positioning systems, are capable of providing users with absolute positions in open-sky areas. Unfortunately, the performance of standalone GNSS can be degraded severely due to GNSS non-line-of-sight (NLOS) delays, multipath effects and signal outages in dense urban areas [2]. To improve the positioning performance in urban scenes, many onboard sensors such as inertial navigation system (INS) [3], Lidar [4], and camera [5] are introduced into vehicular navigation system. These sensor-rich vehicles (SRVs) can get benefit from high-performance sensors by applying advanced sensor fusion methods. However, it is of great difficulty for common vehicles (CoVs) equipped with few low-cost perception sensors to obtain high-precision positions by fusing their own measurements [6]. In fact, CoVs will still coexist with SRVs in the near future. It is worth studying how to improve the positioning performance of CoVs, especially in dense urban areas.
Owing to the rapid development of vehicle-to-vehicle (V2V) communication, cooperative positioning (CP) methods have also received widespread attention in recent years [7]. In a CP system, the vehicles can exchange their measuring data with each other to improve their positioning performance [8]. Since a vehicle can conduct cooperative positioning without utilizing expensive sensors other than inter-vehicle communication devices, CP has become one of the most promising positioning methods in the case of coexistence of CoVs and SRVs. Inter-node ranging is a requisite for most of the existing CP methods. Based on the different techniques for inter-node ranging, the CP methods can be divided into two categories, namely GNSS-based CP methods and range-based CP methods [9]. The GNSS-based CP methods mainly apply the double difference (DD) on the pseudorange [10, 11] or carrier phase measurements [12, 13] to obtain inter-node distances or relative positions between two vehicles. Although GNSS DD techniques have been used by many researchers for inter-node ranging in CP methods, the availability and accuracy of double-differenced GNSS are limited in dense urban environments due to multipath effects, NLOS delays and signal outages [14, 15]. Different from GNSS-based CP methods, the range-based CP methods estimate the inter-node ranges by using radio ranging techniques such as time-of-arrival (TOA) [16], angle of arrival (AOA) [17], received signal strength (RSS) [18] and round-trip time (RTT) [19]. The range-based CP methods are more suitable for scenarios where the availability of GNSS is limited [20]. Additional radio ranging measurements could help to improve the positioning availability and decrease the positioning errors.
A variety of range-based CP methods have been proposed in the latest literatures [21-25]. Most of these works assume that the inter-node ranging is always available and accurate enough. However, the inter-node ranging can be easily affected by many factors such as NLOS delays, obstacles between two cars and limited perception range. Besides, unstable vehicular communication also degrades the performance of range-based CP methods. The excessively large transmission latency, packet collisions and limited link life time could reduce the availability and reliability of inter-node ranging. Some researchers have mentioned the problem of abnormal inter-node ranging and ranging data loss in range-based CP methods. To solve the problem of abnormal inter-node ranging, some fault-tolerant CP methods are proposed based on fault detection and exclusion (FDE) algorithm [26-28]. However, the availability of inter-node ranging will decline severely if numerous abnormal measurements are removed. To mitigate the influence of ranging data loss on CP methods, some works focus on optimizing the communication protocols and recovering the missing inter-node distances. The authors in [29] analyzed the impact of range information exchange overhead on CP methods and proposed protocol improvements which can reduce the packet collisions and improve the positioning accuracy. Nevertheless, protocol improvements can only solve the problem of ranging data loss brought by communication congestion. In [30], the missing inter-node distances are estimated by applying the singular value thresholding so that the impact of inadequate inter-node ranging can be mitigated. However, the accumulated errors are inevitable for inter-node distance estimation if a large proportion of inter-node ranges are missing. To sum up, it is still questionable whether the CP methods can benefit from the shared information in the case that most of the inter-ranging measurements are interrupted or even unavailable.
In addition to the ranging information, a positioning method can also benefit from the range-free environmental data such as geographic information. The road conditions have already been considered in some non-cooperative positioning methods [31, 32]. Considering that the urban roads are relatively smooth and the road conditions are satisfying, a vehicle can be regarded to travel on a plane most of the time. If the road plane information is available, the plane constraints can be imposed on the positioning solutions to improve the positioning performance. Fortunately, the road plane can be constructed by just using the position-related data from the cooperative vehicles that have ever traveled on this plane, as shown in Fig. 1. Therefore, it is possible to use the position solutions shared among the vehicles for CP purposes.
In this paper, we aim at using plane constraints to enhance the performance of the range-based CP method. The state-of-the-art factor graph optimization (FGO) algorithm is employed to fuse all the measurements in the CP system. The FGO algorithm is well suited to applications with multiple constraints. There have been some works that apply FGO to GNSS positioning and cooperative positioning [33-35]. The related works show the superiority of FGO over the extended Kalman filter in positioning estimation. The main contributions of this paper are listed as follows:
1) We propose to use the positioning results received from the cooperative vehicles to construct the road plane for each vehicle, and then introduce the plane parameters into the CP scheme. In this way, a vehicle can still benefit from the position data of cooperative vehicles even if the inter-node ranging is unavailable.
2) We design the plane availability detection and fault exclusion method to ensure the effectiveness of the plane constraints. The outliers can be removed from the plane fitting by using Random Sample Consensus (RANSAC) algorithm.
3) Based on the FGO algorithm, we propose a novel centralized framework to integrate the plane constraints with the GNSS pseudorange, Doppler measurements and inter-vehicle ranging measurements (if available).
The paper is organized as follows. In Section 2, we give an overview of the proposed method. In Section 3, the plane construction method is described, including the plane detection and outlier exclusion algorithm. In Section 4, the CP method based on FGO is introduced in detail. In Section 5, we present and analyze the experimental results from different aspects. The conclusions are given at the end of the paper.
## II Overview of The Proposed Method
The overview of the proposed CP method aided by plane constraints is shown in Fig. 2. It is assumed that the vehicular network is composed of a set of nodes with cardinality \(M\). For each node, a road plane where the vehicle node is traveling is constructed locally based on the cooperative positioning results calculated in the past. The positioning solutions from all the nodes in the vehicular network can be used to construct the road planes. The plane parameters may differ among the nodes because the vehicle nodes may be located on different road planes. A vehicle node collects the plane parameters together with the raw measurements, including GNSS pseudorange, GNSS Doppler and inter-node distance, to prepare the node message. All the node messages are sent to the central server for cooperative positioning. The global states
Fig. 1: Illustration of the road plane construction by just using the position-related data of vehicles. The blue spots denote the positioning results of different vehicles in multiple epochs, which will be used for fitting road plane.
of all the nodes are estimated in a centralized manner based on the state-of-the-art FGO algorithm. The positioning results of all the vehicles are collected and fed back to each vehicle for the road plane construction.
The innovation of our work is to introduce plane constraints into the CP method and then to study how to integrate the plane constraints with other measurements using FGO, so as to improve the performance of CP methods in the case of unstable inter-node ranging. On the one hand, the proposed CP method is able to benefit from the position information of cooperative vehicles even if the inter-vehicle ranging is unavailable. On the other hand, the use of plane constraints can further improve the positioning accuracy when the inter-node ranging is available. The details about the plane construction and the FGO-based CP algorithm are introduced in the following sections.
## III The Road Plane Construction Based on Positioning Results
The road plane construction is the key step of the proposed method. The procedure of the road plane construction is shown in Fig. 3. Firstly, the plane fitting based on SVD is conducted by fusing a series of positioning results from both the local and the cooperative vehicles. Then, the availability of the fitted plane is determined via the plane detection. If the plane detection is passed, the plane parameters are output for constraining the positions. Otherwise, a fault-searching strategy based on RANSAC is used to search for and exclude the outliers in the plane fitting. After that, the plane fitting and the plane detection are conducted once again. The plane is marked as unavailable if the plane detection is still not passed after the fault exclusion. The plane fitting, the plane detection and the fault exclusion method are introduced as follows.
### Plane fitting
The positioning results of the vehicles refer to the location of the GNSS antennas, which are usually placed on top of the vehicles. Since the heights of the vehicles are different, the heights of the GNSS antennas above the ground also differ from each other. Before fitting the road planes, we need to eliminate the height differences of the GNSS antennas among the vehicles. The projections of the positioning results onto the road surface are used to fit the plane instead of the positioning results themselves.
For a positioning result \(\tilde{\mathbf{p}}_{u_{m},t}\) of the node \(u_{m}\) at epoch \(t\) in Earth-Centered-Earth-Fixed (ECEF) frame, the projection of this position on the road plane can be calculated by:
\[\tilde{\mathbf{p}}^{\prime}_{u_{m},t}=\tilde{\mathbf{p}}_{u_{m},t}-\mathbf{S} \left[0;0;h_{u_{m}}\right] \tag{1}\]
where \(\mathbf{S}\) represents the transformation matrix from the East-North-Up (ENU) to the ECEF coordinate system. \(h_{u_{m}}\) is the antenna height of \(u_{m}\). We use \(\tilde{\mathbf{p}}^{\prime}_{u_{m},t}=\left[\tilde{x}^{\prime}_{u_{m},t},\tilde{y}^{\prime}_{u_{m},t},\tilde{z}^{\prime}_{u_{m},t}\right]^{T}\) to fit the planes in what follows.
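As a minimal sketch of equation (1), the projection can be written with a standard ENU-to-ECEF rotation for \(\mathbf{S}\); the function names and the use of the vehicle's geodetic latitude/longitude to build \(\mathbf{S}\) are illustrative rather than taken from the paper.

```python
import numpy as np

def enu_to_ecef_rotation(lat_rad, lon_rad):
    """Rotation matrix S whose columns are the East, North, Up unit vectors
    expressed in the ECEF frame at the given geodetic latitude/longitude."""
    sl, cl = np.sin(lat_rad), np.cos(lat_rad)
    so, co = np.sin(lon_rad), np.cos(lon_rad)
    east  = np.array([-so,       co,      0.0])
    north = np.array([-sl * co, -sl * so, cl ])
    up    = np.array([ cl * co,  cl * so, sl ])
    return np.column_stack((east, north, up))

def project_to_road(p_ecef, antenna_height, lat_rad, lon_rad):
    """Equation (1): subtract the antenna height along the local Up direction
    so that the point lies (approximately) on the road surface."""
    S = enu_to_ecef_rotation(lat_rad, lon_rad)
    return p_ecef - S @ np.array([0.0, 0.0, antenna_height])
```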
Fig. 3: The structure of road plane construction based on positioning results.
Fig. 2: The framework of the proposed cooperative positioning method aided by plane constraints.
The positioning results of all the vehicles in the past epochs are collected together. As introduced in the previous section, a total of \(M\) nodes is assumed to be considered. The set of the positioning results from the beginning to epoch (\(t\)-1) for all the vehicles is formulated as follows:
\[pos^{t}=\left\{\tilde{\mathbf{p}}^{\prime}_{u_{1},0},\tilde{\mathbf{p}}^{\prime}_{u_{1},1},\dots,\tilde{\mathbf{p}}^{\prime}_{u_{1},t-1},\dots,\tilde{\mathbf{p}}^{\prime}_{u_{M},0},\tilde{\mathbf{p}}^{\prime}_{u_{M},1},\dots,\tilde{\mathbf{p}}^{\prime}_{u_{M},t-1}\right\} \tag{2}\]
By default, we will fit a plane for each vehicle at each measuring epoch. For a specific vehicle, we select the positioning results close to this vehicle to fit the plane. The point set used to fit the plane for \(u_{m}\) at epoch \(t\) is defined as:
\[Set_{fitting}^{u_{m},t}=\left\{\tilde{\mathbf{p}}^{\prime}\in pos^{t}\,\middle|\,\left\|\tilde{\mathbf{p}}^{\prime}-\tilde{\mathbf{p}}_{u_{m},t}^{SPP}\right\|<range_{fitting}\right\}=\left\{\tilde{\mathbf{p}}_{fitting,1}^{u_{m},t},\tilde{\mathbf{p}}_{fitting,2}^{u_{m},t},\dots,\tilde{\mathbf{p}}_{fitting,N_{f}}^{u_{m},t}\right\} \tag{3}\]
where \(\tilde{\mathbf{p}}_{u_{m},t}^{SPP}\) is the raw position calculated by GNSS single point positioning (SPP). \(range_{fitting}\) denotes the range for selecting the fitting points, which is set to 50 meters in our research. \(\tilde{\mathbf{p}}_{fitting,n}^{u_{m},t}=\left[\tilde{x}_{fitting,n}^{u_{m},t},\tilde{y}_{fitting,n}^{u_{m},t},\tilde{z}_{fitting,n}^{u_{m},t}\right]^{T}\) represents a fitting point in \(Set_{fitting}^{u_{m},t}\). \(N_{f}\) denotes the initial number of fitting points in this set. It is noted that the maximum value of \(N_{f}\) is set to 20 in this paper so as to reduce the computational burden of plane fitting. If there are more than 20 fitting points in \(Set_{fitting}^{u_{m},t}\), fitting points are removed randomly from the set until the number of fitting points equals 20.
In this paper, the least squares (LS) method is employed to fit the road plane \(\boldsymbol{\alpha}\) for \(u_{m}\) at epoch \(t\). The equation of this plane can be expressed as:
\[\boldsymbol{\alpha}:P_{u_{m},t}^{A}x+P_{u_{m},t}^{B}y+P_{u_{m},t}^{C}z+P_{u_{m },t}^{D}=0 \tag{4}\]
where \(P_{u_{m},t}^{A}\), \(P_{u_{m},t}^{B}\), \(P_{u_{m},t}^{C}\) and \(P_{u_{m},t}^{D}\) are the plane parameters to be estimated. Here, we impose a constraint on the normal vector of the fitted plane as follows:
\[\left(P_{u_{m},t}^{A}\right)^{2}+\left(P_{u_{m},t}^{B}\right)^{2}+\left(P_{u _{m},t}^{C}\right)^{2}=1 \tag{5}\]
We calculate the center of the fitting points in \(Set_{fitting}^{u_{m},t}\) by:
\[\tilde{\mathbf{p}}_{fitting}^{u_{m},t}=\left[\frac{1}{N_{f}}\sum_{n=1}^{N_{f}}\tilde{x}_{fitting,n}^{u_{m},t},\;\frac{1}{N_{f}}\sum_{n=1}^{N_{f}}\tilde{y}_{fitting,n}^{u_{m},t},\;\frac{1}{N_{f}}\sum_{n=1}^{N_{f}}\tilde{z}_{fitting,n}^{u_{m},t}\right]^{T}=\left[\tilde{x}_{fitting}^{u_{m},t},\tilde{y}_{fitting}^{u_{m},t},\tilde{z}_{fitting}^{u_{m},t}\right]^{T} \tag{6}\]
where \(\mathbf{\tilde{p}}_{fitting}^{u_{m},t}\) should also be on the plane. Then, we can derive the following equations:
\[P_{u_{m},t}^{A}\times\left(\tilde{x}_{fitting,n}^{u_{m},t}-\tilde{x}_{fitting}^{u_{m},t}\right)+P_{u_{m},t}^{B}\times\left(\tilde{y}_{fitting,n}^{u_{m},t}-\tilde{y}_{fitting}^{u_{m},t}\right)+P_{u_{m},t}^{C}\times\left(\tilde{z}_{fitting,n}^{u_{m},t}-\tilde{z}_{fitting}^{u_{m},t}\right)=0 \tag{7}\]
Considering that there are \(N_{f}\) fitting points, we extend equation (7) as follows:
\[\mathbf{A}\mathbf{X}_{P}^{u_{m},t}=\mathbf{0} \tag{8}\] \[\text{with}\,\mathbf{A}= \left[\begin{array}{c}\left(\mathbf{\tilde{p}}_{fitting,1}^{u_{m},t }\right)^{T}-\left(\mathbf{\tilde{p}}_{fitting}^{u_{m},t}\right)^{T}\\ \left(\mathbf{\tilde{p}}_{fitting,2}^{u_{m},t}\right)^{T}-\left(\mathbf{\tilde{p }}_{fitting}^{u_{m},t}\right)^{T}\\ \vdots\\ \left(\mathbf{\tilde{p}}_{fitting,N_{f}}^{u_{m},t}\right)^{T}-\left(\mathbf{ \tilde{p}}_{fitting}^{u_{m},t}\right)^{T}\end{array}\right]_{N_{f}\times 3}\]
where \(\mathbf{X}_{P}^{u_{m},t}=\left[P_{u_{m},t}^{A},P_{u_{m},t}^{B},P_{u_{m},t}^{C} \right]^{T}\) is the normal vector to be estimated. We utilize the singular value decomposition (SVD) method to calculate the normal vector. Here, \(\mathbf{A}\) can be decomposed into:
\[\mathbf{A}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{T} \tag{9}\]
where \(\mathbf{U}\) and \(\mathbf{V}\) are unitary matrices. \(\mathbf{\Sigma}\) is a diagonal matrix whose diagonal elements are the singular values. It is assumed that the last diagonal element is the minimum singular value. The right singular vector associated with the minimum singular value corresponds to the normal vector, which can be formulated as follows:
\[\begin{cases}\tilde{P}_{u_{m},t}^{A}=\mathbf{V}\big{(}1,3\big{)}\\ \tilde{P}_{u_{m},t}^{B}=\mathbf{V}\big{(}2,3\big{)}\\ \tilde{P}_{u_{m},t}^{C}=\mathbf{V}\big{(}3,3\big{)}\end{cases} \tag{10}\]
where \(\left[\tilde{P}_{u_{m},t}^{A},\tilde{P}_{u_{m},t}^{B},\tilde{P}_{u_{m},t}^{C} \right]^{T}\) is the estimated normal vector. The remaining term of the plane equation can be calculated by:
\[\tilde{P}_{u_{m},t}^{D}=-\left[\tilde{P}_{u_{m},t}^{A},\tilde{P}_{u_{m},t}^{B}, \tilde{P}_{u_{m},t}^{C}\right]^{T}\cdot\mathbf{\tilde{p}}_{fitting}^{u_{m},t} \tag{11}\]
In order to use the planes to constrain the positioning results, it is necessary to translate the road planes to the location of GNSS antennas. For vehicle \(u_{m}\), a plane parallel to the fitted plane \(\boldsymbol{\alpha}\) can be expressed as:
\[\boldsymbol{\beta}:\tilde{P}_{u_{m},t}^{A}x+\tilde{P}_{u_{m},t}^{B}y+\tilde{P}_{u_{m},t}^{C}z+\tilde{P}_{u_{m},t}^{\prime D}=0 \tag{12}\]
where,
\[\tilde{P}_{u_{m},t}^{\prime D}=\begin{cases}\tilde{P}_{u_{m},t}^{D}+h_{u_{m}},&\tilde{P}_{u_{m},t}^{D}\geq 0\\ \tilde{P}_{u_{m},t}^{D}-h_{u_{m}},&\tilde{P}_{u_{m},t}^{D}<0\end{cases} \tag{13}\]
The distance between these two planes is equal to the antenna height of the target vehicle. It is noted that we will employ the plane \(\boldsymbol{\beta}\) instead of the plane \(\boldsymbol{\alpha}\) to constrain the positioning results.
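The fit in equations (6)-(13) can be prototyped in a few lines of NumPy; the sketch below follows the centering-plus-SVD recipe above, with the sign rule for the antenna-height translation in equation (13) taken as an assumption.

```python
import numpy as np

def fit_plane_svd(points, antenna_height=0.0):
    """Least-squares plane fit in the spirit of equations (6)-(13).

    points : (N_f, 3) array of road-surface positions.
    Returns (A, B, C, D') with A^2 + B^2 + C^2 = 1, where D' is the constant
    term of the plane translated back to the antenna height.
    """
    centroid = points.mean(axis=0)                  # eq. (6)
    A_mat = points - centroid                       # rows of eq. (8)
    _, _, Vt = np.linalg.svd(A_mat, full_matrices=False)
    normal = Vt[-1]                                 # eq. (10): right singular vector
                                                    # of the smallest singular value
    D = -normal @ centroid                          # eq. (11)
    # eq. (13): translate the plane by the antenna height (assumed sign rule)
    D_prime = D + antenna_height if D >= 0 else D - antenna_height
    return normal[0], normal[1], normal[2], D_prime
```

Note that the SVD only fixes the normal up to sign; in practice one would pick the sign consistently (e.g. pointing away from the ground) before forming the unit normal used later in the plane-constraint factor.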
### 3.2 Plane Availability Detection and Fault Exclusion
Not all the fitted planes can be used for constraining the positioning solutions. There are some requirements that should be met before using the plane constraints. Firstly, a flat road without curved surface is an essential prerequisite for applying plane constraints. If a vehicle travels on a road with curved surface, it is impossible to fit a plane where the vehicle is truly located. Secondly, the positioning results used to fit the planes should be accurate enough. Some positioning solutions with excessively large errors (i.e. outliers) can lead to an inaccurate
fitted plane. The fitted plane may be far away from the true road plane in this case. Therefore, it is necessary to determine whether the plane constraints are effective or not by applying the plane availability detection. A fault exclusion procedure is also needed to remove the potential outliers from plane fitting.
```
Algorithm 1: RANSAC-based fault exclusion for plane fitting
Input: the set Set_fitting^{u_m,t} with N_f fitting points, the maximum
       iteration number I_max, the threshold for selecting inliers T_f.
1.  Initialize a point set Set_RANSAC^{u_m,t} <- Ø and the number of
    elements in this set N_f' <- 0;
2.  for iteration <- 1 to I_max do
3.      Initialize the set of inliers Set_inlier <- Ø and the number of
        inliers N_in <- 0;
4.      Compute a plane α_1 based on three fitting points selected randomly
        from Set_fitting^{u_m,t};
5.      for n <- 1 to N_f do
6.          D_n <- the distance between the fitting point
            p_fitting,n^{u_m,t} and the plane α_1;
7.          if D_n < T_f then add p_fitting,n^{u_m,t} to Set_inlier and
            N_in <- N_in + 1;
8.      end for
9.      if N_in > N_f' then Set_RANSAC^{u_m,t} <- Set_inlier and
        N_f' <- N_in;
10. end for
Output: the consensus set Set_RANSAC^{u_m,t}, which is used to repeat the
        plane fitting and the plane detection.
```
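A compact NumPy version of the RANSAC loop in Algorithm 1 might look as follows; `max_iter` and `threshold` correspond to I_max and T_f, and their default values here are illustrative only.

```python
import numpy as np

def plane_from_three(pts):
    """Plane n·x + d = 0 through three points; (None, None) if nearly collinear."""
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None, None
    n = n / norm
    return n, -n @ pts[0]

def ransac_consensus(points, max_iter=100, threshold=0.2, rng=None):
    """Largest consensus set of fitting points: points whose distance D_n to a
    randomly hypothesised plane is below the threshold."""
    rng = np.random.default_rng() if rng is None else rng
    best = np.zeros(len(points), dtype=bool)
    for _ in range(max_iter):
        idx = rng.choice(len(points), size=3, replace=False)
        n, d = plane_from_three(points[idx])
        if n is None:
            continue
        dist = np.abs(points @ n + d)      # point-to-plane distances D_n
        inliers = dist < threshold
        if inliers.sum() > best.sum():
            best = inliers
    return points[best]                     # refit, e.g. with fit_plane_svd()
```

The surviving points are then passed back through the least-squares fit and the plane detection of Fig. 3.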
## IV Cooperative Positioning Algorithm Based on Factor Graph Optimization
The structure of the proposed factor graph for solving the cooperative positioning is shown in Fig. 4. The dashed boxes represent physical nodes, i.e. vehicles. Time is simply denoted by the discrete epoch index \(t\). Referring to a particular node \(u_{m}\), the state of the node at epoch \(t\) is defined as:
\[\mathbf{x}_{u_{m},t}=\left[\mathbf{p}_{u_{m},t}^{T},b_{u_{m},t}\right]^{T} \tag{17}\]
where \(\mathbf{p}_{u_{m},t}=\left[x_{u_{m},t},y_{u_{m},t},z_{u_{m},t}\right]^{T}\) is the position of the node in ECEF frame. \(b_{u_{m},t}\) represents the clock bias of the node in distance units.
Within the factor graph of a single node, the purple and red rectangles represent GNSS and the plane constraint, respectively. It is noted that the plane data obtained from Section III will be used to calculate the plane constraint factors in this section. The blue rectangles denote the velocity factors which connect two consecutive epochs inside a node. The green rectangles connecting two nodes denote the inter-vehicle ranging factors. If the inter-ranging measurements are available, the inter-vehicle ranging factors will also be considered in the FGO-based positioning algorithm.
The optimal positioning solution of each node can be obtained by finding a configuration of node states that matches all the factors as much as possible. To reduce the computation load, only the factors inside a sliding window will be utilized in FGO. The rest of this section presents the details about how to calculate the factors mentioned above and how to optimize the states by integrating these factors.
### GNSS Pseudorange Factor
For a particular node \(u_{m}\), the GNSS pseudorange measurement between the node and the satellite \(s\) can be expressed as:
\[\rho_{u_{m},t}^{s}=r_{u_{m},t}^{s}+b_{u_{m},t}-b_{t}^{s}+I_{t}^{s}+T_{t}^{s}+ \varepsilon_{\rho,u_{m}}^{s} \tag{18}\]
where \(r_{u_{m},t}^{s}=\left\|\mathbf{p}_{u_{m},t}-\mathbf{p}_{s,t}\right\|\) represents the geometric distance between the node \(u_{m}\) and the satellite \(s\). \(\mathbf{p}_{s,t}=\left[x_{s,t},y_{s,t},z_{s,t}\right]^{T}\) is the position of the satellite \(s\) at epoch \(t\) in the ECEF frame. The symbol \(\left\|\cdot\right\|\) denotes the Euclidean norm. \(b_{t}^{s}\) is the clock bias of the satellite, while \(I_{t}^{s}\) and \(T_{t}^{s}\) denote the ionospheric and tropospheric delays, respectively. \(\varepsilon_{\rho,u_{m}}^{s}\) represents the pseudorange measurement noise.
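A sketch of the corresponding pseudorange residual is shown below; the error function itself is not reproduced in the text above, so the form here is an assumption consistent with equation (18), with the atmospheric delays assumed to be corrected by standard models beforehand.

```python
import numpy as np

def pseudorange_residual(rho_meas, p_u, b_u, p_s, b_s, iono, tropo):
    """Measured pseudorange minus the model of equation (18)."""
    predicted = (np.linalg.norm(np.asarray(p_u) - np.asarray(p_s))
                 + b_u - b_s + iono + tropo)
    return rho_meas - predicted
```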
### Plane Constraint Factor

The plane parameters obtained in Section III are treated as additional observations for each vehicle. The observation model for the plane constraint factor is expressed as:

\[h_{t,PC}^{u_{m}}\left(\mathbf{p}_{u_{m},t},\mathbf{u}_{u_{m},t}^{PC}\right)=\mathbf{p}_{u_{m},t}\cdot\mathbf{u}_{u_{m},t}^{PC}=-\tilde{P}_{u_{m},t}^{A}\times x_{u_{m},t}-\tilde{P}_{u_{m},t}^{B}\times y_{u_{m},t}-\tilde{P}_{u_{m},t}^{C}\times z_{u_{m},t}\]
where \(\mathbf{u}_{u_{m},t}^{PC}=\left[-\tilde{P}_{u_{m},t}^{A},-\tilde{P}_{u_{m},t}^{B},-\tilde{P}_{u_{m},t}^{C}\right]^{T}\) is the unit normal vector of the fitted plane. \(\tilde{P}_{u_{m},t}^{A}\), \(\tilde{P}_{u_{m},t}^{B}\), \(\tilde{P}_{u_{m},t}^{C}\) and \(\tilde{P}_{u_{m},t}^{\prime D}\) are estimated in Section III. \(\varepsilon_{PC}\) is the residual of the plane fitting. Therefore, we calculate the error function for a plane constraint factor as follows:
\[f_{t,PC}^{u_{m}}=\tilde{P}_{u_{m},t}^{\prime D}-h_{t,PC}^{u_{m}}\left(\mathbf{p}_{u_{m},t},\mathbf{u}_{u_{m},t}^{PC}\right) \tag{24}\]
By default, the plane constraint factors will be calculated for each vehicle once a new epoch is added to the factor graph. To reduce the computational burden on vehicles, we can also use the same plane to constrain the positioning solutions as long as the variation of plane parameters within the sliding window is small enough.
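As a sketch, the error function of equation (24) reduces to a one-line residual once the translated plane parameters are available (the argument names are illustrative):

```python
def plane_constraint_residual(p, plane):
    """f_PC = D' - h(p, u) with h(p, u) = -A*x - B*y - C*z, equations (23)-(24)."""
    A, B, C, D_prime = plane
    x, y, z = p
    return D_prime - (-A * x - B * y - C * z)
```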
### Inter-vehicle Ranging Factor
The inter-vehicle ranging can be performed by the round-trip-time (RTT) method, which avoids the clock bias between two nodes. For the node \(u_{m}\) and the node \(u_{n}\), the inter-vehicle ranging model can be expressed as:
\[d_{t}^{u_{m},u_{n}}=h_{t,R}^{u_{m},u_{n}}\left(\mathbf{p}_{u_{m},t},\mathbf{p}_{u_{n},t}\right)+\varepsilon_{d} \tag{25}\]
\[h_{t,R}^{u_{m},u_{n}}\left(\mathbf{p}_{u_{m},t},\mathbf{p}_{u_{n},t}\right)=\left\|\mathbf{p}_{u_{m},t}-\mathbf{p}_{u_{n},t}\right\| \tag{26}\]
where \(d_{t}^{u_{m},u_{n}}\) denotes the inter-node ranging measurement. \(\varepsilon_{d}\) is the additive white Gaussian noise variable for inter-node ranging. Thus, the error function for an inter-vehicle ranging factor can be obtained as follows:
\[f_{t,R}^{u_{m},u_{n}}=d_{t}^{u_{m},u_{n}}-h_{t,R}^{u_{m},u_{n}}\left(\mathbf{p}_{u_{m},t},\mathbf{p}_{u_{n},t}\right) \tag{27}\]
It is noted that the inter-vehicle ranging factor would be excluded from the objective function of the FGO if the inter-node ranging is unavailable.
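The corresponding inter-vehicle ranging residual of equation (27) is equally simple; the sketch below assumes the UWB range and both position estimates are already expressed in the same ECEF frame.

```python
import numpy as np

def ranging_residual(p_m, p_n, d_meas):
    """f_R = measured UWB range minus the geometric distance, equation (27)."""
    return d_meas - np.linalg.norm(np.asarray(p_m) - np.asarray(p_n))
```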
### Velocity Factor
The velocity factors connect two consecutive states of a node. Since GNSS Doppler measurements are less sensitive to the multipath effects, we use Doppler measurements to estimate the velocity of a vehicle node. The relationship between a GNSS Doppler measurement and the velocity can be expressed as follows:
\[\begin{split}-\lambda f_{d,u_{m},t}^{s}&=\left(\mathbf{v}_{s,t}-\mathbf{v}_{u_{m},t}\right)\cdot\mathbf{e}_{u_{m},t}^{s}+\delta f_{u_{m},t}-\delta f_{t}^{s}+\varepsilon_{f_{d},u_{m}}^{s}\\ \Rightarrow-\lambda f_{d,u_{m},t}^{s}-\mathbf{v}_{s,t}\cdot\mathbf{e}_{u_{m},t}^{s}+\delta f_{t}^{s}&=-\mathbf{v}_{u_{m},t}\cdot\mathbf{e}_{u_{m},t}^{s}+\delta f_{u_{m},t}+\varepsilon_{f_{d},u_{m}}^{s}\end{split} \tag{28}\]
where \(f_{d,u_{m},t}^{s}\) is the GNSS Doppler measurement for satellite \(s\) and \(\lambda\) represents the wavelength of the signal. \(\mathbf{v}_{u_{m},t}=\left[v_{x,t}^{u_{m}},v_{y,t}^{u_{m}},v_{z,t}^{u_{m}}\right]^{T}\) is the velocity of the node and \(\delta f_{u_{m},t}\) stands for the clock drift of the node, both of which are unknowns in the velocity estimation. \(\mathbf{v}_{s,t}\) denotes the velocity of the satellite and \(\delta f_{t}^{s}\) is the clock drift of the satellite, which can be calculated from the broadcast ephemeris. \(\varepsilon_{f_{d},u_{m}}^{s}\) represents the residual errors. \(\mathbf{e}_{u_{m},t}^{s}\) is the unit direction vector from the node to the satellite, which is formulated as:
\[\mathbf{e}_{u_{m},t}^{s}=\left[e_{x,u_{m},t}^{s},e_{y,u_{m},t}^{s},e_{z,u_{m},t}^{s}\right]^{T}=\left[\frac{x_{s,t}-\hat{x}_{u_{m},t}^{SPP}}{\left\|\tilde{\mathbf{p}}_{u_{m},t}^{SPP}-\mathbf{p}_{s,t}\right\|},\frac{y_{s,t}-\hat{y}_{u_{m},t}^{SPP}}{\left\|\tilde{\mathbf{p}}_{u_{m},t}^{SPP}-\mathbf{p}_{s,t}\right\|},\frac{z_{s,t}-\hat{z}_{u_{m},t}^{SPP}}{\left\|\tilde{\mathbf{p}}_{u_{m},t}^{SPP}-\mathbf{p}_{s,t}\right\|}\right]^{T} \tag{29}\]
where \(\tilde{\mathbf{p}}_{u_{m},t}^{SPP}=\left[\hat{x}_{u_{m},t}^{SPP},\hat{y}_{u_{m },t}^{SPP},\hat{z}_{u_{m},t}^{SPP}\right]^{T}\) is the raw position calculated by GNSS SPP.
According to equation (28), we can obtain the observation model for Doppler measurements as follows:
\[\mathbf{y}_{d,u_{m},t}=\mathbf{G}\left[\begin{array}{c}v_{x,t}^{u_{m}}\\ v_{y,t}^{u_{m}}\\ v_{z,t}^{u_{m}}\\ \mathbf{\delta}f_{u_{m},t}\end{array}\right]+\mathbf{\varepsilon}_{f_{d},u_{m}} \tag{30}\]
where,
\[\mathbf{y}_{d,u_{m},t}=\left[y_{d,u_{m},t}^{s_{1}},y_{d,u_{m},t}^{s_{2}},y_{d,u_{m},t}^{s_{3}},\ldots\right]^{T} \tag{31}\]
\[y_{d,u_{m},t}^{s}=-\lambda f_{d,u_{m},t}^{s}-\mathbf{v}_{s,t}\cdot\mathbf{e}_{u_{m},t}^{s}+\delta f_{t}^{s} \tag{32}\]
\[\mathbf{G}=\left[\begin{array}{cccc}-e_{x,u_{m},t}^{s_{1}}&-e_{y,u_{m},t}^{s_{1}}&-e_{z,u_{m},t}^{s_{1}}&1\\ -e_{x,u_{m},t}^{s_{2}}&-e_{y,u_{m},t}^{s_{2}}&-e_{z,u_{m},t}^{s_{2}}&1\\ \vdots&\vdots&\vdots&\vdots\end{array}\right] \tag{33}\]
Therefore, the velocity of the vehicle node can be estimated by using the least squares (LS) method as follows:
\[\left[\begin{array}{c}v_{x,t}^{u_{m}}\\ v_{y,t}^{u_{m}}\\ v_{z,t}^{u_{m}}\\ \mathbf{\delta}f_{u_{m},t}\end{array}\right]= \left(\mathbf{G}^{T}\mathbf{G}\right)^{-1}\mathbf{G}^{T}\mathbf{y}_{d,u_{m},t} \tag{34}\]
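Equations (30)-(34) amount to a small linear least-squares problem per epoch; a sketch with illustrative argument names is shown below.

```python
import numpy as np

def doppler_velocity_ls(wavelength, dopplers, sat_vels, sat_clk_drifts, unit_vecs):
    """Least-squares velocity estimate in the spirit of equations (30)-(34).

    dopplers       : (K,) Doppler measurements f_d for K satellites.
    sat_vels       : (K, 3) satellite velocities.
    sat_clk_drifts : (K,) satellite clock drifts (distance/s).
    unit_vecs      : (K, 3) unit vectors e from the receiver to each satellite.
    Returns (v_xyz, receiver_clock_drift).
    """
    # y^s = -lambda * f_d^s - v_s . e^s + delta_f^s       (eq. 32)
    y = (-wavelength * dopplers
         - np.einsum('ij,ij->i', sat_vels, unit_vecs)
         + sat_clk_drifts)
    # G = [-e_x, -e_y, -e_z, 1] for each satellite          (eq. 33)
    G = np.hstack((-unit_vecs, np.ones((len(dopplers), 1))))
    sol, *_ = np.linalg.lstsq(G, y, rcond=None)             # eq. (34)
    return sol[:3], sol[3]
```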
Then, the observation model for the velocity is formulated as:
\[\mathbf{v}_{u_{m},t}=h_{t,T}^{u_{m}}\left(\mathbf{p}_{u_{m},t},\mathbf{p}_{u_{m},t-1}\right)+\boldsymbol{\varepsilon}_{V,u_{m}}=\left[\begin{array}{c}\left(x_{u_{m},t}-x_{u_{m},t-1}\right)/\Delta t\\ \left(y_{u_{m},t}-y_{u_{m},t-1}\right)/\Delta t\\ \left(z_{u_{m},t}-z_{u_{m},t-1}\right)/\Delta t\end{array}\right]+\boldsymbol{\varepsilon}_{V,u_{m}} \tag{35}\]
where \(\boldsymbol{\varepsilon}_{V,u_{m}}\) represents the noise vector of the velocity measurements, and \(\Delta t\) is the time difference between two
consecutive epochs. The error function for a velocity factor is written as:
\[\begin{split} f_{t,T}^{u_{m}}=\mathbf{v}_{u_{m},t}-h_{t,T}^{u_{m}} \left(\mathbf{p}_{u_{m},t},\mathbf{p}_{u_{m},t-1}\right)\end{split} \tag{36}\]
where the velocity is given by the estimation (34).
### 4.5 Pseudorange/Plane constraint/Inter-Ranging/Velocity Integration by FGO
After the factors are derived, we can use them to estimate the states by solving the objective function. We formulate the objective function of the proposed FGO as follows:
\[\begin{split}\chi^{*}=\arg\min_{\chi}&\sum_{m,s,t}\left\|f_{t,D}^{u_{m},s}\right\|_{\Sigma_{t,D}^{u_{m},s}}^{2}+\sum_{m,t}\left\|f_{t,PC}^{u_{m}}\right\|_{\Sigma_{t,P}^{u_{m}}}^{2}\\ &+\sum_{m,n,t}\left\|f_{t,R}^{u_{m},u_{n}}\right\|_{\Sigma_{t,R}^{u_{m},u_{n}}}^{2}+\sum_{m,t}\left\|f_{t,T}^{u_{m}}\right\|_{\Sigma_{t,T}^{u_{m}}}^{2}\end{split} \tag{37}\]
where \(\chi=\left[\mathbf{X}_{u_{1}}^{T},\mathbf{X}_{u_{2}}^{T},\ldots,\mathbf{X}_{u_{M}}^{T}\right]^{T}\) is the global state of the \(M\) nodes. \(\mathbf{X}_{u_{m}}=\left[\mathbf{x}_{u_{m},t_{0}}^{T},\mathbf{x}_{u_{m},t_{0}+1}^{T},\ldots,\mathbf{x}_{u_{m},t_{0}+l-1}^{T}\right]^{T}\) is the set of states of the node \(u_{m}\) from the epoch \(t_{0}\) to the current epoch \((t_{0}+l-1)\). \(l\) denotes the total number of epochs considered in the FGO, i.e. the window length. \(\chi^{*}\) is the optimal estimate of the global state. \(\Sigma_{t,D}^{u_{m},s}\) represents the noise covariance matrix of the pseudorange factors, which is calculated based on the carrier-to-noise ratio (CNR) of the signals [37]. The noise covariance matrix of the plane constraint factors is given as \(\Sigma_{t,P}^{u_{m}}\), which is determined by the size of the residual in the plane fitting. \(\Sigma_{t,R}^{u_{m},u_{n}}\) denotes the noise covariance matrix of the inter-node ranging factors, which depends on the inter-ranging device. Considering that UWB sensors are used in our research, the standard deviation of the inter-node ranging measurement is set to 0.3 m according to the product data sheet [38]. \(\Sigma_{t,T}^{u_{m}}\) is the noise covariance matrix of the velocity factors associated with the Doppler velocity measurements. \(\Sigma_{t,T}^{u_{m}}\) is set as \(\sigma_{V}\mathbf{I}_{3\times 3}\), where \(\sigma_{V}\) is a scaling factor set to 0.6 according to [35].
Finally, we employ the Levenberg-Marquardt optimization method to solve the least squares optimization problem. Since the update time will climb linearly with increasing epochs, we delete the old states and their associated factors so as to reduce the computational load. The states and the factors are pruned depending on their age. All the states and factors that are older than 5 measuring cycles will be removed from the FGO.
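For prototyping, the whole sliding-window problem of equation (37) can be handed to an off-the-shelf nonlinear least-squares solver; the sketch below simply stacks whitened factor residuals and calls SciPy's Levenberg-Marquardt implementation. The factor construction and the sliding-window bookkeeping (state pruning after 5 cycles) are assumed to be handled elsewhere.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_window(x0, factors):
    """factors: list of (residual_fn, weight); residual_fn maps the stacked
    window state [x, y, z, b] per node and epoch to one or more residuals
    (pseudorange, plane constraint, inter-ranging, velocity), and weight is
    roughly the inverse measurement standard deviation (whitening)."""
    def stacked(x):
        return np.concatenate(
            [np.atleast_1d(weight * np.asarray(fn(x), dtype=float))
             for fn, weight in factors])

    # 'lm' is SciPy's Levenberg-Marquardt; it needs at least as many residuals
    # as unknowns, which holds once several factor types are stacked.
    return least_squares(stacked, x0, method='lm').x
```

In a production system a dedicated factor-graph library would typically replace this generic solver, but the sketch is enough to reproduce the weighted structure of equation (37).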
## V Experiments and performance analysis
The proposed method is verified via the real dataset collected in the urban areas. Several experiments were carried out in Zhongguancun science park, Beijing, China. The test routes and the surrounding environments are depicted in Fig. 5. In order to apply GNSS differential corrections to the positioning results, a reference station was set in an open-sky area adjacent to the test routes. The Network Real Time Kinematic (NRTK) technique is employed to calculate the absolute position of the reference station.
### 5.1 Experiment setup
As depicted in Fig. 6, four vehicles were involved in the experiments, referred to as u\({}_{1}\), u\({}_{2}\), u\({}_{3}\) and u\({}_{4}\) in this section. All the vehicles traveled along the same routes during the experiments. There are two lanes in one direction and the vehicles traveled in different lanes most of the time. The vehicle u1 was equipped with a GNSS receiver named M300 and a high-performance navigation system named NovAtel SPAN-ISA-100C. The other three vehicles were equipped with the same devices, including a GNSS receiver named OEM628 and a high-performance navigation system named NPOS220, as shown in Fig. 7. The GNSS measurements collected by M300 and OEM628 are used for algorithm evaluation, including pseudorange and Doppler measurements of GPS L1/L2 and Beidou B1/B2. The sampling rate is 1Hz for GNSS measurements. The satellite visibility of vehicle u\({}_{1}\) is shown in Fig. 8. It can be seen that the number of visible satellites decreases obviously in dense urban areas. Benefiting from geodetic-grade GNSS receivers with multiple antennas and high-precision inertial measuring units (IMU), the high-performance navigation systems can provide the reference trajectory for each vehicle. By the post-processing of the NovAtel Inertial Explorer software, the root mean squared error (RMSE) of the ground truth can reach 0.01m for horizontal direction and 0.02m for vertical direction.
Fig. 5: The test routes and the driving situations.
All the vehicles were also equipped with Ultra-Wide Band (UWB) sensors to obtain the inter-node ranging measurements. Both GNSS receiver and UWB sensor were connected to a laptop which was used for data storage.
The GNSS receivers can provide UWB sensors with precise GNSS time to synchronize GNSS and UWB measurements. The sampling rate is also 1Hz for UWB measurements. To reduce the system biases induced by the position difference between the GNSS antenna and the UWB antenna, all the UWB sensors were placed nearby the GNSS antennas. The maximum ranging distance for these UWB sensors is 500m. Since the vehicles kept close to each other during the experiments, the UWB measurements were available almost at all time. To simulate the losses of inter-node ranging measurements, we remove a proportion of the UWB measurements artificially in the following algorithm analysis.
To obtain more reliable positioning results in urban areas, it is necessary to perform a fault detection and exclusion procedure before using the measurements. The cooperative integrity monitoring (CIM) method proposed in [28] is conducted to remove the GNSS faults and the UWB faults simultaneously. Most of the faulty measurements with large errors can be excluded by using CIM.
### 5.2 Comparison of Positioning Performance Using Different Methods
This section aims to demonstrate the effectiveness of the proposed CP method and compare the performance of the proposed method with the existing methods. Five control groups are considered in this section, which are presented as follows:
* Non-cooperative positioning (Non-CP): The GNSS standalone positioning method based on extended Kalman filtering (EKF) is chosen as a non-CP method, which is compared with CP methods to demonstrate the benefit of cooperation. The pseudorange and Doppler measurements are used to estimate the states. A widely-used software named RTKLIB is employed to calculate GNSS standalone positioning results. The details about RTKLIB can be found in [39].
* GNSS-based CP method: The GNSS-based CP method applies the double difference (DD) on the shared pseudoranges to obtain relative positions between two vehicles, which are then combined with absolute positions to calculate final CP solutions. Here, the EKF is used for CP tight integration. The details about the GNSS-based CP method can be found in [10].
* Generalized approximate message passing (GAMP): This method can make full use of GNSS pseudoranges, Doppler shifts, inter-node ranging measurements and
Fig. 8: The satellite visibility of vehicle \(u_{1}\) during the tests for (a) GPS and (b) Beidou system.
Fig. 6: The test vehicles and the devices equipped on vehicle \(u_{1}\).
Fig. 7: The devices equipped on vehicle \(u_{2}\). The same devices are equipped on the vehicles \(u_{3}\) and \(u_{4}\).
measurements of other sensors inside the CP system by applying GAMP. This method is based on belief propagation that employs the marginal distribution to estimate the states, which shows superior positioning performance than traditional Kalman filtering. Here, we only consider GNSS measurements and inter-node ranging measurements so as to make a fair comparison with other range-based CP methods. The details about GAMP-based CP method can be found in [19].
* Multi-Agent collaborative integration (MCI): This CP method can integrate the measurements from GNSS, inter-ranging sensors and other sensors using FGO algorithm. This method is chosen because it also applies FGO algorithm to the cooperative positioning. We only consider GNSS measurements and inter-ranging measurements so that we can make a fair comparison between MCI and other range-based CP methods. The details about MCI can be found in [34].
* Plane constraint (PC) aided CP: This is the proposed CP method, in which the plane constraints are introduced into the range-based CP scheme.
In this section, all four vehicles are involved in calculating positioning results. Only the results of vehicle \(u_{1}\) are presented to show the performance of the different positioning methods. All the available UWB ranging measurements are used for the range-based CP methods. We focus on both the horizontal and vertical positioning performance in the following analysis.
The cumulative distribution functions (CDF) of the horizontal positioning errors (HPE) and vertical positioning errors (VPE) for vehicle \(u_{1}\) under different positioning methods are depicted in Fig. 9 and Fig. 10. The statistical results are given in Table I, including the horizontal root mean squared error (RMSE), vertical RMSE, circular error probable at 95% (CEP95), the maximum HPE and the maximum VPE. Without using any cooperative information, the non-CP method shows the worst performance among all these positioning methods. The maximum HPE of the non-CP method reaches 15.02 m due to the limited visibility of GNSS satellites and multipath effects in dense urban areas. All the CP methods benefit from the shared data and show improvements in positioning accuracy. Compared to the non-CP method, the GNSS-based CP method sees a slight decline in horizontal RMSE, with the figure decreasing from 1.98 m to 1.78 m. The improvement is not obvious because the inter-node ranges computed by GNSS DD pseudorange are not accurate enough. Although the correlated errors are eliminated through GNSS differencing, the uncorrelated errors (multipath and receiver noise) increase instead, especially in urban areas. The GAMP method and the MCI method show better performance than the GNSS-based CP method since these two methods additionally utilize inter-node ranging measurements. The positioning error of the MCI method is smaller than that of the GAMP method, with the horizontal RMSE declining from 1.61 m to 1.5 m. The reason for this is that the FGO used by the MCI method can take advantage of the historical measurements to further enhance the positioning performance.
The proposed CP method aided by plane constraints shows the highest positioning accuracy among these methods. Compared with the MCI method, the horizontal RMSE of the proposed method is further reduced, reaching 1.35 m for vehicle \(u_{1}\). There is also a significant improvement in vertical positioning accuracy, with the vertical RMSE dropping to 0.95 m. The plane constraints, which can be regarded as new observations for each vehicle, make a great contribution to the performance improvements. It is noted that the vertical positioning errors are remarkably larger than the horizontal ones for the non-CP, GNSS-based CP, GAMP and MCI methods. For the proposed method, however, the positioning accuracy in the vertical direction is significantly higher than that in the horizontal direction. The performance gain is greater in the vertical direction, although improvements can be seen in both the horizontal and vertical directions for the proposed method. Besides, the proposed method also shows a drop in the maximum HPE and VPE, with the figures decreasing to 7.65 m and 9.41 m, which demonstrates the reliability of the proposed method in dense urban areas.
Fig. 10: The CDFs of the vertical positioning errors for different methods.
Fig. 9: The CDFs of the horizontal positioning errors for different methods.
### 5.3 Performance Analysis Under Inter-Node Ranging Interruption
To further demonstrate the superiority of the proposed method in the case that the inter-node ranging is limited, the inter-node ranging interruption is simulated by removing a proportion of inter-ranging measurements artificially. Specifically, we randomly remove 20%, 40%, 60%, 80% and 100% of inter-node ranging measurements between each pair of the nodes, respectively. The MCI method is chosen to make a comparison with the proposed method in this subsection.
The horizontal axis represents the time of week (TOW) of GPS in seconds. Here, we select the positioning results from epoch 289045 to 289445. The inter-ranging interruption occurs from epoch 289045 to 289280, during which all the inter-node ranging measurements are unavailable for the MCI and PC aided CP methods. From epoch 289280 to 289445, all the inter-ranging measurements are available for cooperative positioning. It can be seen that both the proposed method and the MCI method show satisfactory positioning performance during the period when all the inter-node ranges are available. However, the positioning errors of the MCI method may increase considerably when the inter-node ranging measurements are missing. Taking the result at epoch 289265 as an example, the HPE of the MCI method rises to 7.59 m in the case that all the inter-node ranging measurements are removed. The MCI method is equivalent to the non-CP method in this case because there is no spatial correlation between \(u_{1}\) and the other three vehicles. Compared to the MCI method, the proposed method can still benefit from the positioning information of the other three vehicles by introducing the plane constraint factors, with the HPE declining to 1.23 m at epoch 289265. In short, the proposed CP method can still benefit from cooperative position data even if the inter-node ranging is unavailable.
### 5.4 Performance Analysis of Plane Availability Detection and Fault Exclusion
In the previous analysis, the plane availability detection and fault exclusion are conducted for the proposed method by default. To demonstrate the effectiveness of the fault exclusion (FE) procedures, we make a comparison between the proposed method with FE and the proposed method without FE. It is noted that the inter-node ranging is removed from the proposed CP method so that we can focus on the influence of plane constraints. The control groups are introduced as follows:
* Non-CP: The GNSS standalone positioning method based on RTKLIB.
* PC aided CP without inter-ranging (IR): This is also the CP method proposed in this paper. However, the inter-node ranging measurements are removed from the proposed algorithm. The proposed method becomes a range-free CP method in this case.
* PC aided CP without IR (no FE): This is the proposed method without using IR measurements or the fault exclusion procedure.
The CDFs of the horizontal and vertical positioning errors for vehicle \(u_{1}\) in the three cases are given in Fig. 14 and Fig. 15. The statistical results are given in Table II, including the horizontal RMSE, vertical RMSE, CEP95, the maximum HPE and the maximum VPE. It can be seen that there is little performance improvement from introducing plane constraints into the CP system without using the plane detection and fault exclusion. Compared to the proposed method with FE, the proposed method without fault exclusion sees a remarkable increase in the positioning errors, with the horizontal RMSE growing from 1.54 m to 1.93 m. Since GNSS multipath effects are inevitable for vehicles traveling in dense urban areas, some positioning errors are relatively large, leading to a degradation in the reliability of the plane constraints. The planes fitted by the positions with large errors are incapable of constraining the positioning solutions, and may even result in a decrease of positioning accuracy.
To show where the fault exclusion occurs during the experiments, the locations where the fault exclusion is conducted are marked in the test route, as shown in Fig. 16. The blue circles represent the positions where the plane detection is passed without fault exclusion. The red circles denote the positions where the plane detection is passed after fault exclusion. The black circles represent the positions where the plane detection cannot be passed even with the help of fault exclusion.
The plane constraints are unavailable at the locations of the black circles because there are not enough positioning results under the bridge. According to the driving situations depicted in Fig. 5, we find that the fault exclusion is mainly carried out in deep urban canyons where GNSS suffers from severe multipath effects. By using the plane detection and fault exclusion method, the positions with relatively large errors are removed from the plane fitting so as to ensure the reliability of the plane constraints. To sum up, it is necessary to apply the plane availability detection and fault exclusion procedure before using the fitted planes.
## VI Conclusion
In this paper, a cooperative positioning method aided by plane constraints is proposed to mitigate the impact of inter-ranging losses. The position-related data from cooperative vehicles are utilized to construct the road plane where the vehicles are traveling. Then the plane parameters can be used to impose the constraints on positioning results. In this way, the position-related data of cooperative vehicles can also be used to conduct cooperative positioning even if the inter-ranging data are missing. A plane availability detection module is designed to determine whether the fitted planes can be used for cooperative positioning. Besides, a fault detection and exclusion method based on RANSAC algorithm is proposed to remove the potential outliers in the plane fitting.
The experimental results show the superiority of the proposed method over the existing CP methods, especially when the inter-vehicle ranging interruptions occur. Compared to other range-based methods, the proposed method can still benefit from the position-related data of the cooperative vehicles by introducing the plane constraint factors. The effectiveness of the plane detection and fault exclusion is also validated by experimental results.
The future work will focus on more complicated road conditions, such as bridges and hilly lands. Theoretically, the plane constraints can also be used on a slope as long as the curvature of the slope is small. We also plan to design a decentralized CP scheme rather than the centralized scheme in this paper.
|
2302.09536 | Is 30 MHz Enough for C-V2X? | Connected vehicles are no longer a futuristic dream coming out of a science
fiction, but they are swiftly taking a bigger part of one's everyday life. One
of the key technologies actualizing the connected vehicles is
vehicle-to-everything communications (V2X). Nonetheless, the United States
(U.S.) federal government decided to reallocate the spectrum band that used to
be dedicated to V2X uses (namely, the ``5.9 GHz band'') and to leave only 40\%
of the original chunk (i.e., 30 MHz of bandwidth) for V2X. It ignited concern
of whether the 30-MHz spectrum suffices key V2X safety messages and the
respective applications. We lay out an extensive study on the safety message
types and their latency requirements. Then, we present our simulation results
examining whether they can be supported in the 30-MHz spectrum setup. | Dhruba Sunuwar, Seungmo Kim, Zachary Reyes | 2023-02-19T11:07:16Z | http://arxiv.org/abs/2302.09536v1 | # Is 30 MHz Enough for C-V2X?
###### Abstract
Connected vehicles are no longer a futuristic dream coming out of a science fiction, but they are swiftly taking a bigger part of one's everyday life. One of the key technologies actualizing the connected vehicles is vehicle-to-everything communications (V2X). Nonetheless, the United States (U.S.) federal government decided to reallocate the spectrum band that used to be dedicated to V2X uses (namely, the "5.9 GHz band") to leave only 40% of the original chunk (i.e., 30 MHz of bandwidth) for V2X. It ignited concern of whether the 30-MHz spectrum suffices key V2X safety messages and the respective applications. This paper aims at addressing this issue with the particular focus on the New Radio (NR)-V2X Mode 1. We lay out an extensive study on the safety message types and their latency requirements. Then, we present our simulation results examining whether they can be supported in the 30-MHz spectrum setup.
Connected vehicles, C-V2X, NR-V2X, 5.9 GHz
## I Introduction
#### I-1 Background
V2X technology allows vehicles to communicate with other vehicles, infrastructure, and vulnerable road users to enhance safety, thereby preventing traffic crashes, mitigating fatalities, alleviating congestion, and reducing the environmental impact of the transportation system [1]. This capability gives V2X communications the central role in the constitution of intelligent transportation systems (ITS) for connected vehicle environments.
The full 75 MHz of the 5.9 GHz spectrum band (5.850-5.925 GHz) has long been reserved for intelligent transportation services such as V2X technologies. Nonetheless, the U.S. Federal Communications Commission (FCC) voted to allocate the lower 45 MHz (i.e., 5.850-5.895 GHz) for unlicensed operations to support high-throughput broadband applications (e.g., Wireless Fidelity, or Wi-Fi) [3]. Moreover, the reform went further to dedicating the upper 30 MHz (i.e., 5.895-5.925 GHz) for cellular V2X (C-V2X) as the only technology facilitating ITS.
To this line, we deem it prudent to evaluate _what is possible in a limited 30 MHz spectrum_ to ensure that the ITS stakeholders can continue to develop and deploy these traffic safety applications.
#### I-2 Related Work
Obviously, C-V2X has been forming a massive body of literature. Nonetheless, only little attention was paid to the feasibility of C-V2X in the reduced 30 MHz spectrum for safety applications.
The end-to-end per-packet latency, defined as the time spent by a successful packet to travel from its source to final destination, is a classical networking metric. An advanced latency metric--namely, the inter-reception time (IRT)--was proposed, which measures the time length between successful packet deliveries [4]. However, we find the IRT to have limited applicability as it becomes efficient in broadcast-based safety applications only. Considering the variety of our target applications, this paper employs the classical latency as the main metric, as shall be detailed in Section III.
Now, in regard to the characterization of a V2X system, the literature introduced a wide variety of proposals. Several approaches were compounded into large bodies such as theoretical/mathematical approaches [5]-[7], simulation-based [8][9], and channel sounding-based [10]. Even reinforcement learning has been applied as a means to characterize a highly dynamic vehicular network [11]-[14]. The prior art certainly provides profound insights, yet is not directly conclusive whether the reduced 30-MHz bandwidth makes it feasible to operate C-V2X on realistic road and traffic scenarios.
The same limitation can be found in the current literature of V2X safety-critical applications [15]-[18]: the proposals lay out approaches to support such applications but leave it unaddressed what the influence will be after the C-V2X got deprived of 60% of its bandwidth.
#### I-3 Contributions
This paper is a preliminary study as a starting point for discussions within the ITS literature in regard to operating C-V2X technologies in such an environment. As such, rather than final nor conclusive, this work should be regarded an initiative, igniting further tests and assessments on the impact of 30 MHz environment on the application deployment. Provided that, we extend the C-V2X literature on the following fronts:
* Pioneer to clarify the feasibility of safety-critical applications in the reduced 30 MHz spectrum setting
* Develop a quantification framework for C-V2X performance in a comprehensive but easy manner to maneuver
* Identify message types associated with safety-critical applications
## II System Model
### _Spatial Setup_
Fig. 1 illustrates an urban environment setup that was used in this paper's simulations [19]. A two-dimensional space \(\mathbb{R}^{2}\) is supposed, which is defined by the dimensions of 520 m and 240 m for the north-south and the east-west axes, respectively.
Here is the summary of our spatial model shown in Fig. 1. The RSUs are marked as green squares. The range of operation of each RSU is set to 150 m, indicated by a black circle around each RSU. There are two types of physical obstacles: trailer trucks (marked as black rectangles) and buildings (drawn as big gray squares).
We suppose _two junctions_ (rather than just one) as an effort to examine any possible interference between roadside units (RSUs) on the C-V2X performance as each junction is equipped with a RSU.
The connection from a RSU to a vehicle is shown by either a red or blue line: the red indicates a "blocked" connection whereas the blue means a "connected" link. The blockage can be caused by physical obstacles, viz., a building or a large trailer truck that are displayed by a large gray square and a black rectangle, respectively.
The distribution of the vehicles follows a _homogeneous Poisson point process (PPP)_ in \(\mathbb{R}^{2}\). We define a general situation where a safety-critical application disseminates a message of its respective type over a C-V2X network. (See Section III-A for details on the message types.) Unlike vehicles, the locations of the RSUs are fixed at each junction [20].
By \(\lambda\) and \(\theta\), we denote the densities of vehicles and trucks per road segment, respectively. According to the densities \(\lambda\) and \(\theta\), the probability of a signal being blocked varies, which, in turn, influences the end-to-end latency of a message. For instance, a large \(\lambda\) and a large \(\theta\) yield a higher level of competition for the medium and an increased level of physical signal blockage, which elevates the latency accordingly.
It is also noteworthy that each vehicle, upon reaching the end of a road, starts over from the opposite end of the same lane. This setup is to keep the total number of vehicles constant at all times, as a means to maintain the same level of competition for the medium at any given time and thus guarantee the accuracy for further stochastic analyses.
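To make the spatial model concrete, the following is a minimal sketch (in Python) of how the homogeneous PPP placement and the wrap-around rule described above can be realized; the lane count, road length, and vehicle speed are illustrative assumptions, not the exact settings of our simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def spawn_vehicles(lam, lane_length=520.0, n_lanes=6):
    """Place vehicles lane by lane according to a homogeneous PPP with
    mean density `lam` vehicles per lane."""
    lanes = []
    for _ in range(n_lanes):
        n = rng.poisson(lam)                       # Poisson-distributed count per lane
        lanes.append(np.sort(rng.uniform(0.0, lane_length, size=n)))
    return lanes

def advance(lanes, speed=10.0, dt=0.1, lane_length=520.0):
    """Move every vehicle forward; a vehicle reaching the end of a road
    restarts from the opposite end, so the total count stays constant."""
    return [np.sort((x + speed * dt) % lane_length) for x in lanes]

lanes = spawn_vehicles(lam=6)
for _ in range(100):
    lanes = advance(lanes)
print("vehicles per lane:", [len(x) for x in lanes])
```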
### _Communications Parameters_
This paper adopts the 3rd Generation Partnership Project (3GPP) Release 17 NR-V2X for the physical-layer (PHY) [25] and radio resource control (RRC) functions [26]. We assume _Mode 1_, where an NR base station (also known as "Next Generation Node B" or "gNB") schedules the sidelink resources to be used by the user equipment (UE) (i.e., a vehicle) for sidelink transmissions. Nonetheless, our simulation framework is versatile enough to be easily extended to accommodate Mode 2 as well.
To elaborate on the sidelink of NR-V2X, our simulation implements all the key channels [27], including: the Physical Sidelink Broadcast Channel (PSBCH) for sending broadcast information (i.e., synchronization of the sidelink); the PSCCH for sending control information; the PSSCH for sending control and data, as well as Channel State Information (CSI) in the case of unicast; and the Physical Sidelink Feedback Channel (PSFCH) for sending HARQ feedback in the unicast and groupcast modes.
We suppose that all the vehicles distributed in \(\mathbb{R}^{2}\) have the same ranges of carrier sensing and communication. The NR-V2X supports {10, 15, 20, 25, 50, 75, 100} resource blocks (RBs) per subchannel [24]. As shall be elaborated in Section IV, this paper supposes 50 RBs per subchannel, which matches our assumption of 10 MHz per channel.
Our simulation also features a very high level of precision in implementing the spatial environment. Since a city road environment is considered for the simulation as shown in Fig. 1, the Urban Micro (UMi)-Street Canyon path loss model [25] is implemented. However, we reiterate that our simulation can easily accommodate other path loss models defined in the standard [25].
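As a quick sanity check of the 50-RB assumption against the 10-MHz channelization (using the 180 kHz-per-RB figure and the 1.25-MHz guard band quoted in Section IV), a back-of-the-envelope computation is sketched below.

```python
# Subchannel sizing check: 50 RBs of 180 kHz each inside one 10-MHz channel.
RB_BW_HZ = 180e3          # bandwidth of one resource block
RBS_PER_SUBCHANNEL = 50   # assumed in this paper
CHANNEL_BW_HZ = 10e6      # one of the three 10-MHz channels
GUARD_BAND_HZ = 1.25e6    # guard band quoted in Section IV

subchannel_bw = RB_BW_HZ * RBS_PER_SUBCHANNEL   # 9.0 MHz occupied by the subchannel
print(f"subchannel: {subchannel_bw/1e6:.1f} MHz of a {CHANNEL_BW_HZ/1e6:.0f}-MHz channel "
      f"(guard band {GUARD_BAND_HZ/1e6:.2f} MHz)")
```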
## III Proposed NR-V2X Performance Evaluation Framework
### _Message Types_
Table I categorizes several representative message types (listed in the last row) according to their "traffic families" (the third row). We particularly highlight that the ongoing SAE J3161 standardization activity [19] is primarily based on the end-to-end latency, namely, the packet delay budget (PDB). Discussion on the metric selection shall be revisited in Section III-C.
We assign different ProSe per-packet priorities (PPPP) [21] based on the importance of a message type. This proposition enables further optimization of C-V2X by assigning different communication profiles (viz., number of subchannels, modulation and coding scheme (MCS), number of retransmissions, etc.) to packets based on packet size, velocity, and channel busy ratio (CBR).
Here is an elaboration of Table I [22][23]: basic safety (BSM), emergency vehicle alert (EVA), road safety (RSM), map data (MAP), signal phase and timing (SPaT), Radio Technology Commission for Maritime Services corrections (RTCM), signal status (SSM), signal request (SRM), traveler information (TIM), and road weather (RWM), as well as even transport-layer protocols such as transmission control (TCP) and user datagram (UDP). These types of messages support a broad set of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) applications, e.g., forward collision warning, pre-crash
Fig. 1: Geometrical setup of the proposed simulator (with vehicle density of \(\lambda=6\) in the entire system space \(\mathbb{R}^{2}\))
sensing, emergency vehicle warning and signal preemption, and infrastructure-to-vehicle warning messages.
As found from the "V2V" column of Table I, some applications operate based on the same message types, allowing numerous applications to be operated without requiring additional spectrum. However, different applications using the same message types can have vastly different spectrum needs due to differing message sizes and transmission frequencies, so there are scenarios in which some applications using the same message types could or could not be deployed. Additionally, the available spectrum will depend in part on the number of vehicles within communication range and the types of applications operating in a given area. Because of this, it will likely be necessary to establish a scheme that prioritizes safety-critical applications while deprioritizing non-safety-critical applications in such situations.
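For concreteness, the priority and delay-budget mapping of Table I can be captured in a small lookup structure; the sketch below simply mirrors the PPPP/PDB values of Table I, while the data layout itself is ours and not part of any standard.

```python
# Traffic-family profiles mirroring Table I: (minimum PPPP, PDB in msec, example messages).
TRAFFIC_FAMILIES = {
    "Critical V2V":      {"pppp": 2, "pdb_ms": 20,  "examples": ["BSM", "EVA"]},
    "Essential V2V":     {"pppp": 5, "pdb_ms": 100, "examples": ["BSM"]},
    "Critical V2I-I2V":  {"pppp": 3, "pdb_ms": 100, "examples": ["RSM", "MAP"]},
    "Essential V2I-I2V": {"pppp": 5, "pdb_ms": 100, "examples": ["SPaT", "RTCM"]},
    "Transaction":       {"pppp": 6, "pdb_ms": 100, "examples": ["SSM", "SRM"]},
    "Low-priority":      {"pppp": 6, "pdb_ms": 100, "examples": ["TIM", "RWM"]},
    "Background":        {"pppp": 8, "pdb_ms": 100, "examples": ["TCP", "UDP"]},
}

def meets_pdb(family: str, latency_ms: float) -> bool:
    """True if a measured end-to-end latency satisfies the family's delay budget."""
    return latency_ms <= TRAFFIC_FAMILIES[family]["pdb_ms"]

print(meets_pdb("Critical V2V", 18.0), meets_pdb("Critical V2V", 42.0))
```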
### _Simulator Development_
This proposed research features _integration_ of the NR-V2X PHY and RRC simulator with two other major simulators: namely, the geometrical simulator and the driving simulator. Fig. 2 illustrates how this integration is achieved.
First of all, the NR-V2X PHY and RRC simulator implements the communications functions that were mostly explained earlier in Section II-B. As such, it effectuates the sidelink communications among the vehicles and RSUs, following major TRs and TSs [24]-[27] of the 3GPP Release 17 standard. This NR-V2X simulator forms the basis for two other major components of the proposed simulation framework.
On top of the NR-V2X simulator, the spatial environment simulator facilitates an urban environment shown in Section II-A. As Fig. 2 depicts, the existence of this spatial environment component adds the context of _the NR-V2X performance in different road/traffic settings_. We emphasize that this component will be strengthened by adding a wider variety of road environments and traffic scenarios.
Now, we highlight that our simulation framework features a driving simulator that puts a human user into a first-person driving setup. That way, the user can have a _live experience of connected vehicles_: the experience shows the user how car-to-car connections can promote safety in various traffic scenarios and road conditions. As shall be elaborated in Section IV-B, this driving simulator also plays the role of adding realistic contexts to V2X simulations, which clearly highlights the unique contribution of this research.
We combine all of those components so that the user can not only (i) deploy vehicles, RSUs, and obstacles but also (ii) promptly quantify the C-V2X performance of the resulting scenario.
### _Metrics_
What does it take to call the V2X performance "enough"? It is critical to address this question in order to answer what this paper's title asks: _is 30 MHz enough for NR-V2X Mode 1?_ We stress that this paper adopts the _end-to-end latency_ as the primary performance metric for assessing a C-V2X system.
The end-to-end latency is the time between the generation of a message at the transmitter's application and the reception of the message at the receiver's application [28]. As this paper focuses on Mode 1 of the NR-V2X, we implement the latency as the length of time taken from the generation of a message by an application (of those listed in Table I) at an RSU to the reception of the message by a vehicle.
Here is the justification of "why" the latency is chosen as the key performance metric in this paper over other metrics. First and foremost, the 3GPP 5G Service Requirement also identifies the end-to-end latency as one of the most critical performance indicators [29], based on which other requirement factors are defined. Not only that, the ongoing SAE J3161 standardization activity [19] is almost solely based on the latency, i.e., PDB.
However, in addition to the delay, we also reiterate that this paper features the integration with a driving simulator. We measure the _near crash rate_ after a sufficiently large number of driving simulations with human subjects. Notice that a near-crash is defined as any circumstance that requires a rapid, evasive maneuver by the subject vehicle, or any other vehicle, pedestrian, cyclist, or animal, to avoid a crash [30]. A rapid, evasive maneuver is defined as steering, braking, accelerating, or any combination of control inputs that approaches the limits of the vehicle capabilities. This helps add context on how the improved performance of C-V2X can _actually affect the road safety_. More details on the simulation scenario follow in Section IV-B.
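As a minimal illustration of how the latency metric is logged in practice, the sketch below timestamps a message at its generation by the RSU application and at its reception by the vehicle application; the event fields are illustrative and do not reflect the actual logging format of our simulator.

```python
from dataclasses import dataclass

@dataclass
class MessageEvent:
    msg_type: str          # e.g., "BSM", "SPaT"
    t_generated_ms: float  # generation time at the RSU application
    t_received_ms: float   # reception time at the vehicle application

def end_to_end_latency_ms(ev: MessageEvent) -> float:
    """End-to-end latency: reception time minus generation time."""
    return ev.t_received_ms - ev.t_generated_ms

ev = MessageEvent("BSM", t_generated_ms=0.0, t_received_ms=12.4)
print(f"{ev.msg_type}: {end_to_end_latency_ms(ev):.1f} msec")
```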
\begin{table}
\begin{tabular}{c||c|c|c|c|c|c|c} \hline Service Type & \multicolumn{4}{c|}{Safety Services} & \multicolumn{3}{c}{Mobility Services} \\ \hline Traffic Direction & \multicolumn{2}{c|}{V2V} & \multicolumn{2}{c|}{V2I-I2V} & \multicolumn{3}{c}{V2I-I2V} \\ \hline Traffic Families & Critical V2V & Essential V2V & Critical V2I-I2V & Essential V2I-I2V & Transaction & Low-priority & Background \\ \hline Minimum PPPP & 2 & 5 & 3 & 5 & 6 & 6 & 8 \\ \hline Minimum PDB & 20 msec & 100 msec & 100 msec & 100 msec & 100 msec & 100 msec & 100 msec \\ \hline \hline Example Messages & BSM, EVA & BSM & RSM, MAP & SPaT, RTCM & SSM, SRM & TIM, RWM & TCP, UDP \\ \hline \hline \end{tabular}
\end{table} TABLE I: Mapping of message types to traffic priority levels
Fig. 2: Structure of the proposed simulator
## IV Numerical Results
### _NR-V2X RSU-to-Vehicle Latency_
Table II summarizes the key parameters that were used in our NR-V2X simulation. Notice from the table that we assume 50 RBs per subchannel, which occupies 180 kHz/RB \(\times\) 50 RBs/subchannel \(\approx\) 9 MHz and thus takes up most of an entire 10-MHz channel considering 1.25 MHz of a guard band [29]. The vehicle density is another noteworthy parameter: \(\lambda=\{5,10,20\}\) vehicles per lane correspond to \(\{30,60,120\}\) vehicles in \(\mathbb{R}^{2}\), which in turn indicate \(\{48,24,12\}\) m of minimum and \(\{76,38,19\}\) m of maximum inter-vehicle distance. As such, we intend that \(\lambda=\{5,10,20\}\) vehicles per lane represent the {low, medium, high} vehicle density, respectively.
Fig. 3 demonstrates the result of this simulation. Each subfigure presents \(f_{X}(x)\), the probability density function (pdf) of the RSU-to-vehicle latency \(x\) in milliseconds (msec). The pdf is compared to the latency requirements of {20,100} msec (shown as black and red vertical lines, respectively) that have been discussed earlier in Table I. Via this comparison, Fig. 3 displays very clearly _how large a proportion of the vehicles generated on \(\mathbb{R}^{2}\) is able to support which message types and applications_.
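Given per-packet latency samples such as those summarized in Fig. 3, the proportion of deliveries meeting each requirement is simply the empirical distribution evaluated at the {20,100}-msec thresholds; a sketch follows, with synthetic samples standing in for actual simulator output.

```python
import numpy as np

def fraction_within_pdb(latencies_ms, pdb_ms):
    """Fraction of delivered messages whose latency meets the given delay budget."""
    return float(np.mean(np.asarray(latencies_ms) <= pdb_ms))

rng = np.random.default_rng(1)
samples = rng.gamma(shape=4.0, scale=5.0, size=1000)  # synthetic latencies in msec

for pdb in (20.0, 100.0):
    print(f"PDB {pdb:5.0f} msec: {100 * fraction_within_pdb(samples, pdb):5.1f}% of packets in budget")
```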
Horizontal comparison in a single row indicates that a higher vehicle density yields a higher latency and thus a higher chance of exceeding the 20-msec latency requirement. Vertical comparison in a single column implies that a larger number of RSUs (each taking a full 10-MHz channel) gives a lower latency and thus a lower probability of exceeding the latency
\begin{table}
\begin{tabular}{|l|l|} \hline Parameter & Value \\ \hline \hline Inter-broadcast interval & 100 msec [26] \\ Bandwidth per channel & 10 MHz [3] \\ Number of RBs per subchannel & 50 [24] \\ Payload length & 40 bytes [22] \\ Vehicle density & \{5,10,20\} vehicles/lane \\ \hline \end{tabular}
\end{table} TABLE II: Key parameters for simulation
Fig. 3: Distribution of RSU-to-vehicle latency according to number of RSUs (compared column-wise) and traffic density (compared row-wise) compared to latency requirements of {20,100} msec as {black,red} vertical lines
requirement.
We deem it safe to state that even in the 30-MHz setting, the applications requiring a latency of 100 msec can be supported in most scenarios. The only exception in this study was the "1 RSU and 20 vehicles/lane" case; Fig. 3(c) implies that such a high vehicle density may cause some portion of the vehicles to be served at a higher latency than the 100-msec requirement. Hence, in the 30-MHz spectrum setting, while almost all other message types defined in Table I can be supported, the "V2V critical BSM" may not function with only one RSU in a high-traffic-density scenario.
### _Verification via Driving Simulation_
We are developing various driving simulation scenarios for testing the message types identified in Table I. As illustrated in Fig. 2, such _integration_ of a driving simulator and an NR-V2X simulator forms a key merit of this research.
A usual setup for our driving simulator is as follows. A human subject is located in front of a driving simulator that we built. There are four monitors, a wheel, and pedals, emulating the windshield view, the directional maneuvers, and the speed maneuvers, respectively.
Fig. 4 illustrates the proposed driving simulation setup for the verification. The driven vehicle (the "vehicle-of-interest" or "VoI" hereafter) is put into the following scenario, in which a continued exchange of V2V BSMs via NR-V2X can improve safety. The VoI is put onto a 2-lane road, i.e., a single lane per direction. The driver attempts to pass the large trailer truck in front of the VoI, which blocks the driver's sight of the other-direction lane. We design the simulation so that, because of the sight blockage, an attempt to pass the truck causes a _near crash_ with another vehicle approaching from the other lane (the "vehicle-to-crash" or "VtC"). The assumption here (which is very realistic) is that if the VoI and VtC have successfully exchanged BSMs, the near crash can be avoided, since the driver can refrain from passing while the VtC is approaching.
## V Conclusions
This paper proposed a framework for evaluating the performance of a NR-V2X system. The research was particularly motivated from the U.S. federal government's recent decision of leaving only 30 MHz of spectrum for V2X. As such, this research identified key V2X safety messages and the respective applications, and examined whether they can be still supported with the reduced bandwidth. In an urban Mode 1 setting, most of the safety-critical applications appeared to satisfy their latency requirements. Our holistic simulation framework integrating the NR-V2X PHY/RRC, spatial environment, and driving simulators strengthened the validity of the results.
As future work, we plan on further improvement of this simulator suite such that it accommodates a wider variety of NR-V2X functionalities, road conditions, and traffic scenarios, e.g., Mode 2, suburban highway, etc.
## Acknowledgement
We acknowledge the Intelligent Transportation Society of America (ITSA) and the delegates from their member organizations including Qualcomm, Cisco, etc. for valuable feedback via continued discussions on the C-V2X message types and traffic families that were presented in Section III-A.
|
2301.08378 | Strongly incoherent gravity | While most fundamental interactions in nature are known to be mediated by
quantized fields, the possibility has been raised that gravity may behave
differently. Making this concept precise enough to test requires consistent
models. Here we construct an explicit example of a theory where a
non-entangling version of an arbitrary two-body potential $V(r)$ arises from
local measurements and feedback forces. While a variety of such theories exist,
our construction causes particularly strong decoherence compared to more subtle
approaches. Regardless, expectation values of observables obey the usual
classical dynamics, while the interaction generates no entanglement. Applied to
the Newtonian potential, this produces a non-relativistic model of gravity with
fundamental loss of unitarity. The model contains a pair of free parameters, a
substantial range of which is not excluded by observations to date. As an
alternative to testing entanglement properties, we show that the entire
remaining parameter space can be tested by looking for loss of quantum
coherence in small systems like atom interferometers coupled to oscillating
source masses. | Daniel Carney, Jacob M. Taylor | 2023-01-20T01:09:12Z | http://arxiv.org/abs/2301.08378v1 | # Strongly incoherent gravity
###### Abstract
While most fundamental interactions in nature are known to be mediated by quantized fields, the possibility has been raised that gravity may behave differently. Making this concept precise enough to test requires consistent models. Here we construct an explicit example of a theory where a non-entangling version of an arbitrary two-body potential \(V(r)\) arises from local measurements and feedback forces. While a variety of such theories exist, our construction causes particularly strong decoherence compared to more subtle approaches. Regardless, expectation values of observables obey the usual classical dynamics, while the interaction generates no entanglement. Applied to the Newtonian potential, this produces a non-relativistic model of gravity with fundamental loss of unitarity. The model contains a pair of free parameters, a substantial range of which is not excluded by observations to date. As an alternative to testing entanglement properties, we show that the entire remaining parameter space can be tested by looking for loss of quantum coherence in small systems like atom interferometers coupled to oscillating source masses.
Introduction
The past ten years have seen an explosion of proposals [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], building on earlier conceptual suggestions [18; 19; 20], to test whether or not gravitational interactions can entangle objects. When general relativity is quantized in perturbation theory, in direct analogy with quantum electrodynamics or any of the other known field theories in nature [21; 22; 23], non-relativistic gravitational interactions do generate entanglement [1; 24]. It is thus of crucial importance to develop theoretically consistent models of the alternative hypothesis, in which gravity _cannot_ entangle objects, in order to understand what these entanglement experiments may be able to rule out.
Models where non-entangling ("classical") gravity interacts with quantum matter take a variety of forms; a non-exhaustive set of examples includes [1; 2; 3; 4; 5; 6; 7; 8; 9; 25; 26; 27; 28; 29; 30; 31; 32]. Typically, these involve a classical gravitational field \(h_{\mu\nu}\) coupled to matter like \(H_{\rm int}\sim h_{\mu\nu}\left\langle T_{\rm matter}^{\mu\nu}\right\rangle\). To avoid fundamental causality problems [33], this usually requires that the effective quantum dynamics of the matter is non-unitary [1]. This leads to a variety of observable signatures: entanglement is not generated, energy-momentum conservation is violated [34], and quantum coherence can be lost. While entanglement generation offers a clean interpretation, the other effects may be more amenable to near-term experiments, and therefore merit detailed study.
In this paper, we focus on the loss of quantum coherence of matter in these scenarios. To make the issue precise, we develop a model in which the non-relativistic gravitational interaction arises as an error process, building on prior work on a lattice [3; 4]. Random position measurements act on the matter, much as in local spontaneous collapse models [35], and the measurement results are used to generate a noisy gravitational force. Classical gravitational interactions arise in the limit of large unentangled masses, but new effects appear in the quantum regime. In contrast to general measure-and-feedback models of gravity (e.g., [2; 28]), this theory has measurements that are strong and feedback that is weak, and is thus "strongly incoherent". This incoherence means that observing the persistence of specific coherences enables one to rule out the entire parameter space in a direct fashion, such as by using the quantum collapse-and-revival experiment proposed in [36] (see also [37]). In particular, this experiment is substantially easier than a direct test of entanglement generation, and could be performed by studying interactions between a state-of-the-art atom
interferometer and high-quality mechanical system [38; 39; 40].
## II Construction of the model
Our aim is to find a time evolution law on a non-relativistic \(N\)-body quantum system which realizes a "classical" version of some two-body potential \(V(\mathbf{x}_{i}-\mathbf{x}_{j})\) on the particles. By classical version of the potential, we mean something where no entanglement can be generated, but where the average particle motion obeys the usual semiclassical version of the Ehrenfest limit1
Footnote 1: We will have many equations that use both the position operator \(\hat{\mathbf{x}}\) and its eigenvalues \(\mathbf{x}\), so we will put hats on the position operators for clarity. We also note that bold-face variables are 3-vectors. Other operators, for example \(\mathbf{p}\), will remain hatless. We use \(\hbar=c=k_{B}=1\) units, returning to regular SI units when computing explicit bounds.
\[\frac{d\left\langle\mathbf{p}_{i}\right\rangle}{dt}=-i\left\langle[H,\mathbf{ p}_{i}]\right\rangle-\sum_{j\neq i}\left\langle\nabla V(\hat{\mathbf{x}}_{i}- \hat{\mathbf{x}}_{j})\right\rangle. \tag{1}\]
In the limit that the total state is approximately non-entangled and particles are distant, \(\left\langle V(\hat{\mathbf{x}}_{i}-\hat{\mathbf{x}}_{j})\right\rangle\approx V (\left\langle\hat{\mathbf{x}}_{i}\right\rangle-\left\langle\hat{\mathbf{x}}_{ j}\right\rangle)\) can be realized, and we see a semiclassical limit emerge. Here \(H\) means the Hamiltonian other than the potential interaction \(V\) of interest. In the gravitational context \(V=-G_{N}m_{i}m_{j}/|\hat{\mathbf{x}}_{i}-\hat{\mathbf{x}}_{j}|\), this limit means that the model can accurately reproduce previously observed classical gravitational phenomena like the orbit of the Earth around the Sun.
As described above, the model consists of random "errors" which involve both a measurement and an injection of energy. First, we describe the measurement process. Consider a measurement of a particle's position \(\hat{\mathbf{x}}\) with some uncertainty \(\sigma\). Since this is a non-projective measurement, it is described by a positive operator-valued measurement (POVM)
\[\int d^{3}\mathbf{x}P^{\dagger}(\mathbf{x})P(\mathbf{x})=1, \tag{2}\]
where for example
\[P(\mathbf{x})=(2\pi\sigma^{2})^{-3/4}\exp\left\{-(\mathbf{x}-\hat{\mathbf{x}} )^{2}/4\sigma^{2}\right\} \tag{3}\]
represents the outcome that the particle's position is \(\hat{\mathbf{x}}=\mathbf{x}\) within a Gaussian error of \(\sigma\). The probability of outcome \(\mathbf{x}\) is, as usual, \(p(\mathbf{x})=\left\langle P^{\dagger}(\mathbf{x})P(\mathbf{x})\right\rangle\). The precise form of the POVM is not important in what follows.
The classical record of the outcome \(\mathbf{x}\) can then be fed back onto the other particles in the form of a force. Given outcome \(\mathbf{x}_{i}\) for the \(i\)th particle, we can act on another particle \(j\neq i\) with coordinate \(\hat{\mathbf{x}}_{j}\) through the operator
\[U_{ij}(\mathbf{x}_{i}-\hat{\mathbf{x}}_{j})=\exp\left\{-i\eta_{ij}\phi( \mathbf{x}_{i}-\hat{\mathbf{x}}_{j})\right\}. \tag{4}\]
Notice that \(U\) is a unitary operator that acts _only_ on particle \(j\), using the classical record of the other particle's position \(\mathbf{x}_{i}\). The "coupling constants" \(\eta_{ij}\) and so-far arbitrary potential function \(\phi(\mathbf{x}-\mathbf{y})\) will be used to mimic the desired classical potential; we will discuss them in detail shortly.
With these two elements, we can now describe the total time evolution. Define the Lindblad (jump) operators
\[E_{i}(\mathbf{x})=P_{i}(\mathbf{x})\prod_{j\neq i}U_{ij}(\mathbf{x}-\hat{ \mathbf{x}}_{j}). \tag{5}\]
This operator measures particle \(i\) and applies a kick to each other particle with potential determined by the outcome \(\mathbf{x}_{i}=\mathbf{x}\). The total time evolution is then taken to be of Lindblad form, with the \(E_{i}\) used as the Lindblad operators:
\[\dot{\rho}=-i[H,\rho]+\sum_{i}\gamma_{i}\int d^{3}\mathbf{x}\Big{[}E_{i}( \mathbf{x})\rho E_{i}^{\dagger}(\mathbf{x})-\frac{1}{2}\left\{E_{i}^{\dagger} (\mathbf{x})E_{i}(\mathbf{x}),\rho\right\}\Big{]}. \tag{6}\]
Here we have neglected the Hamiltonian terms for simplicity. The parameters \(\gamma_{i}\) are rates which describe how often the measurement-feedback operation occurs. By construction, this structure is explicitly LOCC, and thus implies that the interactions cannot generate any entanglement. In addition, this is local in time, preserves the density matrix norm \(\operatorname{tr}\rho=1\), each Lindblad operator is separable \(E_{i}=\mathcal{O}_{1}\otimes\mathcal{O}_{2}\otimes\cdots\), and the effective Hamiltonian terms \(E_{i}^{\dagger}E_{i}\) operate only on particle \(i\). This latter set of constraints is why we call this 'strongly incoherent', and enables our revival test in the latter half of the paper. Note however that this does not rule out other proposed non-entangling theories, just the category of 'strongly incoherent' ones introduced here.
The time evolution (6) represents a series of random measurements and kicks, leading to an overall motion of the particles; see Fig. 1. Notice that the entire process is local in the sense of Hilbert space, but non-local in the sense of spacetime. Both the measurement and feedback happen instantaneously and at spacelike separation. Since our goal is to reproduce
a non-relativistic potential interaction, this is acceptable. The model in this non-relativistic form has interactions which are explicitly dependent on single-particle locations through the \(P_{i}\) operators, and thus breaks translation invariance. Further, the jump operators \(E_{i}\) do not commute with the particle kinetic energy terms, so time translation invariance is also broken. Thus the model will have anomalous heating and dephasing effects, as discussed in the next section.
Finally, we want to impose our requirement that the averaged dynamics obey semiclassical equations of motion like (1) in the limit that the initial states are nearly classical. For an un-entangled \(N\)-body state \(\rho=\rho_{1}\otimes\rho_{2}\otimes\cdots\), the correlation function factors, and we can insert a complete set of states in (1) to write a given interaction term as
\[\left\langle\nabla V(\hat{\mathbf{x}}_{i}-\hat{\mathbf{x}}_{j})\right\rangle= \int d^{3}\mathbf{x}\left\langle\nabla V(\hat{\mathbf{x}}_{i}-\mathbf{x}) \right\rangle\rho_{j}(\mathbf{x}). \tag{7}\]
Here \(\rho_{j}(\mathbf{x})=\left\langle\mathbf{x}|\rho_{j}|\mathbf{x}\right\rangle\) are the diagonal elements of the \(j\)th particle's position-space density matrix. We want to reproduce this in the same limit in our strongly incoherent interaction model. Using our time evolution (6), and ignoring the Hamiltonian terms for simplicity,
Figure 1: Schematic time evolution of a pair of massive particles under strongly incoherent gravitational interactions. The average trajectories obey the usual classical equations of motion, but there are random deviations caused by both the position measurements \(P(x)\) and noisy gravitational kicks \(U(x)\). The grey bands represent position-space wavefunctions and the black discs represent insertions of the measurement operators.
direct computation using the Heisenberg-picture Lindblad equation gives
\[\begin{split}\frac{d\left\langle\hat{\mathbf{p}}_{i}\right\rangle}{ dt}&=-\sum_{j\neq i}\eta_{ij}\gamma_{j}\int d^{3}\mathbf{x}\left\langle \nabla\phi(\mathbf{x}-\hat{\mathbf{x}}_{i})P_{j}^{\dagger}(\mathbf{x})P_{j}( \mathbf{x})\right\rangle\\ &\approx-\sum_{j\neq i}\eta_{ij}\gamma_{j}\int d^{3}\mathbf{x} \left\langle\nabla\phi(\mathbf{x}-\hat{\mathbf{x}}_{i})\right\rangle p_{j}( \mathbf{x})\end{split} \tag{8}\]
where the first equality is exact, and the second line follows in the limit that the \(N\)-body state is unentangled. Here, we identified the quantity
\[p_{j}(\mathbf{x})=\left\langle P_{j}^{\dagger}(\mathbf{x})P_{j}(\mathbf{x})\right\rangle \tag{9}\]
as a probability distribution for the position of particle \(j\). This is just a noisy representation of the diagonal elements of the density matrix \(p_{j}(\mathbf{x})\approx\rho_{j}(\mathbf{x})\) for particle \(j\), with errors of order \(\sigma\). Comparing to (7), we see that to reproduce a given two-body potential of the form \(V(\mathbf{x}_{i}-\mathbf{x}_{j})=\alpha_{ij}\phi(\mathbf{x}_{i}-\mathbf{x}_{j})\), we need to identify the coupling constants
\[\alpha_{ij}=\eta_{ij}\gamma_{j}. \tag{10}\]
With these identifications, and assuming that our position measurement errors \(\sigma\) are small enough to give a good approximation to the particle density matrix \(p_{j}(\mathbf{x})\approx\rho_{j}(\mathbf{x})\), our noisy equation of motion (8) reduces to the correct Ehrenfest limit (1).
Let us now apply this construction to Newtonian gravity. In this case, the couplings \(\alpha_{ij}=-G_{N}m_{i}m_{j}\) and \(\phi(\mathbf{x}_{i}-\mathbf{x}_{j})=1/|\mathbf{x}_{i}-\mathbf{x}_{j}|\). Thus we need to choose the coupling parameters \(\eta_{ij},\gamma_{i}\) so that their product gives \(\eta_{ij}\gamma_{j}=G_{N}m_{i}m_{j}\) (no sum on the \(j\) index). The factor \(\gamma_{i}\) has units of a rate, so we can take it to be proportional to the appropriate mass. This fixes the \(\eta_{ij}\) to be \(i\)-independent, and we can write
\[\gamma_{i}=v^{2}m_{i},\quad\eta_{ij}=G_{N}m_{j}/v^{2}, \tag{11}\]
where \(v\) is a dimensionless real parameter that can be thought of as a velocity in units of \(c\), the speed of light. This ability to rescale the two couplings by reciprocal powers of \(v\) reflects a physical symmetry of the model. Our demand is only that the average equations of motion look like Newtonian gravity. This can be accomplished by either many rapidly applied weak kicks (large \(v\)), or rare but strong kicks (small \(v\)). Thus in total, our "strongly incoherent gravity" model depends on two free parameters: \(v\) and the position measurement error \(\sigma\)
In Sec. III, we will show how to use a variety of existing experimental data to constrain a large range of these parameters; in Sec. IV, we show how to test the currently unconstrained parameter space.
Before moving to the tests, it is instructive to see in detail how our strongly incoherent gravity model reproduces the predictions of classical gravity, as well as the standard quantum Newton potential, in a situation involving non-trivial quantum systems. Consider placing a large source mass \(M\) in a fixed location, and positioning a small mass \(m\) nearby, with \(m\) prepared in an initial superposition of two well-localized states \((\left|L\right\rangle+\left|R\right\rangle)/\sqrt{2}\) separated by a distance \(\ell\). This small mass could be, for example, a neutron in an interferometer placed above the Earth [41], or a fountain atom interferometer next to a kg-scale mass [42]. We can approximate the position operator of the small mass as
\[\hat{\mathbf{x}}=\mathbf{d}+\ell\mathbf{e}_{z}\sigma_{z}/2, \tag{12}\]
where \(\sigma_{z}=\left|L\right\rangle\left\langle L\right|-\left|R\right\rangle \left\langle R\right|\), \(\mathbf{d}\) is the vector from the center of the two sites to the source mass, and \(\mathbf{e}_{z}\) is a unit vector (see Fig. 2). We will be interested in the behavior of the fringe coherence \(\left\langle\sigma_{-}(t)\right\rangle\), where \(\sigma_{-}=\left|L\right\rangle\left\langle R\right|\) measures the off-diagonal component of the density matrix.
To understand the basic idea, we can first consider a heuristic model of a classical source
Figure 2: Two geometries of interest, with a small mass \(m\) in superposition of two localized states \(\left|L\right\rangle,\left|R\right\rangle\) near a large mass \(M\). (a) The dipole interaction leads to a classical relative phase shift from a stationary heavy mass (Sec. II). (b) Here \(M\) is suspended as a movable pendulum, and the quadrupole interaction between \(M\) and \(m\) can lead to either entanglement or noisy gravitational interactions, depending on the gravity model under consideration (Sec. IV).
mass \(M\) coupled to the atom, following [43]. Consider taking the source mass position as a fixed, classical value, so that \({\bf d}\) is just a number. This generates a simple external Newton potential on the atom. We can expand the Newton potential in multipoles [see Eq. (B1)]. The monopole term acts identically on both branches and gives an overall phase. The first non-trivial effect comes from the dipole term, which generates the evolution
\[\left|L\right\rangle+\left|R\right\rangle\to e^{-i\varphi(t)}\left|L \right\rangle+e^{+i\varphi(t)}\left|R\right\rangle,\ \ \varphi(t)=-\frac{G_{N} Mm\ell t}{2d^{2}}, \tag{13}\]
so the coherence evolves as
\[\left\langle\sigma_{-}(t)\right\rangle=e^{-2i\varphi(t)}\left\langle\sigma_{- }(0)\right\rangle. \tag{14}\]
This explains the observed oscillating interference fringes with contrast proportional to \(G_{N}\) in these experiments [41; 42]. When the Newton potential is instead treated as an entangling two-body operator, and the source \(M\) is taken to be very heavy \(M\gg m\) in a well-localized state, precisely the same coherent evolution occurs. Thus these types of experiments cannot distinguish between this heuristic classical model and the usual quantum Newtonian potential that arises in effective field theory.
We can now contrast this to our strongly incoherent gravity model. Crucially, although this model is designed to reproduce semi-classical gravity, it requires treating the source mass \(M\) as a quantum system. As above, the first non-trivial effect comes from the dipole term in the potential function \(\phi\). The "kick" operator \(U\) acting on the light mass in the error operators (5), to this order, is \(U=\exp(-iG_{N}m\ell\sigma_{z}/2v^{2}d^{2})\). Using this in the Heisenberg picture version of (6), the equation of motion for \(\left\langle\sigma_{-}(t)\right\rangle\) is
\[\left\langle\dot{\sigma}_{-}\right\rangle=\gamma_{M}\left[e^{iG_{N}m\ell/v^{2 }d^{2}}-1\right]\left\langle\sigma_{-}\right\rangle=i\frac{G_{N}Mm\ell}{d^{2}} \left\langle\sigma_{-}\right\rangle+{\cal O}(G_{N}^{2}). \tag{15}\]
Here we used \(\gamma_{M}=v^{2}M\), the completeness of the POVM (2), and ignored the dephasing term \(\sim\gamma_{m}=v^{2}m\) from the error operators that measure the small mass position. The powers of our free parameter \(v^{2}\) have cancelled, and this result shows that the coherence oscillates precisely as in the classical and standard quantum models above, c.f. equation (14). To see how the strongly incoherent gravity model differs, we will need to go to the next order in the interaction, where noise effects become visible.
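To put a number on the common prediction of Eqs. (13)-(15), the snippet below evaluates the fringe phase accumulation rate \(2\dot{\varphi}=G_{N}Mm\ell/\hbar d^{2}\) (SI units restored) for illustrative parameters of our own choosing; these values are not tied to any specific experiment.

```python
hbar = 1.054_571_817e-34   # J*s
G    = 6.674e-11           # m^3 kg^-1 s^-2
amu  = 1.660_539e-27       # kg

# Illustrative parameters (our assumptions, not from a particular setup):
M   = 1.0        # kg, source mass
m   = 85 * amu   # kg, rubidium atom
ell = 1e-6       # m, superposition separation
d   = 1e-2       # m, distance between atom and source

phase_rate = G * M * m * ell / (hbar * d**2)   # rate of the coherence phase 2*phi(t) in Eq. (14)
print(f"fringe phase accumulates at ~{phase_rate:.1e} rad/s")
```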
## III Constraints on the free parameters
We now turn to understanding constraints on the possible values of the free parameters of our model, \(\sigma\) and \(v\). By construction, the model reproduces classical gravity at the level of average values for observables. Therefore, we want to study deviations from the averages in order to look for constraints. In particular, as discussed above, the noisy gravitational interaction will produce anomalous heating and dephasing. These effects are strongly constrained by various precision tests of cold and quantum coherent systems. We show the totality of these bounds in Fig. 3.
### Spontaneous collapse effects
In our strongly incoherent gravity model, each massive object is occasionally measured with the operators \(P(\mathbf{x})\), regardless of whether the mass is interacting with another object. This means that our model has "spontaneous collapse" or localization effects. In particular, the measurement process will both collapse spatial superpositions and generate random momentum kicks, even for a completely isolated particle.
We begin with the collapse of superpositions. Consider a pair of position eigenstates \(\left|\mathbf{x}_{L}\right\rangle,\left|\mathbf{x}_{R}\right\rangle\) separated by a distance \(\Delta x=\left|\mathbf{x}_{L}-\mathbf{x}_{R}\right|\). The density matrix element \(\rho_{LR}=\left\langle\mathbf{x}_{L}|\rho|\mathbf{x}_{R}\right\rangle\) quantifies the coherence of the superposition of these two states. Taking the appropriate matrix element in (6), we see that this coherence evolves as
\[\frac{d\rho_{LR}}{dt}=\gamma\left[e^{-\Delta x^{2}/8\sigma^{2}}-1\right]\rho_ {LR}\approx-\gamma\rho_{LR} \tag{16}\]
where the approximation is good for superpositions larger than the measurement length \(\Delta x\gtrsim\sigma\). Thus a spatial superposition of a particle is decohered at the rate \(\gamma=v^{2}m\). Comparing to the observed \(\sim 60\) s coherence times in atom interferometers [39; 40] then places an upper bound on \(v^{2}\), for all \(\sigma\lesssim 1\) cm, the separation scale in these experiments.
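As a rough numerical illustration of this bound (our own estimate, restoring SI units by reading the collapse rate as \(\gamma=v^{2}mc^{2}/\hbar\) and taking a rubidium atom), demanding \(\gamma\tau\lesssim 1\) for a \(\tau\approx 60\) s coherence time gives an upper limit on \(v\):

```python
import math

hbar = 1.054_571_817e-34   # J*s
c    = 2.997_924_58e8      # m/s
amu  = 1.660_539e-27       # kg

m_atom = 85 * amu   # rubidium, as in the cited interferometers
tau    = 60.0       # s, observed coherence time

# Require the spontaneous collapse rate gamma = v^2 * m * c^2 / hbar to stay below 1/tau.
v_max = math.sqrt(hbar / (m_atom * c**2 * tau))
print(f"v <~ {v_max:.1e}")   # of order 1e-14 for these inputs
```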
The momentum kicks lead to anomalous heating effects. Consider the action of the position measurement \(P(\mathbf{x})\) on a momentum eigenstate \(\left|\mathbf{p}\right\rangle\). With the explicit Gaussian POVM (3), the resulting post-measurement state \(P(\mathbf{x})\left|\mathbf{p}\right\rangle\) is a Gaussian of width \(\Delta p_{\text{kick}}=1/2\sigma\), assuming \(\sigma\) is smaller than the initial wavefunction scale. Thus the action of \(P(\mathbf{x})\) on a generic initial state will increase the momentum uncertainty of the particle by at least
\(\Delta p_{\rm kick}\), or in other words it will generate an RMS energy of order
\[\Delta E_{\rm kick}\approx\frac{1}{m\sigma^{2}}. \tag{17}\]
These kicks happen at a rate \(\gamma=v^{2}m\).
In the limit that the kicks are strong but rare (small \(v\)), strong bounds can be derived by considering some prototypical atomic systems. If the typical kick exceeds the trap depth of a trapped atom, \(\Delta E_{\rm kick}\gtrsim E_{\rm trap}\), then the atom will be ejected roughly once per time \(1/\gamma\). In a free-falling atom interferometer, a kick even of order \(\Delta E_{\rm kick}\gtrsim E_{\rm int}\approx k\), where \(k\) is the Raman momentum, will displace the atom from the interference region and effectively look like a loss. In either case, we can place a very conservative bound by demanding that in an experiment of duration \(\tau\gtrsim 1/\gamma=1/v^{2}m_{a}\), with \(m_{a}\) the atomic mass, on average no atoms are lost, \(\Delta E_{\rm kick}\lesssim E_{\rm trap},E_{\rm int}\). In Fig. 3 we plot these bounds based on state-of-the-art atomic experiments.
In the opposite regime of large \(v\), where the kicks are weak but happening very often, one needs to instead consider continuous anomalous heating effects. This can be quantified by using the Heisenberg picture version of (6) to compute, for a single isolated particle, the RMS change in momentum:
\[\left\langle\frac{d\Delta{\bf p}^{2}}{dt}\right\rangle_{\rm BA}=v^{2}m\int d^{ 3}{\bf x}\left\langle\left(\nabla P({\bf x})\right)^{2}\right\rangle\approx \frac{v^{2}m}{4\sigma^{2}}, \tag{18}\]
where the final estimate comes from using the explicit Gaussian measurement (3). The subscript "BA" indicates that this is backaction noise from the measurement process. This can be viewed as an anomalous heating
\[\left\langle\frac{dE_{\rm rms}}{dt}\right\rangle_{\rm BA}\approx\frac{v^{2}}{ \sigma^{2}}\approx 0.6\ \frac{\mu{\rm K}}{{\rm s}}\times\left(\frac{v}{10^{-15}}\right)^{2}\left(\frac{ 1\ {\rm nm}}{\sigma}\right)^{2}, \tag{19}\]
independent of the particle mass. We can compare this to a large mass composed of \(N\) atoms placed in a dilution refrigerator [35]. These systems have a typical cooling power \(P_{\rm cool}\approx 10\ \mu{\rm W}\)[44], which corresponds to \(\left\langle dE_{\rm rms}/dt\right\rangle_{\rm cool}\approx P_{\rm cool}/N\). This corresponds to an upper bound on \(v/\sigma\), the diagonal region in Fig. 3, where we used a kilogram-scale copper mass as a benchmark. One can also consider bounds from smaller systems like trapped ions or Bose-Einstein condensates, which give similar bounds.
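The benchmark number quoted in Eq. (19) can be checked numerically once units are restored; here we read \(v^{2}/\sigma^{2}\) as \(v^{2}\hbar c^{2}/\sigma^{2}\) in SI units and divide by \(k_{B}\) to express the heating rate in kelvin per second (this restoration is our reading of the \(\hbar=c=k_{B}=1\) convention).

```python
hbar = 1.054_571_817e-34   # J*s
c    = 2.997_924_58e8      # m/s
kB   = 1.380_649e-23       # J/K

v     = 1e-15   # dimensionless benchmark value from Eq. (19)
sigma = 1e-9    # m, benchmark measurement resolution from Eq. (19)

heating_W = v**2 * hbar * c**2 / sigma**2        # J/s once hbar = c = 1 is undone
print(f"dE/dt ~ {heating_W / kB * 1e6:.2f} micro-K per second")   # ~0.7 uK/s, close to the quoted 0.6
```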
### Noise in the gravitational interaction
In the previous section, we quantified the effects arising entirely from the spontaneous position measurements in the model. Now we will consider the emergent gravitational interaction. This is no longer coherent but has a degree of shot noise, because the gravitational force (the \(U_{ij}\) operators) is sourced by a noisy estimate of the source position.
Again using the Heisenberg-picture Lindblad equation, one finds by a direct but somewhat lengthy computation that the variance \(\Delta\mathbf{p}_{i}^{2}\) of the \(i\)th particle's momentum will increase over time, according to
\[\frac{d\Delta\mathbf{p}_{i}^{2}}{dt}=\left(\frac{d\Delta\mathbf{p}_{i}^{2}}{dt }\right)_{\text{BA}}+\left(\frac{d\Delta\mathbf{p}_{i}^{2}}{dt}\right)_{\text {shot}}. \tag{20}\]
The first term is given by (18), and the second term comes from the noisy interaction with the other particles:
\[\left\langle\frac{d\Delta\mathbf{p}_{i}^{2}}{dt}\right\rangle_{\text{shot}}= \frac{G_{N}^{2}m_{i}^{2}}{2v^{2}}\int d^{3}\mathbf{x}\rho_{m,i}(\mathbf{x}) \left\langle\left(\nabla\phi(\mathbf{x}-\mathbf{x}_{i})\right)^{2}\right\rangle. \tag{21}\]
Here, \(\rho_{m,i}(\mathbf{x})=\sum_{j\neq i}m_{j}p_{j}(\mathbf{x})\) is a noisy estimate of the mass density of all the particles
Figure 3: Constraints on the parameters \(v\) and \(\sigma\). Lack of single lost atoms leads to a bound on the backaction kick in cold atom traps (red, top left, [45]) and long-hold Penning traps (green, top, [46]). Bounds on position-dephasing driven heating arise from observed cooling capabilities of dilution refrigerators (purple, [44]). Constraints on the shot noise in the gravitational interaction arise from a lack of anomalous heating in interferometry experiments near the Earth or other large mass (orange, bottom region, [39; 40; 42]). Finally, short-range observations of Newton’s gravitational force law preclude \(\sigma>100\ \mu m\) (red, right region, [47]). The resulting white region appears to be allowed by current experimental constraints.
\(j\neq i\), and \(\phi({\bf x})=1/|{\bf x}|\) as before. Since we will apply this computation only in the semiclassical limit, we have assumed that the \(N\)-body state is unentangled as before, and also that the effective potential on the \(i\)th particle is approximately constant across its spatial wavefunction. See Appendix A for a detailed calculation.
Consider the effects of this gravitational shot noise on a small system near a large source mass, like an atom interferometer some distance \(d\) above the surface of the Earth. As the \((\nabla\phi)^{2}\) term goes as \(1/r^{4}\), it is this nearest distance that matters when \(d\) is much less than the radius of the Earth. One can estimate the integral in (21) to obtain a heating rate for the atom of order
\[\left\langle\frac{d\Delta{\bf p}^{2}}{dt}\right\rangle_{\rm shot}\approx\frac{ G_{N}^{2}m_{a}^{2}\rho_{0}}{v^{2}d} \tag{22}\]
where \(\rho_{0}\approx 1800\) kg/m\({}^{3}\) is the solid density of the Earth and \(m_{a}\) is the atom's mass. A constant heating rate will lead to position uncertainty in the atom of order \(\Delta x^{2}\approx(\tau^{3}/m^{2})(d\Delta p^{2}/dt)\) after a time \(\tau\). This cannot be larger than the de Broglie wavelength \(\lambda_{dB}\) of atoms in the interferometer without destroying phase contrast, leading to a _lower_ bound on the parameter \(v\):
\[\Delta x^{2}\approx\frac{G_{N}^{2}\rho_{0}\tau^{3}}{v^{2}d}\lesssim\lambda_{dB }^{2}. \tag{23}\]
In Fig. 3, we display this bound assuming an atom interferometer using rubidium \(m_{a}=85\) u placed \(d=1\) m above the Earth and held for \(\tau\approx 20\) s [39; 40; 42].
### Modified Newtonian gravity
In addition to the noise in the Newtonian interaction, the average DC potential itself will be modified at short distances \(\Delta x\lesssim\sigma\) in this strongly incoherent gravity model. The nominal particle locations are only accurate to around \(\sigma\), so the effective potential will look like a convolution of the Newtonian force law with a distribution function of the particles' locations. Specifically, consider the effective potential appearing in (7),
\[\left\langle V_{\rm eff}(\hat{\bf x}_{i})\right\rangle\approx-\sum_{i}G_{N}m_{i }m_{j}\int d^{3}{\bf x}\left\langle\frac{1}{|\hat{\bf x}_{i}-{\bf x}|}\right \rangle p_{j}({\bf x}). \tag{24}\]
The probability distribution \(p_{j}({\bf x})\) varies non-trivially on the measurement distance scale \(\sigma\). Thus, if mass \(i\) is measured at a distance around \(\sigma\) or less from the other particles \(j\neq i\), the effective potential will be significantly modified, and in particular the short-distance
singularity in the potential would be "softened". Similar effects in astrophysics are often modelled by potentials of the form \(V_{\rm eff}\sim 1/\sqrt{|r|^{2}+\sigma^{2}}\)[48]. Searches for modifications to the Newton potential currently operate around 100 \(\mu m\) and above and are consistent with an unmodified potential [47]. Demanding that these effects are not present then sets a conservative upper bound on the allowed values of \(\sigma\), as displayed in Fig. 3.
## IV Direct experimental tests
As described in the previous section, there is a large range of model parameters \(v,\sigma\) for which the strongly incoherent gravitational interaction appears to be viable. Testing the remainder of the parameter space could be done by further refining the kinds of experiments discussed there, for example, more precise anomalous heating measurements. A more direct set of tests arise in situations where gravitationally-coupled objects are prepared in non-trivial quantum states. A straightforward approach is to observe entanglement generated between two massive objects via their gravitational interaction [2; 3; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. Observation of this entanglement would entirely rule out our strongly incoherent gravity model, which cannot entangle objects. Current proposals for these experiments, however, are very challenging, and may take decades to complete. A more straightforward alternative would be to look for the _lack_ of the loss of quantum coherence predicted in a small system coupled to a gravitational source due to the inherent noise in the incoherent interaction.
Consider again a heavy source mass \(M\) near a spatially superposed light mass \(m\), as discussed at the end of Sec. II. There, we showed that the lowest-order gravitational effect, namely the dipole-induced coherent phase shift, arises identically in the heuristic classical model, in standard quantized gravity with a coherent entangling Newton potential, and in our strongly incoherent gravity model.
Let us now consider the leading corrections. In the heuristic classical model of [43], the prediction (14) is exact. In both standard quantized gravity and in our strongly incoherent gravity model, the source \(M\) is a quantum object. In the former case, with a coherent entangling Newton potential, (14) is accurate up to extremely small corrections coming from standard decoherence. In our strongly incoherent gravity model, however, there is a substantial correction which leads to loss of coherence in the superposition. We need
to specify a quantum state for the heavy mass; we can approximate the static classical situation by assuming \(M\) is in a highly localized state, for example, \(\psi_{M}(z_{M})=Ne^{-z_{M}^{2}/4\Delta z_{M}^{2}}\) with small uncertainty \(\Delta z_{M}\to 0\). We will focus only on the \(z\) axis in what follows. The first non-trivial corrections to (15) arise from the quadrupole term in the potential of the form \(\phi_{qp}=2z_{m}z_{M}/d^{3}\) [c.f. equation (B1)]. This term leads to an \({\cal O}(G_{N}^{2})\) correction, namely
\[\left\langle\dot{\sigma}_{-}\right\rangle=\left[i\frac{G_{N}Mm\ell}{d^{2}}- \frac{1}{v^{2}M}\left(\frac{G_{N}Mm\ell\sigma}{d^{3}}\right)^{2}+{\cal O}(G_{N }^{3})\right]\left\langle\sigma_{-}\right\rangle, \tag{25}\]
which can be solved easily,
\[\left\langle\sigma_{-}(t)\right\rangle_{\rm incoh}=e^{-2i\varphi(t)}e^{- \Gamma_{\rm incoh}t},\quad\Gamma_{\rm incoh}=\frac{G_{N}^{2}Mm^{2}\ell^{2} \sigma^{2}}{d^{6}v^{2}}. \tag{26}\]
Here the first term leads to the same coherent oscillation as in (14), \(\sigma\) is the measurement length scale of the model, and we assumed \(\sigma\gg\Delta z_{M}\). Notice in particular that this decoherence rate \(\sim 1/v^{2}\gg 1\). We provide a detailed calculation of (25) in Appendix B, but the essential physics is clear from the presence of \(\sigma\) in the exponent: this decoherence comes from the shot noise in the gravitational interaction. Observation of atomic coherence longer than \(1/\Gamma_{\rm incoh}\) would then rule out this model.
Finally, we remark on a much cleaner test, which can definitively probe the entire unconstrained parameter space of strongly incoherent gravity. If the source mass \(M\) has periodic behavior, like a suspended pendulum as depicted in Fig. 2, then one can look for the periodic quantum collapse and revival of the atomic state due to the periodic dynamics of the pendulum. Observationally, this manifests as a visibility \(V(t)\) which oscillates sinusoidally with the frequency \(\omega_{M}\) of the source. This signature was studied in detail in [36], where it was shown that the revival occurs assuming the usual quantum-coherent gravitational interaction, even if the oscillator is in a high temperature state. The same revival also occurs in the simple heuristic classical model discussed above, as emphasized in [43] (see also [32]). However, as one can see from (26), our strongly incoherent gravity model predicts \(\dot{V}<0\) at all times, i.e., a monotonic loss of visibility. In Appendix B, we show that this effect persists with an oscillator and/or with initial entanglement between \(M\) and \(m\). Thus, observation of any increase in the visibility, i.e., any level of atomic revival, would rule out the entire parameter space of this model.
Outlook
All non-entangling models of gravity can be ruled out by a direct observation of gravitational entanglement generation. The results of this paper suggest that a much more immediately available goal is to rule at least some of them out through simpler quantum coherence measurements. In particular, the strongly incoherent gravity model presented in this paper leads inevitably to a complete non-revival of atomic visibility in an experiment like that proposed in [36]. A similar but weaker conclusion can be drawn in other known classical, non-entangling models of gravity, which all appear to feature anomalous heating effects, leading to partial loss of quantum coherence. We leave a general classification of non-entangling models and their incoherence signatures to future work. Our strongly incoherent gravity model should provide a useful benchmark for these experiments, as well as a helpful theoretical construction which can be built out into more sophisticated, and ideally relativistic, models of non-entangling gravity.
## Acknowledgements
We thank Scott Glancy, Alexey Gorshkov, and Igor Pikovski for comments on this paper, and Julen Pedernales, Martin Plenio, and Kirill Streltsov for correspondence regarding related ideas. DC is supported by the US Department of Energy under contract DE-AC02-05CH11231 and Quantum Information Science Enabled Discovery (QuantISED) for High Energy Physics grant KA2401032.
|
2308.04049 | $p$-Laplacian operator with potential in generalized Morrey Spaces | We study some basic properties of generalized Morrey spaces
$\mathcal{M}^{p,\phi}(\R^{d})$. Also, the problem $-\mbox{div}(|\nabla
u|^{p-2}\nabla u)+V|u|^{p-2}u=0$ in $\Omega$, where $\Omega$ is a bounded open
set in $\R^d$, and potential $V$ is assumed to be not equivalent to zero and
lies in $\mathcal{M}^{p,\phi}(\Omega)$, is studied. Finally, we establish the
strong unique continuation for the $p$-Laplace operator in the case
$V\in\mathcal{M}^{p,\phi}(\R^d)$. | René Erlin Castillo, Héctor Camilo Chaparro | 2023-08-08T04:54:02Z | http://arxiv.org/abs/2308.04049v1 | # \(p\)-Laplacian operator with potential in generalized Morrey spaces
###### Abstract.
We study some basic properties of generalized Morrey spaces \(\mathcal{M}^{p,\phi}(\mathds{R}^{d})\). Also, the problem \(-\mathrm{div}(|\nabla u|^{p-2}\nabla u)+V|u|^{p-2}u=0\) in \(\Omega\), where \(\Omega\) is a bounded open set in \(\mathds{R}^{d}\), and potential \(V\) is assumed to be not equivalent to zero and lies in \(\mathcal{M}^{p,\phi}(\Omega)\), is studied. Finally, we establish the strong unique continuation for the \(p\)-Laplace operator in the case \(V\in\mathcal{M}^{p,\phi}(\mathds{R}^{d})\).
Key words and phrases:\(p\)-Laplacian, generalized Morrey spaces, Strong unique continuation _2020 Mathematics Subject Classification:_ 35J05, 46E30, 26D10
## 1. Introduction
The so-called Morrey spaces were introduced in 1938 (see [10]) in relation to regularity problems of solutions to partial differential equations.
Although they are used quite often in problems related to PDEs, awareness of such spaces among mathematicians is not as widespread as in the case of Lebesgue or Sobolev spaces. Detailed information about Morrey spaces may be found in [1, 11].
The Morrey space \(\mathcal{M}^{p,\lambda}(\Omega)\) is defined as
\[\mathcal{M}^{p,\lambda}(\Omega)=\left\{f\in L_{p}(\Omega):\sup_{\begin{subarray}{c}x\in\Omega\\ r>0\end{subarray}}\frac{1}{r^{\lambda}}\int\limits_{B(x,r)}|f(y)|^{p}\,dy<\infty\right\}\]
where \(1\leq p<\infty\), \(\lambda\geq 0\), \(\Omega\subset\mathds{R}^{d}\) is a bounded open set and \(B(x,r)\) denotes the open ball centered at \(x\) with radius \(r\), i.e. \(B(x,r)=\{y\in\Omega:|y-x|<r\}\).
This is a Banach space with respect to the norm
\[\|f\|_{\mathcal{M}^{p,\lambda}(\Omega)}=\sup_{\begin{subarray}{c}x\in\Omega \\ r\in(0,\mathrm{diam}(\Omega))\end{subarray}}\left(\frac{1}{r^{\lambda}}\int \limits_{B(x,r)}|f(y)|^{p}\,dy\right)^{\frac{1}{p}}.\]
We will explore some properties of the spaces \(\mathcal{M}^{p,\lambda}(\Omega)\), aiming to establish some relations with Fefferman's inequality, the Poisson equation, the \(p\)-Laplacian and the unique continuation principle. The latter is fundamental and of independent interest in the theory of partial differential equations. Applications regarding the vanishing of solutions are often associated, for instance, with solvability, stability, geometrical properties of solutions, and so on.
We are concerned with the following problem
\[-\text{div}(|\nabla u|^{p-2}\nabla u)+V|u|^{p-2}u=0\quad\text{in }\Omega \tag{1.1}\]
where the potential \(V\) is assumed to be not equivalent to zero and to lie in \(\mathcal{M}^{p,\phi}(\Omega)\), with \(1<p<\infty\) and \(\phi\) an almost decreasing function which satisfies the doubling condition.
Specifically, we are interested in studying when this problem enjoys the strong unique continuation property, that is, when the only solution that vanishes of infinite order at a point is the zero function.
By \(L_{p,loc}(\Omega)\) we denote the set of functions \(u\) such that
\[\int\limits_{K}|u(x)|^{p}\,dx<\infty\]
for every compact subset \(K\subset\Omega\).
_Definition 1.1_.: We say that a function \(u\in L_{p,loc}(\Omega)\) vanishes of infinite order at a point \(x_{0}\) if for every natural number \(N\) there exists a constant \(C_{N}\) such that

\[\int\limits_{B(x_{0},r)}|u(x)|^{p}\,dx\leq C_{N}r^{N} \tag{1.2}\]

for all sufficiently small positive numbers \(r\).
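For instance (a standard example, recalled here only for illustration), the function \(u(x)=e^{-1/|x|^{2}}\) vanishes of infinite order at \(x_{0}=0\): since \(e^{-s}\leq N!\,s^{-N}\) for every \(s>0\), one has \(|u(x)|^{p}=e^{-p/|x|^{2}}\leq N!\,|x|^{2N}\) for \(p\geq 1\), and hence

\[\int\limits_{B(0,r)}|u(x)|^{p}\,dx\leq N!\int\limits_{B(0,r)}|x|^{2N}\,dx\leq C_{N}\,r^{2N+d}\leq C_{N}\,r^{N}\]

for all small \(r>0\), although \(u\) is not identically zero.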
_Definition 1.2_.: We say that (1.1) has the strong unique continuation property if and only if any solution \(u\) of (1.1) in \(\Omega\) is identically zero in \(\Omega\) provided that \(u\) vanishes of infinite order at a point \(x_{0}\) in \(\Omega\).
There is an extensive literature on the unique continuation property. We refer the reader to the work of Zamboni on unique continuation for nonnegative solutions of quasilinear elliptic equations [12], and to the work of Jerison-Kenig on unique continuation for the Schrödinger operator [9]. A similar study was carried out by Chiarenza and Frasca for the linear elliptic operator in the case \(V\in L^{\frac{n}{2}}\) with \(n>2\) [2].
This paper is organized as follows. In section 2, we study a generalization of Morrey space. Some basic results, such as embeddings,
completeness, etc., are established. In section 3, we shall investigate the relationship between Fefferman's inequality and the Poisson equation. Finally, section 4 is devoted to investigating the strong unique continuation property of the so-called \(p\)-Laplacian. The \(p\)-Laplacian has been much studied during the last sixty-eight years; although its theory is by now rather developed, some challenging problems remain unsolved.
## 2. Generalized Morrey Space
The generalized Morrey space \(\mathcal{M}^{p,\phi}(\mathds{R}^{d})\) is equipped with a parameter \(1\leq p<\infty\) and a function \(\phi:(0,\infty)\longrightarrow(0,\infty)\). We assume that \(\phi\) is in \(\mathscr{G}_{p}\), which is the set of all functions \(\phi:(0,\infty)\longrightarrow(0,\infty)\) such that \(\phi\) is almost decreasing, that is, \(r\leq s\) implies \(\phi(r)\geq C\phi(s)\), and \(t\mapsto t^{\frac{n}{p}}\phi(t)\) is almost increasing, that is, \(r\leq s\) implies \(r^{\frac{n}{p}}\phi(r)\leq Cs^{\frac{n}{p}}\phi(s)\) for some \(C>0\). Note that \(\phi\in\mathscr{G}_{p}\) implies that \(\phi\) satisfies the doubling condition, that is, there exists \(C>0\) such that
\[\frac{1}{C}\leq\frac{\phi(r)}{\phi(s)}\leq C\]
for every \(r\) and \(s\) with \(\frac{1}{2}\leq\frac{r}{s}\leq 2\).
_Definition 2.1_ (Generalized Morrey space).: The generalized Morrey Space \(\mathcal{M}^{p,\phi}(\mathds{R}^{d})=\mathcal{M}^{p,\phi}\) is defined as
\[\mathcal{M}^{p,\phi}(\mathds{R}^{d})=\{f\in L_{p}(\mathds{R}^{d}):\|f\|_{ \mathcal{M}^{p,\phi}}<\infty\}\]
with
\[\|f\|_{\mathcal{M}^{p,\phi}(\mathds{R}^{d})}=\sup_{B(a,r)}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int\limits_{B(a,r)}|f(x)|^{p}\,d\mu\right)^{\frac{1}{p}} \tag{2.1}\]
where \(B(a,r)=\{x\in\mathds{R}^{d}:|x-a|<r\}\).
Observe that if \(1\leq p<\infty\) and \(\phi(r)=r^{\frac{\lambda-n}{p}}\) for \(\lambda>0\), then \(\mathcal{M}^{p,\phi}=\mathcal{M}^{p,\lambda}\), which means that we recover the space defined in the Introduction. Now it is propitious to point out that \(\mathds{R}^{d}\) is equipped with a Borel measure \(\mu\) satisfying the growth condition of order \(n\), with \(0<n\leq d\), that is, there exists a constant \(C>0\) such that
\[\mu(B(a,r))\leq Cr^{n}\]
for every ball \(B(a,r)\) centered at \(a\in\mathds{R}^{d}\) with radius \(r>0\).
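As an illustration (added here), the power function considered above indeed belongs to \(\mathscr{G}_{p}\): for \(\phi(r)=r^{\frac{\lambda-n}{p}}\) with \(0<\lambda\leq n\), the exponent \(\frac{\lambda-n}{p}\leq 0\) makes \(\phi\) decreasing, while
\[t^{\frac{n}{p}}\phi(t)=t^{\frac{\lambda}{p}}\]
is increasing since \(\lambda>0\); hence \(\phi\in\mathscr{G}_{p}\) and, in particular, \(\phi\) satisfies the doubling condition.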
The following lemma indicates that the characteristic function on some balls is contained in generalized Morrey spaces.
**Lemma 2.1**.: _Let \(1\leq p<\infty\) and \(\phi\in\mathscr{G}_{p}\), and let \(B_{0}=B(0,r_{0})\) for some \(r_{0}>0\). Then there exist \(C>1\) and \(B>0\) such that_
\[\frac{B}{\phi(r_{0})}\leq\|\chi_{B_{0}}\|_{\mathcal{M}^{p,\phi}}\leq\frac{C}{ \phi(r_{0})}.\]
Proof.: By definition of \(\|\cdot\|_{\mathcal{M}^{p,\phi}}\), we have
\[\|\chi_{B_{0}}\|_{\mathcal{M}^{p,\phi}}= \sup_{B(a,r)}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int\limits_{ B(a,r)}|\chi_{B_{0}}(x)|^{p}\,d\mu\right)^{\frac{1}{p}}\] \[\geq \frac{1}{\phi(r_{0})}\left(\frac{\mu(B_{0})}{r_{0}^{n}}\right)^{ \frac{1}{p}}\] \[= \frac{B}{\phi(r_{0})}\]
where \(B=\left(\frac{\mu(B_{0})}{r_{0}^{n}}\right)^{\frac{1}{p}}\), proving the first inequality.
For the second inequality, we separate the proof into two cases:
First case: \(r\leq r_{0}\). In this case, we have
\[\phi(r)\geq C\phi(r_{0}).\]
Thus
\[\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int\limits_{B(a,r)}|\chi_{ B_{0}}(x)|^{p}\,d\mu\right)^{\frac{1}{p}}\leq \frac{C}{\phi(r_{0})}\left(\frac{\mu(B(a,r)\cap B_{0})}{r^{n}} \right)^{\frac{1}{p}}\] \[\leq \frac{C}{\phi(r_{0})}.\]
Second case: \(r\geq r_{0}\). Since
\[r_{0}^{\frac{n}{p}}\phi(r_{0})\leq Cr^{\frac{n}{p}}\phi(r),\]
we have
\[\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int\limits_{B(a,r)}|\chi_{ B_{0}}(x)|^{p}\,d\mu\right)^{\frac{1}{p}}\leq Cr^{\frac{n}{p}}r_{0}^{-\frac{n}{p}}\left(\frac{\mu(B(a,r)\cap B_{0})}{r ^{n}}\right)^{\frac{1}{p}}\] \[\leq \frac{Cr^{\frac{n}{p}}r_{0}^{-\frac{n}{p}}}{\phi(r_{0})}\left( \frac{\mu(B_{0})}{r^{n}}\right)^{\frac{1}{p}}\]
\[\leq\frac{C}{\phi(r_{0})}.\]
From these two cases, we can conclude that
\[\|\chi_{B_{0}}\|_{\mathcal{M}^{p,\phi}}\leq\frac{C}{\phi(r_{0})}.\]
Thus we have proved the inequality.
The following lemma shows us some conditions on the functions \(\phi_{1},\phi_{2}\) for which we can compare the spaces \(\mathcal{M}^{p_{1},\phi_{1}}\) and \(\mathcal{M}^{p_{2},\phi_{2}}\). The result is as follows.
**Lemma 2.2**.: _Let \(1\leq p_{1}\leq p_{2}<\infty\), \(\phi_{1},\phi_{2}\in\mathscr{G}_{p}\). Then the following statements are equivalent:_
1. \(\phi_{2}\leq C\phi_{1}\)__
2. \(\mathcal{M}^{p_{2},\phi_{2}}\subset\mathcal{M}^{p_{1},\phi_{1}}\) _with_ \[\|f\|_{\mathcal{M}^{p_{1},\phi_{1}}}\leq C\|f\|_{\mathcal{M}^{p_{2},\phi_{2}}}\] _for every_ \(f\in\mathcal{M}^{p_{2},\phi_{2}}\)_._
Proof.: Assume that (I) holds and let \(f\in\mathcal{M}^{p_{2},\phi_{2}}\); then, by Holder's inequality, we have
\[\frac{1}{\phi_{1}(r)}\left(\frac{1}{r^{n}}\int\limits_{B(a,r)}|f(x)|^{p_{1}}\,d\mu\right)^{\frac{1}{p_{1}}}\leq\frac{C}{\phi_{2}(r)}\left[\left(\frac{1}{r^{n}}\int\limits_{B(a,r)}\left(|f(x)|^{p_{1}}\right)^{\frac{p_{2}}{p_{1}}}d\mu\right)^{\frac{p_{1}}{p_{2}}}\left(\frac{1}{r^{n}}\int\limits_{B(a,r)}d\mu\right)^{1-\frac{p_{1}}{p_{2}}}\right]^{\frac{1}{p_{1}}}\] \[\leq\frac{C}{\phi_{2}(r)}\left[\left(\frac{1}{r^{n}}\int\limits_{B(a,r)}|f(x)|^{p_{2}}\,d\mu\right)^{\frac{p_{1}}{p_{2}}}\left(\frac{\mu(B(a,r))}{r^{n}}\right)^{1-\frac{p_{1}}{p_{2}}}\right]^{\frac{1}{p_{1}}}\] \[\leq\frac{C}{\phi_{2}(r)}\left(\frac{1}{r^{n}}\int\limits_{B(a,r)}|f(x)|^{p_{2}}\,d\mu\right)^{\frac{1}{p_{2}}}\] \[\leq C\|f\|_{\mathcal{M}^{p_{2},\phi_{2}}}.\]
By taking the supremum of the left-hand side over all \(a\) and \(r\) we obtain
\[\|f\|_{\mathcal{M}^{p_{1},\phi_{1}}}\leq C\|f\|_{\mathcal{M}^{p_{2},\phi_{2}}}.\]
Now, assume that (II) holds. Let \(B_{0}=B(0,r_{0})\) be as before; then
\[\|\chi_{B_{0}}\|_{\mathcal{M}^{p_{1},\phi_{1}}}\leq C\|\chi_{B_{0}}\|_{ \mathcal{M}^{p_{2},\phi_{2}}}. \tag{2.2}\]
By Lemma 2.1 we have
\[\frac{1}{\phi_{1}(r_{0})}\leq\|\chi_{B_{0}}\|_{\mathcal{M}^{p_{1},\phi_{1}}} \tag{2.3}\]
and
\[\|\chi_{B_{0}}\|_{\mathcal{M}^{p_{2},\phi_{2}}}\leq\frac{C}{\phi_{2}(r_{0})}. \tag{2.4}\]
By (2.2), (2.3) and (2.4) we arrive at
\[\frac{1}{\phi_{1}(r_{0})}\leq\frac{C}{\phi_{2}(r_{0})},\]
and so
\[\phi_{2}(r_{0})\leq C\phi_{1}(r_{0}).\]
Since \(r_{0}\) is any positive real number, we have
\[\phi_{2}\leq C\phi_{1}.\]
The next result gives us a Holder's type inequality.
**Theorem 2.1**.: _Let \(f\in\mathcal{M}^{p,\phi}(\mathds{R}^{d})\) and \(g\in\mathcal{M}^{q,\phi}(\mathds{R}^{d})\). Then_
\[\|fg\|_{\mathcal{M}^{1,\phi}}\leq\phi(r)\|f\|_{\mathcal{M}^{p,\phi}}\|g\|_{ \mathcal{M}^{q,\phi}}\]
_with \(\frac{1}{p}+\frac{1}{q}=1\)._
Proof.: By Holder's inequality we have
\[\frac{1}{\phi(r)r^{n}}\int\limits_{B}|f(x)g(x)|\,d\mu=\frac{\phi(r)}{[\phi(r)]^{2}}\int\limits_{B}\left|\frac{f(x)}{r^{\frac{n}{p}}}\right|\left|\frac{g(x)}{r^{\frac{n}{q}}}\right|\,d\mu\] \[\leq\phi(r)\,\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int\limits_{B}|f(x)|^{p}\,d\mu\right)^{\frac{1}{p}}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int\limits_{B}|g(x)|^{q}\,d\mu\right)^{\frac{1}{q}}\]
\[\leq\phi(r)\sup_{B}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int_{B}|f(x )|^{p}\,d\mu\right)^{\frac{1}{p}}\sup_{B}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}} \int_{B}|g(x)|^{q}\,d\mu\right)^{\frac{1}{q}}\] \[=\phi(r)\|f\|_{\mathcal{M}^{p,\phi}}\|g\|_{\mathcal{M}^{q,\phi}}.\]
And so
\[\sup_{B}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int_{B}|f(x)g(x)|\,d\mu\right) \leq\phi(r)\|f\|_{\mathcal{M}^{p,\phi}}\|g\|_{\mathcal{M}^{q,\phi}}.\]
That is
\[\|fg\|_{\mathcal{M}^{1,\phi}}\leq\phi(r)\|f\|_{\mathcal{M}^{p,\phi}}\|g\|_{ \mathcal{M}^{q,\phi}},\]
and so the theorem is proved.
In the coming result we derive the Minkowski inequality, which in this case is independent of Theorem 2.1.
**Theorem 2.2**.: _If \(f\) and \(g\) belong to \(\mathcal{M}^{p,\phi}(\mathds{R}^{d})\), then \(f+g\) belongs to \(\mathcal{M}^{p,\phi}(\mathds{R}^{d})\). Moreover,_
\[\|f+g\|_{\mathcal{M}^{p,\phi}}\leq\|f\|_{\mathcal{M}^{p,\phi}}+\|g\|_{ \mathcal{M}^{p,\phi}}.\]
Proof.: Since \(f\) and \(g\) belong to \(\mathcal{M}^{p,\phi}(\mathds{R}^{d})\) by definition \(f\) and \(g\) belong to \(L_{p}(\mathds{R}^{d})\). Hence the Minkowski inequality holds. So
\[\left(\int_{B}|f+g|^{p}\,d\mu\right)^{\frac{1}{p}}\leq\left(\int_{B}|f|^{p}\,d \mu\right)^{\frac{1}{p}}+\left(\int_{B}|g|^{p}\,d\mu\right)^{\frac{1}{p}}.\]
Thus
\[\frac{1}{\phi(r)r^{\frac{n}{p}}}\left(\int_{B}|f+g|^{p}\,d\mu \right)^{\frac{1}{p}}\] \[\leq\frac{1}{\phi(r)r^{\frac{n}{p}}}\left(\int_{B}|f|^{p}\,d\mu \right)^{\frac{1}{p}}+\frac{1}{\phi(r)r^{\frac{n}{p}}}\left(\int_{B}|g|^{p}\, d\mu\right)^{\frac{1}{p}}\] \[=\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int_{B}|f|^{p}\,d\mu \right)^{\frac{1}{p}}+\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int_{B}|g|^{p}\, d\mu\right)^{\frac{1}{p}}\] \[\leq\sup_{B}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int_{B}|f|^{p} \,d\mu\right)^{\frac{1}{p}}+\sup_{B}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}} \int_{B}|g|^{p}\,d\mu\right)^{\frac{1}{p}}\]
\[=\|f\|_{\mathcal{M}^{p,\phi}}+\|g\|_{\mathcal{M}^{p,\phi}}.\]
Finally
\[\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int_{B}|f+g|^{p}\,d\mu\right)^{\frac{1}{p }}\leq\|f\|_{\mathcal{M}^{p,\phi}}+\|g\|_{\mathcal{M}^{p,\phi}}\]
holds for any ball \(B\). Therefore
\[\sup_{B}\frac{1}{\phi(r)}\left(\frac{1}{r^{n}}\int_{B}|f+g|^{p}\,d\mu\right)^{ \frac{1}{p}}\leq\|f\|_{\mathcal{M}^{p,\phi}}+\|g\|_{\mathcal{M}^{p,\phi}}\]
and so
\[\|f+g\|_{\mathcal{M}^{p,\phi}}\leq\|f\|_{\mathcal{M}^{p,\phi}}+\|g\|_{\mathcal{ M}^{p,\phi}}.\]
Now, we are ready to prove the completeness of the space \(\mathcal{M}^{p,\phi}\).
**Theorem 2.3**.: \(\mathcal{M}^{p,\phi}\)_, equipped with the norm (2.1), is a Banach space._
Proof.: Let \(\{f_{n}\}_{n\in\mathds{N}}\) be a Cauchy sequence in \(\mathcal{M}^{p,\phi}(\mathds{R}^{d})\). Since \(\mathcal{M}^{p,\phi}(\mathds{R}^{d})\subset L_{p}(\mathds{R}^{d})\), \(\{f_{n}\}_{n\in\mathds{N}}\) is also a Cauchy sequence in \(L_{p}(\mathds{R}^{d})\); therefore there exists \(f\in L_{p}(\mathds{R}^{d})\) such that \(f_{n}\to f\) in \(L_{p}(\mathds{R}^{d})\).
Thus for \(\epsilon>0\) and \(r>0\) there exists \(n_{r}>0\) such that
\[\|f-f_{n}\|_{L_{p}}<\phi(r)r^{\frac{n}{p}}\epsilon\quad\text{if}\quad n\geq n _{r},\]
from this latter inequality we arrive at
\[\|f-f_{n}\|_{\mathcal{M}^{p,\phi}}<\epsilon\quad\text{if}\quad n\geq n_{r}. \tag{2.5}\]
Applying Theorem 2.2 and (2.5) we get
\[\|f\|_{\mathcal{M}^{p,\phi}}\leq \|f-f_{n_{r}}\|_{\mathcal{M}^{p,\phi}}+\|f_{n_{r}}\|_{\mathcal{M}^{p,\phi}}\] \[< \epsilon+\|f_{n_{r}}\|_{\mathcal{M}^{p,\phi}}\leq C.\]
And so \(f\in\mathcal{M}^{p,\phi}(\mathds{R}^{d})\), which ends the proof of Theorem 2.3.
We end this section by proving a Hedberg-type inequality. Before doing so, let us recall two well-known definitions.
The first one is the maximal operator: given a function \(f\in L_{1,loc}(\mathds{R}^{d})\), the Hardy-Littlewood maximal function is defined for \(x\in\mathds{R}^{d}\) as
\[Mf(x)=\sup_{r>0}\frac{1}{\mu(B(x,r))}\int_{B(x,r)}|f(y)|\,d\mu. \tag{2.6}\]
The second one is the so-called Riesz potential operator which is defined as
\[I_{\alpha}f(x)=\int\limits_{\mathds{R}^{d}}\frac{f(y)}{|x-y|^{n-\alpha}}\,d\mu(y) \tag{2.7}\]
for \(0\leq\alpha<n\leq d\). Now, we are ready to establish our announced result.
**Theorem 2.4**.: _Suppose that for some \(0\leq\lambda<n-1\) we have_
\[\int_{r}^{\infty}\phi(t)\,dt\leq Cr^{\lambda+1-n},\quad r>0.\]
_Then, for any \(f\in\mathcal{M}^{p,\phi}\), we have the following pointwise inequality_
\[|I_{1}f(x)|\leq C|Mf(x)|^{1-\frac{1}{n-\lambda}}\|f\|_{\mathcal{M}^{p,\phi}}^{ \frac{1}{n-\lambda}}\]
_for \(x\in\mathds{R}^{d}\)._
Proof.: Let \(f\in\mathcal{M}^{p,\phi}\) and \(x\in\mathds{R}^{d}\). For every \(r>0\) we have
\[|I_{1}f(x)| \leq\int\limits_{|x-y|<r}\frac{|f(y)|d\mu}{|x-y|^{n-1}}+\int\limits _{|x-y|\geq r}\frac{|f(y)|d\mu}{|x-y|^{n-1}}\] \[= A+B.\]
Observe that for the first integral we obtain
\[A =\int\limits_{|x-y|<r}\frac{|f(y)|d\mu}{|x-y|^{n-1}}\] \[=\sum\limits_{j=-\infty}^{-1}\int\limits_{2^{j}r<|x-y|\leq 2^{j+1}r}\frac{|f(y)|d\mu}{|x-y|^{n-1}}\] \[\leq\sum\limits_{j=-\infty}^{-1}\frac{1}{(2^{j}r)^{n-1}}\int\limits_{|x-y|\leq 2^{j+1}r}|f(y)|\,d\mu\] \[=\sum\limits_{j=-\infty}^{-1}2^{n}(2^{j}r)\frac{1}{(2^{j+1}r)^{n}}\int\limits_{B(x,2^{j+1}r)}|f(y)|\,d\mu\] \[\leq 2^{n}rMf(x)\sum\limits_{j=-\infty}^{-1}2^{j}\] \[\leq CrMf(x). \tag{2.8}\]
Meanwhile, for the second integral, we have the following estimate
\[B =\int\limits_{|x-y|\geq r}\frac{|f(y)|}{|x-y|^{n-1}}\,d\mu\] \[= \sum\limits_{j=0}^{\infty}\int\limits_{2^{j}r<|x-y|\leq 2^{j+1}r}\frac{|f(y)|d\mu}{|x-y|^{n-1}}\] \[\leq \sum\limits_{j=0}^{\infty}\frac{1}{(2^{j}r)^{n-1}}\int\limits_{|x-y|\leq 2^{j+1}r}|f(y)|\,d\mu\] \[= \sum\limits_{j=0}^{\infty}2^{n}(2^{j}r)\frac{1}{(2^{j+1}r)^{n}}\int\limits_{B(x,2^{j+1}r)}|f(y)|\,d\mu\] \[\leq \sum\limits_{j=0}^{\infty}2^{n}(2^{j}r)\left(\frac{1}{(2^{j+1}r)^{n}}\int\limits_{B(x,2^{j+1}r)}|f(y)|^{p}\,d\mu\right)^{\frac{1}{p}}\left(\frac{1}{(2^{j+1}r)^{n}}\mu(B(x,2^{j+1}r))\right)^{\frac{1}{q}}\] \[\leq C\sum\limits_{j=0}^{\infty}2^{n}(2^{j}r)\|f\|_{\mathcal{M}^{p,\phi}}\left(\frac{1}{(2^{j+1}r)^{n}}(2^{j+1}r)^{n}\right)^{\frac{1}{q}}\phi(2^{j+1}r)\] \[= C\left[\sum\limits_{j=0}^{\infty}2^{n}(2^{j}r)\phi(2^{j+1}r)\right]\|f\|_{\mathcal{M}^{p,\phi}}.\]
Since \(\phi\) is almost decreasing, we observe that for \(j=0,1,2,\dots\)
\[(2^{j}r)\phi(2^{j+1}r)\leq C\int_{2^{j}r}^{2^{j+1}r}\phi(t)\,dt.\]
This last inequality and our assumption then lead us to
\[B\leq C\|f\|_{\mathcal{M}^{p,\phi}}\sum\limits_{j=0}^{\infty}\int_{2^{j}r}^{ 2^{j+1}r}\phi(t)\,dt\] \[\leq C\|f\|_{\mathcal{M}^{p,\phi}}\int_{r}^{\infty}\phi(t)\,dt\] \[\leq Cr^{\lambda+1-n}\|f\|_{\mathcal{M}^{p,\phi}}.\]
Now, by choosing
\[r=\left(\frac{Mf(x)}{\|f\|_{\mathcal{M}^{p,\phi}}}\right)^{\frac{1}{\lambda-n }},\]
we obtain
\[|I_{1}f(x)|\leq Cr\left(Mf(x)+r^{\lambda-n}\|f\|_{\mathcal{M}^{p,\phi}}\right)\]
\[\leq C[Mf(x)]^{1-\frac{1}{n-\lambda}}\|f\|_{\mathcal{M}^{p,\phi}}^{\frac{1}{n-\lambda}},\]
which completes the proof.
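For concreteness (an observation added here), the hypothesis of Theorem 2.4 is satisfied, for instance, by the power functions \(\phi(t)=t^{\lambda-n}\) with \(0\leq\lambda<n-1\), since
\[\int_{r}^{\infty}t^{\lambda-n}\,dt=\frac{r^{\lambda+1-n}}{n-\lambda-1}\leq Cr^{\lambda+1-n},\qquad r>0.\]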
## 3. Fefferman's inequality and its relation with the Poisson equation
Let us consider the following problem (Dirichlet problem)
\[\left\{\begin{aligned} -\Delta z=& V& \text{ on }B\\ z=& 0&\text{ on }\partial B.\end{aligned}\right. \tag{3.1}\]
The equation \(-\Delta z=V\) is known as the Poisson equation. It is well known that the solution of the above problem is given by the convolution
\[z(x)=\int\limits_{\mathds{R}^{d}}\Gamma(x-y)V(y)\,dy,\]
where \(\Gamma\) is the fundamental solution of the Laplace equation; if \(V\in C_{c}^{2}(\mathds{R}^{d})\), it is clear that \(z\in C^{2}(\mathds{R}^{d})\) (see [8] for details).
Furthermore, \(z\) may be written as
\[z(x)=\frac{1}{\omega_{n-1}}\int\limits_{\mathds{R}^{d}}\frac{\nabla z(y)\cdot (x-y)}{|x-y|^{n}}\,dy\]
where \(\omega_{n-1}\) represents the \((n-1)\)-dimensional measure of the sphere \(S^{n-1}\). Then
\[\nabla z(x)=\frac{1}{\omega_{n-1}}\int\limits_{\mathds{R}^{d}}\frac{\nabla^{2 }z(y)\cdot(x-y)}{|x-y|^{n}}\,dy.\]
Thus
\[|\nabla z(x)|\leq\frac{1}{\omega_{n-1}}\int\limits_{\mathds{R}^{d}}\frac{|\nabla^{2}z(y)|}{|x-y|^{n-1}}\,dy,\]
where it is understood that \(\nabla^{2}z\) denotes \(\Delta z\).
_Definition 3.1_.: A function \(\omega(x)\geq 1\) is said to be of \(A_{1}\)-class if
\[M\omega(x)\leq C_{1}\omega(x)\]
for almost every \(x\in\mathds{R}^{d}\) and for some constant \(C_{1}>0\).
Our next task is to state and prove the Fefferman's inequality, assuming that \(V\) belongs to
\[A_{1}\cap\mathcal{M}^{p,\phi}(\mathds{R}^{d})\cap C_{c}^{2}(\mathds{R}^{d})\]
with \(1\leq p<q\leq\frac{n-\lambda}{n-\lambda-1}\).
Fefferman's inequality has been shown in different settings. For instance, in [12], Zamboni proved the Fefferman inequality allowing \(V\) to be in a generalized Kato class. See also [4].
On the other hand, Castillo, Ramos and Rojas proved Fefferman's inequality allowing \(V\) to belong to the Kato class with \(p=2\), while Castillo, Rafeiro and Rojas proved it allowing \(V\) to be in \(L_{\frac{n}{p}}(\Omega)\) (see [7]). For definitions and details, see [3, 4, 5, 6, 12].
**Theorem 3.1** (Fefferman's inequality on \(\mathcal{M}^{p,\phi}\)).: _Suppose that for some \(0\leq\lambda<n-1\) we have_
\[\int_{r}^{\infty}\phi(t)\,dt\leq Cr^{\lambda+1-n}\qquad r>0.\]
_Let_
\[1<p\leq q\leq\frac{n-\lambda}{n-\lambda-1}\quad\text{and}\quad V\in A_{1} \cap\mathcal{M}^{p,\phi}\cap C_{c}^{2}(\mathds{R}^{d}).\]
_Then_
\[\int\limits_{\mathds{R}^{d}}|u(x)|^{p}V(x)\,d\mu\leq C(n,p)\|V\|_{\mathcal{M} ^{p,\phi}}^{\frac{p}{n-\lambda}}\int\limits_{\mathds{R}^{d}}|\nabla u(x)|^{p} \,d\mu.\]
Proof.: For any \(u\in C_{c}^{2}(\mathds{R}^{d})\), let us consider a ball such that \(u\in C_{c}^{2}(B)\) and consider the solution \(z\) of the Poisson equation
\[\begin{cases}-\Delta z=V&\text{on }B\\ z=0&\text{on }\partial B.\end{cases}\]
Then, by using Theorem 2.4 and Holder's inequality, we have
\[\int\limits_{\mathds{R}^{d}}|u(x)|^{p}V(x)\,d\mu=-\int\limits_{B}|u(x)|^{p}\Delta z(x)\,d\mu(x)\] \[=\int\limits_{B}\nabla|u(x)|^{p}\cdot\nabla z(x)\,d\mu(x)\] \[=p\int\limits_{B}|u(x)|^{p-2}u(x)\,\nabla u(x)\cdot\nabla z(x)\,d\mu(x)\] \[\leq p\int\limits_{B}|u(x)|^{p-1}|\nabla u(x)||\nabla z(x)|\,d\mu(x)\] \[\leq\frac{p}{\omega_{n-1}}\int\limits_{B}|u(x)|^{p-1}|\nabla u(x)|\int\limits_{B}\frac{|\nabla^{2}z(y)|}{|x-y|^{n-1}}\,d\mu(y)d\mu(x)\]
\[=\frac{p}{\omega_{n-1}}\int_{B}|u(x)|^{p-1}|\nabla u(x)|\int_{B}\frac{|\Delta z(y)|}{|x-y|^{n-1}}\,d\mu(y)d\mu(x)\] \[=\frac{p}{\omega_{n-1}}\int_{B}|u(x)|^{p-1}|\nabla u(x)|\int_{B}\frac{|V(y)|}{|x-y|^{n-1}}\,d\mu(y)d\mu(x)\] \[\leq\frac{Cp}{\omega_{n-1}}\int_{B}|u(x)|^{p-1}|\nabla u(x)||MV(x)|^{1-\frac{1}{n-\lambda}}\|V\|_{\mathcal{M}^{p,\phi}}^{\frac{1}{n-\lambda}}\,d\mu(x)\] \[\leq\frac{Cp}{\omega_{n-1}}\|V\|_{\mathcal{M}^{p,\phi}}^{\frac{1}{n-\lambda}}\left[\int_{B}|u(x)|^{p}[MV(x)]^{q\left(1-\frac{1}{n-\lambda}\right)}\,d\mu(x)\right]^{\frac{1}{q}}\left(\int_{B}|\nabla u(x)|^{p}\,d\mu\right)^{\frac{1}{p}}\] \[\leq\frac{Cp}{\omega_{n-1}}\|V\|_{\mathcal{M}^{p,\phi}}^{\frac{1}{n-\lambda}}\left(\int_{B}|u(x)|^{p}V(x)\,d\mu(x)\right)^{\frac{1}{q}}\left(\int_{B}|\nabla u(x)|^{p}\right)^{\frac{1}{p}}.\]
And so
\[\int_{B}|u(x)|^{p}V(x)\,d\mu\leq\left(\frac{Cp}{\omega_{n-1}}\right)^{p}\|V\|_ {\mathcal{M}^{p,\phi}}^{\frac{p}{n-\lambda}}\int_{B}|\nabla u(x)|^{p}\,d\mu.\]
This completes the proof.
From Fefferman's inequality we easily derive the following corollary, which can be obtained through a standard partition of unity.
**Corollary 3.1**.: _Let \(V\in\mathcal{M}^{p,\phi}(\mathds{R}^{d})\) and let \(\Omega\) be a bounded subset of \(\mathds{R}^{d}\), \(\operatorname{Supp}V\subseteq\Omega\). Then, for any \(\sigma>0\), there exists a positive constant \(K\) depending on \(\sigma\), such that_
\[\int_{\Omega}|u(x)|^{p}V(x)\,d\mu\leq\sigma\int_{\Omega}|\nabla u(x)|^{p}\,d\mu+K(\sigma)\int_{\Omega}|u(x)|^{p}\,d\mu\]
_for all \(u\in C_{0}^{\infty}(\Omega)\)._
Proof.: Let \(\sigma>0\) and let \(r\) be a positive number that will be chosen later. Let \(\{\alpha_{k}^{p}\}\), \(k=1,2,\ldots,N(r)\), be a finite partition of unity of \(\overline{\Omega}\) such that \(\operatorname{Supp}\alpha_{k}\subseteq B(x_{k},r)\) with \(x_{k}\in\overline{\Omega}\). We apply Theorem 3.1 to the functions \(\alpha_{k}u\) and we get
\[\int_{\Omega}|u(x)|^{p}V(x)\,d\mu\] \[=\int_{\Omega}V(x)|u(x)|^{p}\sum_{k=1}^{N(r)}\alpha_{k}^{p}(x)\,d\mu\]
\[=\sum_{k=1}^{N(r)}\int_{\Omega}V(x)|u(x)\alpha_{k}(x)|^{p}\,d\mu\] \[\leq\sum_{k=1}^{N(r)}C\|V\|_{\mathcal{M}^{p,\phi}}^{\frac{p}{n-\lambda}}\left(\int_{\Omega}|\nabla u(x)|^{p}\alpha_{k}^{p}(x)\,d\mu+\int_{\Omega}|\nabla\alpha_{k}(x)|^{p}|u(x)|^{p}\,d\mu\right)\] \[\leq C\|V\|_{\mathcal{M}^{p,\phi}}^{\frac{p}{n-\lambda}}\left(\int_{\Omega}|\nabla u(x)|^{p}\,d\mu+\frac{N(r)}{r^{p}}\int_{\Omega}|u(x)|^{p}\,d\mu\right) \tag{3.2}\]
Finally, to obtain the result, it is sufficient to choose \(r\) such that
\[C\|V\|_{\mathcal{M}^{p,\phi}}^{\frac{p}{n-\lambda}}=\sigma.\]
After that, we note that \(N(r)\sim r^{-n}\) and the corollary follows.
**Lemma 3.1**.: _Let \(B_{r}\) and \(B_{2r}\) be two concentric balls contained in \(\Omega\). Then_
\[\int_{B_{r}}|\nabla u(x)|^{p}\,d\mu\leq\frac{C}{r^{p}}\int_{B_{2r}}|u(x)|^{p} \,d\mu, \tag{3.3}\]
_where the constant \(C\) does not depend on \(r\), and \(u\in C_{0}^{\infty}(B_{r})\)._
Proof.: Take \(\varphi\in C_{0}^{\infty}(\Omega)\) with \(\operatorname{Supp}\varphi\subset B_{2r}\), \(\varphi(x)=1\) for \(x\in B_{r}\), and \(|\nabla\varphi|\leq\frac{C}{r}\). By using \(\varphi^{p}u\) as a test function in (1.1), we have
\[\int_{B_{2r}}-\mathrm{div}(|\nabla u|^{p-2}\nabla u)\,\varphi^{p}u+\int_{B_{2r}}V|u|^{p-2}u\,\varphi^{p}u=0. \tag{3.4}\]
Thus
\[\int_{B_{2r}}|\nabla u|^{p}\varphi^{p}=-p\int_{B_{2r}}|\nabla u|^{p-2}\varphi ^{p-2}\nabla u\cdot\nabla\varphi(\varphi u)-\int_{B_{2r}}V|\varphi u|^{p}. \tag{3.5}\]
With the help of Young's inequality for \(\frac{p-1}{p}+\frac{1}{p}=1\), we can estimate the first integral in the right-hand side of (3.5) by
\[(p-1)\epsilon^{\frac{p}{p-1}}\int_{B_{2r}}|\nabla u|^{p}\varphi^{p}+\epsilon^ {-p}\int_{B_{2r}}|\nabla\varphi|^{p}|u|^{p}. \tag{3.6}\]
Also, by Corollay 3.1 we can estimate the second integral in the right-hand side of (3.5) by
\[\epsilon\int_{B_{2r}}|\nabla(\varphi u)|^{p}+C_{\epsilon}\int_{B_{2r}}|\varphi u |^{p}. \tag{3.7}\]
Using this estimates in (3.5) we arrive at
\[\int\limits_{B_{2r}}|\nabla u|^{p}\varphi^{p}\leq \left((p-1)\epsilon^{\frac{p}{p-1}}+\epsilon\right)\int\limits_{B_{2r}}|\nabla u|^{p}\varphi^{p}\] \[+(\epsilon^{-p}+\epsilon)\int\limits_{B_{2r}}|u|^{p}|\nabla\varphi|^{p}+C_{\epsilon}\int\limits_{B_{2r}}|u|^{p}|\varphi|^{p}.\]
Using the facts that \(|\nabla\varphi|\leq\frac{C}{r}\), \(|\varphi|\leq 1\) and \(\varphi=1\) in \(B_{r}\), we immediately get inequality (3.3).
**Lemma 3.2**.: _Let \(u\in C_{0}^{\infty}(\Omega)\), where \(B_{r}\) is a ball of radius \(r\) in \(\mathds{R}^{d}\), and let \(E=\{x\in B_{r}:u(x)=0\}\). Then there exists a constant \(\beta\) (depending only on \(n\)) such that_
\[\int\limits_{A}|u|\leq\beta\frac{r^{n}}{\mu(E)}\,(\mathrm{m}(A))^{\frac{1}{n}}\int\limits_{B_{2r}}|\nabla u|, \tag{3.8}\]
_for all balls \(B_{r}\), all \(u\) as above and all measurable sets \(A\subset B_{r}\)._
Proof.: Let us begin by choosing \(r\) such that \(\mathrm{m}(A)=\mathrm{m}(B)\), where \(\mathrm{m}\) is the Lebesgue measure. Next, for \(x\in B_{r}\) and \(t\in E\) we obtain
\[-u(x)=u(t)-u(x)=\int_{0}^{|x-y|}\frac{du(x-s\omega)}{d\mu(s)}\,d\mu(s) \tag{3.9}\]
where \(\omega=\frac{y-x}{|y-x|}\). Now let us integrate equation (3.9) with respect to \(t\in E\); once this is done, we have
\[-\mu(E)u(x)=\int\limits_{E}d\mu(t)\int_{0}^{|x-y|}\frac{du}{d\mu(s)}(x+s\omega)\,d\mu(s). \tag{3.10}\]
Now, we need to find a bound for (3.10). By using Fubini's Theorem,
\[|-\mu(E)u(x)|=\left|\int\limits_{E}d\mu(t)\int_{0}^{|x-y|}\frac{du}{d\mu(s)}(x+s\omega)\,d\mu(s)\right|\] \[\leq\int\limits_{E}d\mu(t)\int_{0}^{|x-y|}\left|\frac{du}{d\mu(s)}(x+s\omega)\right|\,d\mu(s)\] \[\leq\int\limits_{B_{r}}d\mu(t)\int_{0}^{|x-y|}\left|\frac{du}{d\mu(s)}(x+s\omega)\right|\,d\mu(s)\]
\[\leq\int\limits_{B_{2r}}du(t)\int_{0}^{|x-y|}\left|\frac{du}{d\mu(s)} (x+s\omega)\right|\,d\mu(s)\] \[=\mu(B_{2r})\int_{0}^{|x-y|}\left|\frac{du}{d\mu(s)}(x+s\omega) \right|\,d\mu(s)\] \[\leq\frac{C(2r)^{n}}{\mathrm{m}(B(0,1))}\mathrm{m}(B(0,1))\int_{ 0}^{|x-y|}\left|\frac{du}{d\mu(s)}(x+s\omega)\right|\,d\mu(s)\] \[=\frac{C(2r)^{n}n\mathrm{m}(B(0,1))}{n\mathrm{m}(B(0,1))}\int_{0 }^{|x-y|}\left|\frac{du}{d\mu(s)}(x+s\omega)\right|\,d\mu(s)\] \[=\frac{C}{\mathrm{m}(\partial B(0,1))}\int\limits_{\partial B(0, 1)}d\omega\int_{0}^{2r}t^{n-1}\,dt\int_{0}^{|x-y|}\left|\frac{du}{d\mu(s)}(x+ s\omega)\right|\,d\mu(s)\] \[=\frac{C}{\mathrm{m}(\partial B(0,1))}\int_{0}^{2r}t^{n-1}\,dt \int\limits_{\partial B(0,1)}d\omega\int_{0}^{|x-y|}\frac{1}{s^{n-1}}\left| \frac{du}{d\mu(s)}(x+s\omega)\right|s^{n}\,d\mu(s)\] \[\leq\frac{C}{\mathrm{m}(\partial B(0,1))}\int_{0}^{2r}t^{n-1}\,dt \int\limits_{\partial B(0,1)}d\omega\int_{0}^{r}\frac{|\nabla u(y)|}{|x-y|^{n- 1}}s^{n-1}\,d\mu(s)\] \[=\frac{C(2r)^{n}}{n\mathrm{m}(\partial B(0,1))}\int\limits_{ \partial B(0,1)}\int_{0}^{r}\frac{|\nabla u(y)|}{|x-y|^{n-1}}s^{n-1}\,d\mu(s)d\omega\] \[=\frac{C(2r)^{n}}{n\mathrm{m}(\partial B(0,1))}\int\limits_{B(x, r)}\frac{|\nabla u(y)|}{|x-y|^{n-1}}s^{n-1}\,d\mu(y).\]
Taking into account that we just write \(y=x+r\omega\), thus \(r\omega=y-x\) and \(r=|x-y|\) since \(|\omega|=1\). Up to now, we have obtained that
\[\mu(E)|u(x)|\leq\frac{C_{1}(2r)^{n}}{n}\int\limits_{B(x,r)}\frac{|\nabla u(y)|}{|x-y|^{n-1}}\,d\mu(y) \tag{3.11}\]
where \(C_{1}=\frac{C}{\mathrm{m}(\partial B(0,1))}\).
Hence, let us integrate both sides of (3.11) with respect to \(x\) on the set \(A\). Indeed
\[\mu(E) \int\limits_{A}|u(x)|\,d\mu(x)\leq\frac{C(2r)^{n}}{n}\int\limits_{A}\int\limits_{B(x,r)}\frac{|\nabla u(y)|}{|x-y|^{n-1}}\,d\mu(y)d\mu(x)\] \[\leq\frac{C(2r)^{n}}{n}\int\limits_{B(x,r)}\frac{d\mu(x)}{|x-y|^{n-1}}\int\limits_{B(x,r)}|\nabla u(y)|d\mu(y)\]
\[=\frac{C(2r)^{n}}{n}n\mathrm{m}(B(0,1))\int_{0}^{r}t^{1-n}t^{n-1}\,dt\int\limits_{B(x,r)}|\nabla u(y)|d\mu(y)\] \[=C(2r)^{n}\mathrm{m}(B(0,1))r\int\limits_{B(x,r)}|\nabla u(y)|\,d\mu(y)\] \[=C(2r)^{n}(\mathrm{m}(B(0,1)))^{1-\frac{1}{n}}(r^{n}\mathrm{m}(B(0,1)))^{\frac{1}{n}}\int\limits_{B(x,r)}|\nabla u(y)|d\mu(y)\] \[=C(2r)^{n}(\mathrm{m}(B(0,1)))^{1-\frac{1}{n}}(r^{n}\mathrm{m}(B))^{\frac{1}{n}}\int\limits_{B(x,r)}|\nabla u(y)|\,d\mu(y).\]
Finally
\[\int\limits_{A}|u(x)|\,d\mu(x)\leq\frac{\beta r^{n}(\mathrm{m}(A))^{\frac{1}{ n}}}{\mu(E)}\int\limits_{B(x,2r)}|\nabla u(y)|\,d\mu(y)\]
where
\[\beta=2^{n}(\mathrm{m}(B(0,1)))^{1-\frac{1}{n}}.\]
## 4. Strong Unique Continuation
In this section, we proceed to establish the strong unique continuation property for solutions of the \(p\)-Laplacian equation (1.1) in the case \(V\in\mathcal{M}^{p,\phi}(\mathds{R}^{d})\).
**Theorem 4.1**.: _Let \(u\in C_{0}^{\infty}(\Omega)\) be a solution of (1.1). If \(u=0\) on a set \(E\) of positive measure, then \(u\) vanishes of infinite order in \(p\)-mean._
Proof.: Let \(E=\{x\in B_{r}:u(x)=0\}\) with \(0<r<1\) and let \(E^{c}\) be the complement of \(E\).
Taking \(r_{0}\) smaller if necessary, we can assume that \(B_{r_{0}}\subset\Omega\). Since \(u=0\) on \(E\) by Lemma 3.2 we have
\[\int\limits_{B_{r}}|u|^{p}= \int\limits_{B_{r}\cap E^{c}}|u|^{p}\leq\frac{\beta r^{n}(\mathrm{m}(E^{c}\cap B_{r}))^{\frac{1}{n}}}{\mu(E\cap B_{r})}\int\limits_{B_{r}}|\nabla|u|^{p}|\] \[\leq C_{n}\int\limits_{B_{r}}|u|^{p-1}|\nabla u|\]
where
\[C_{n}=\frac{p\beta(\mathrm{m}(B(0,1)))^{\frac{1}{n}}}{\mu(E)}.\]
By Holder's inequality
\[\int\limits_{B_{r}}|u|^{p}\leq C_{n}\left(\int\limits_{B_{r}}|\nabla u|^{p} \right)^{\frac{1}{p}}\left(\int\limits_{B_{r}}|u|^{p}\right)^{\frac{p-1}{p}} \tag{4.1}\]
and by using Young's inequality, we get
\[\int\limits_{B_{r}}|u|^{p}\leq C_{n}\left(r^{p-1}\int\limits_{B_{r}}|\nabla u|^{p}+\frac{p-1}{r}\int\limits_{B_{r}}|u|^{p}\right). \tag{4.2}\]
Finally, by Lemma 3.1, we have
\[\int\limits_{B_{r}}|u|^{p}\leq C_{n}\int\limits_{B_{2r}}|u|^{p}. \tag{4.3}\]
Now, let us introduce the following function
\[f(r)=\int\limits_{B_{r}}|u|^{p}. \tag{4.4}\]
Observe that consequently \(r_{0}\) depends on \(n\). Then (4.3) can be written as
\[f(r)\leq M_{n}2^{-n}f(2r)\quad\text{for }r\leq r_{0} \tag{4.5}\]
where \(M_{n}=2^{-n}C_{n}\).
Iterating (4.5), we get
\[f(\rho)\leq M_{n}2^{-kn}f(2^{k}\rho)\quad\text{if }2^{k-1}\rho\leq r_{0}. \tag{4.6}\]
Now given that \(0<r<r_{0}(n)\), choose \(k\in\mathds{N}\) such that
\[2^{-k}r_{0}\leq r\leq 2^{-k+1}r_{0}. \tag{4.7}\]
From (4.6), we obtain
\[f(r)\leq M_{n}2^{-kn}f(2^{k}r)\leq M_{n}2^{-kn}f(2r_{0}). \tag{4.8}\]
Since \(2^{-k}\leq\frac{r}{r_{0}}\), we finally obtain
\[f(r)\leq M_{n}\left(\frac{r}{r_{0}}\right)^{n}f(2r_{0}). \tag{4.9}\]
And thus, we have
\[\int\limits_{B_{r}(x_{0})}|u(y)|^{p}\,d\mu(y)\leq M_{n}\left(\frac{r}{r_{0}} \right)^{n}f(2r_{0}) \tag{4.10}\]
and this shows that (1.2) holds, which means that \(u\) vanishes of infinite order in \(p\)-mean at \(x_{0}\).
**Corollary 4.1**.: _Equation (1.1) has strong unique continuation property._
|
2306.05823 | The Use of Covariate Adjustment in Randomized Controlled Trials: An
Overview | There has been a growing interest in covariate adjustment in the analysis of
randomized controlled trials in past years. For instance, the U.S. Food and
Drug Administration recently issued guidance that emphasizes the importance of
distinguishing between conditional and marginal treatment effects. Although
these effects coincide in linear models, this is not typically the case in
other settings, and this distinction is often overlooked in clinical trial
practice. Considering these developments, this paper provides a review of when
and how to utilize covariate adjustment to enhance precision in randomized
controlled trials. We describe the differences between conditional and marginal
estimands and stress the necessity of aligning statistical analysis methods
with the chosen estimand. Additionally, we highlight the potential misalignment
of current practices in estimating marginal treatment effects. Instead, we
advocate for the utilization of standardization, which can improve efficiency
by leveraging the information contained in baseline covariates while remaining
robust to model misspecification. Finally, we present practical considerations
that have arisen in our respective consultations to further clarify the
advantages and limitations of covariate adjustment. | Kelly Van Lancker, Frank Bretz, Oliver Dukes | 2023-06-09T11:48:56Z | http://arxiv.org/abs/2306.05823v1 | # The Use of Covariate Adjustment in Randomized Controlled Trials: An Overview
###### Abstract
There has been a growing interest in covariate adjustment in the analysis of randomized controlled trials in past years. For instance, the U.S. Food and Drug Administration recently issued guidance that emphasizes the importance of distinguishing between conditional and marginal treatment effects. Although these effects coincide in linear models, this is not typically the case in other settings, and this distinction is often overlooked in clinical trial practice. Considering these developments, this paper provides a review of when and how to utilize covariate adjustment to enhance precision in randomized controlled trials. We describe the differences between conditional and marginal estimators and stress the necessity of aligning statistical analysis methods with the chosen estimand. Additionally, we highlight the potential misalignment of current practices in estimating marginal treatment effects. Instead, we advocate for the utilization of standardization, which can improve efficiency by leveraging the information contained in baseline covariates while remaining robust to model misspecification. Finally, we present practical considerations that have arisen in our respective consultations to further clarify the advantages and limitations of covariate adjustment.
baseline covariates, covariate adjustment, efficiency gain, estimands

Footnote 1: Department of Applied Mathematics, Computer Science and Statistics, Ghent University, Ghent, Belgium

Footnote 2: Novartis Pharma AG, Basel, Switzerland

Footnote 3: Section for Medical Statistics, Centre for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Vienna, Austria
## Introduction
Recent guidance from the U.S. Food and Drug Administration (FDA, 2023) has led to increased interest in covariate adjustment in the analysis of randomized controlled trials for drugs and biologics. Here, covariate adjustment refers to pre-planned adjustment for prognostic baseline variables (i.e., demographic factors, disease characteristics, or other information collected from patients at the time of randomization) when interest lies in estimating an average treatment effect. This has the potential to improve precision and reduce the required sample size in many clinical trials (Tsiatis et al., 2008; Benkeser et al., 2020). While covariate adjustment has been considered already in previous regulatory guidelines (ICH, 1998; EMA, 2015), the recent FDA guidance specifically raises the importance of distinguishing between conditional and marginal treatment effects. The distinction between these treatment effects is often overlooked in clinical trial practice because they happen to coincide in linear models. However, covariate adjustment with nonlinear models (e.g., generalized linear models with nonlinear link functions) is often used in the analysis of clinical trial data when the primary outcome of interest is not measured on a continuous scale or is right censored (e.g., binary, ordinal, count, or time-to-event outcomes) and a conditional treatment effect can differ from the marginal treatment effect in these settings (Gail et al., 1984). Understanding similarities and differences of alternative treatment effects is also in line with the recent ICH E9 addendum on estimands and sensitivity analysis (ICH, 2019), which calls for a precise description of treatment effect (i.e., estimand) reflecting the clinical question posed by a given clinical trial objective.
In view of the changing regulatory landscape, it therefore becomes important for clinical teams to understand the differences between conditional and marginal estimands and the implication on the statistical analysis approaches that ought to be aligned with the chosen estimand. This raises important questions like: Should a conditional or a marginal estimand be used for a given clinical trial? Will the selection of covariates lead to different conditional estimands? How to estimate a conditional estimand (e.g., a conditional odds ratio) when fitting a logistic regression with covariates for a binary outcome? How to estimate a marginal estimand (e.g., a marginal odds ratio) and how is it linked to the estimation of a conditional estimand? How is the common practice of fitting a logistic regression (with covariate adjustment) as primary analysis for hypothesis testing related to these estimand considerations?
In light of these questions, we review in this paper when and how to use covariate adjustment to improve precision in randomized controlled trials. Firstly, we describe conditional and marginal treatment effect estimands on the difference, ratio and odds ratio scale. Importantly, these estimands are model-free and their interpretation does not rely on
correct specification of models. Here, the term "model-free estimand" refers to an estimand that is not derived from a specific statistical model or a parametric assumption. Secondly, we touch on the possible misalignment of current practices of treatment effect estimation and discuss in detail the estimation of marginal effect estimands. Thirdly, we address practical considerations that have arisen in our respective consultations on covariate adjustment.
When discussing the estimation of marginal effect estimands, we provide a standardization estimator, closely related to the one in FDA (2023), that (1) exploits information in baseline covariates to gain efficiency and (2) is robust to model misspecification. Model misspecification refers to the situation where the assumed statistical model used to estimate the treatment of interest does not accurately capture the true underlying data-generating process. It implies that the chosen model does not adequately represent the relationship between the variables under study; e.g., by making incorrect assumptions about the linearity, interaction effects, or distributional properties of the variables. We also discuss the connection of the standardization estimator with other estimators proposed in the literature.
## Treatment Effect Estimands
Let \(Y\) denote an outcome of interest (which may be continuous or discrete) and \(Z\) a binary, randomised intervention such that \(Z=1\) if a patient is assigned to the treatment of interest and \(Z=0\) if assigned to the control group (e.g., placebo or an active control). Further, let \(X\) denote a collection of variables measured at randomisation. These may include baseline outcome measurements, or (baseline) factors that are known or assumed to be prognostic of the outcome. We assume (for the moment) that there is no loss to follow up, i.e., all randomized patients are assumed to remain in the trial until its end. We also assume that the patients in the trial are a random sample from a larger patient population.
The ICH E9 addendum emphasizes the need for a precise description of the treatment effect (known as an estimand) that aligns with the clinical question of interest in a clinical trial. According to ICH (2019), an estimand "_summarises at a population level what the outcomes would be in the same patients under different treatment conditions being compared._" In what follows, we focus on the effects of a treatment as assigned, rather than the treatment as taken. This is often referred to as the _intention-to-treat_ principle. More formally, we adopt in the following the potential outcomes framework widely used in the causal inference literature to define an estimand of interest (Hernan and Robins, 2020; Lipkovich et al., 2020). Let \(Y^{z}\) denote the potential outcome that would have been observed had someone been assigned to treatment \(Z=z\). A given patient has therefore two potential outcomes \(Y^{1}\) and \(Y^{0}\). In practice, one of \(Y^{1}\) or \(Y^{0}\) is unobservable and is called counterfactual. This prohibits learning about treatment effects at an individual patient level, but other causal contrasts can be defined that are of clinical interest and much less ambitious to infer.
_Marginal_ (or population-averaged) causal estimands reflect the effect of treatment in the population defined - pragmatically speaking - by the trial's inclusion / exclusion criteria (Van Lancker et al., 2022). Examples include the average treatment effect on the difference scale
\[E(Y^{1}-Y^{0}), \tag{1}\]
the ratio scale (for positive outcomes)
\[E(Y^{1})/E(Y^{0}), \tag{2}\]
and the odds ratio scale (for binary outcomes)
\[\frac{E(Y^{1})/\{1-E(Y^{1})\}}{E(Y^{0})/\{1-E(Y^{0})\}}, \tag{3}\]
where \(E\) denotes expectation with respect to the distribution of potential outcomes for the population of interest. In a trial with no loss to follow up, each of these effects can be identified by virtue of randomisation. Here, identification refers to the translation of a causal estimand into a quantity involving only the observed data so that we can estimate a treatment effect of interest under transparent - but possibly unverifiable - assumptions. For example, (1) is identified as \(E(Y|Z=1)-E(Y|Z=0)\) under randomization. Intuitively, this is because both treatment arms are balanced due to randomization (at least in large samples) with respect to factors that are prognostic of the outcome.
One may also consider _conditional_ causal estimands where covariates \(X\) play an explicit role in the definition of the treatment effect. For example, there may be interest in whether the effect on a certain scale differs between males (\(X=0\)) and females (\(X=1\)). Then the causal effect in females on the mean difference scale is
\[E(Y^{1}-Y^{0}|X=1)\]
and the effect in males can be defined similarly. Alternatively, the effect in females on the odds scale is
\[\frac{E(Y^{1}|X=1)/\{1-E(Y^{1}|X=1)\}}{E(Y^{0}|X=1)/\{1-E(Y^{0}|X=1)\}}.\]
Note that the estimand definitions above are completely model-free. In practice, treatment effects are often encoded as parameters in, for example, a generalised linear model
\[g\{E(Y|Z,X)\}=\beta_{0}+\beta_{1}Z+\beta_{2}X, \tag{4}\]
where \(g(\cdot)\) is a pre-specified link function. For example, with continuous outcomes, one may choose \(g(\cdot)\) to be the identity link function and fit the model
\[E(Y|Z,X)=\beta_{0}+\beta_{1}Z+\beta_{2}X. \tag{5}\]
This formula encodes the information that there is no interaction between \(Z\) and \(X\) on the linear scale. This is a statistical modelling assumption not implied by randomisation. Because the estimand is encoded as a parameter in a specific statistical model, we describe the resulting estimand as'model-based'. If the assumption of no interactions holds, then the model-based estimand \(\beta_{1}\) carries an interpretation as _both_ a conditional causal effect \(E(Y^{1}-Y^{0}|X=x)\) and a marginal causal effect (1), since they coincide in the case of a linear model without interactions.
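To make this coincidence explicit (a short derivation added for completeness): under randomization and no loss to follow up, \(E(Y^{z})=E\{E(Y|Z=z,X)\}\), so that under model (5)
\[E(Y^{1})-E(Y^{0})=E\{(\beta_{0}+\beta_{1}+\beta_{2}X)-(\beta_{0}+\beta_{2}X)\}=\beta_{1},\]
which coincides with the conditional effect \(E(Y^{1}-Y^{0}|X=x)=\beta_{1}\) for every \(x\).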
One might also fit a model with an interaction,
\[E(Y|Z,X)=\beta_{0}+\beta_{1}Z+\beta_{2}X+\beta_{3}ZX.\]
Here, \(\beta_{1}\) and \(\beta_{3}\) encode the conditional estimand (on the linear scale), where \(\beta_{1}\) is the effect in those with \(X=0\), and \(\beta_{1}+\beta_{3}x\) is the effect in those with \(X=x\). By including this interaction term in the model, \(\beta_{1}\) and \(\beta_{3}\) typically lose their marginal interpretation, unless \(X\) is appropriately centered (Ye et al., 2022). However, we can use these models to obtain marginal treatment effect estimates by averaging across the observed distribution of baseline covariates, as described in Step 3 in the next section.
For a binary outcome \(Y\), the logistic regression model
\[logit\{E(Y|Z,X)\}=\beta_{0}+\beta_{1}Z+\beta_{2}X \tag{6}\]
is commonly used, where \(logit(p)=ln(p/(1-p))\) for \(p\in(0,1)\). If this model reflects the truth, then the effect of treatment does not differ between, say, females and males on the relevant scale (VanderWeele and Knol, 2014). Note that this does not imply that there is no interaction on another scale (e.g., difference or ratio scale). However, unlike in the linear case, \(exp(\beta_{1})\) would _only_ retain an interpretation as a conditional effect,
\[\frac{E(Y^{1}|X=x)/\{1-E(Y^{1}|X=x)\}}{E(Y^{0}|X=x)/\{1-E(Y^{0}|X=x)\}}, \tag{7}\]
which may differ from the marginal causal odds ratio (3). Therefore, standard practice based on a logistic regression adjusted for covariates typically does not target a marginal causal effect. This phenomenon occurs due to the _non-collapsibility_ of the logit link function and is not unique to logistic regression as it also arises, in for example, proportional hazards models (Daniel et al., 2021). Note that when model (6) is misspecified, it may not be clear what standard likelihood-based estimators of \(\beta\) are estimating; in particular, they may not generally target either (3) or (7).
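A small hypothetical example (numbers chosen purely for illustration) makes the non-collapsibility concrete. Suppose \(X\) is binary with \(P(X=1)=0.5\) and that the logistic model \(logit\{E(Y|Z,X)\}=-1+2Z+2X\) holds, so the conditional odds ratio (7) equals \(e^{2}\approx 7.39\) in both strata. Averaging over \(X\) gives
\[E(Y^{1})=0.5\,logit^{-1}(1)+0.5\,logit^{-1}(3)\approx 0.842,\qquad E(Y^{0})=0.5\,logit^{-1}(-1)+0.5\,logit^{-1}(1)=0.5,\]
so the marginal odds ratio (3) is approximately \((0.842/0.158)/(0.5/0.5)\approx 5.3\), which differs from the conditional odds ratio \(e^{2}\approx 7.39\) even though there is no confounding and no effect modification on the log-odds scale.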
To conclude, a marginal estimand is the treatment effect had all patients in the population been assigned to treatment (\(Z=1\)) compared with had all patients been assigned to control (\(Z=0\)). In contrast, a conditional estimand is the treatment effect had a subset of patients with \(X=x\) been assigned to treatment compared with had patients with \(X=x\) been assigned to control. It is not our intention to wade into the debate as to whether to report marginal or conditional effects for categorical or time-to-event outcomes (Harrell and Senn, 2021; Remiro-Azocar, 2022). For an intention-to-treat analysis, marginal estimands can often be inferred by virtue of randomisation alone (assuming no loss to follow up). Further, they arguably generalise more straightforwardly to complex settings with post-randomization events. However, estimands like (1)-(3) average over the distribution of \(X\) in the larger patient population (from which our sample is a random sample), and should be carefully interpreted if the target population where the treatment would be applied differs greatly in terms of \(X\) from the patients in the trial (Dahabreh et al., 2020; Van Lancker et al., 2022). In such cases, model-based conditional effects are sometimes argued to transport better to external populations or generalise to broader populations (Vansteelandt and Keiding, 2011; Remiro-Azocar, 2022), although this requires that the regression model (4) is approximately correct. Estimation of these effects is typically done via fitting regression models using maximum likelihood estimation. Note, however, that the conditional estimand changes if the model changes, e.g. with the inclusion of a prognostic covariate, despite the absence of confounding and large sample size. A marginal estimand remains the same despite the model changing but will typically vary across subgroups of the population. In what follows, we will focus on the estimation of marginal effect estimands as these are usually inferred implicitly in trials if one regresses outcome on treatment alone, or considers the unadjusted difference in means between treatment groups.
## Estimation of Marginal Effect Estimands
Estimation of marginal effect estimands through an _unadjusted_ estimator without covariates is a simple analysis method for the marginal intention-to-treat estimand and leads to valid inference. This raises the question of what can be gained from a more complicated estimation approach that includes covariates. To focus ideas, consider estimating \(\mu_{1}=E(Y^{1})\). Its unadjusted estimator, \(\sum_{i=1}^{n}Z_{i}Y_{i}/\sum_{i=1}^{n}Z_{i}\), ignores the information in baseline covariates \(X\). The information in the available data is therefore not efficiently used. In contrast, certain _adjusted_ estimators familiar from the causal inference literature that use covariate information can lead to a gain in power, without increasing the Type I error rate or introducing bias (Tsiatis et al., 2008; Benkeser et al., 2020). Importantly, they rely on randomization and avoid making any modelling assumptions beyond what is assumed for the unadjusted estimator. In this article, we follow FDA (2023) and describe one specific approach for covariate adjustment, namely standardization (also referred to as G-computation). It is straightforward to implement and easily explained to collaborators. Alternative covariate-adjusted estimators are discussed in the next section. The proposed standardization estimator for estimating \(\mu_{1}=E(Y^{1})\) can be constructed as follows (Tsiatis et al., 2008):
**Step 1: Model fitting**: Fit a generalized linear regression model (e.g., logistic, linear,...) with a canonical link (e.g., logit, identity,...) via maximum likelihood that regresses the outcome \(Y\) on pre-specified baseline covariates \(X\) among the treated patients (\(Z=1\)). This outcome working model should include an intercept term. For example, we could model \(E(Y|Z=1,X)\) by \(h_{1}(X;\boldsymbol{\gamma})=logit^{-1}(\gamma_{0}+\gamma_{1}X)\) for a binary endpoint \(Y\). Note that one can also include transformations of the variables in \(X\) (e.g., higher order terms and interactions).
**Step 2: Predicting**: For each patient, use the fitted outcome working model in Step 1 to compute a prediction of the response under \(Z=1\), using the patient's specific baseline covariates. In our example, \(h_{1}(X;\boldsymbol{\hat{\gamma}})=logit^{-1}(\hat{\gamma}_{0}+\hat{\gamma}_ {1}X)\).
* **Step 3: Averaging** Take the average of these predicted responses to obtain an estimator for the average response under \(Z=1\). That is, \(\hat{\mu}_{1}=n^{-1}\sum_{i=1}^{n}h_{1}(X_{i};\hat{\mathbf{\gamma}})\) in our example.
Intuitively, although we do not have access to the potential outcomes \(Y^{1}\) for untreated patients, we can make (on average) a guess based on their baseline covariates. Using a generalized linear model with a canonical link and including an intercept allows us to also use predictions for the treated participants as the mean of their predictions equals the mean of their observed outcomes.
A covariate-adjusted estimator \(\hat{\mu}_{0}\) for \(\mu_{0}\) can be obtained similarly by \(\hat{\mu}_{0}=n^{-1}\sum_{i=1}^{n}h_{0}(X_{i};\hat{\mathbf{\eta}})\), where the predicted values \(h_{0}(X_{i},\hat{\mathbf{\eta}})\) for each patient are obtained via regression of \(Y\) on \(X\) among the patients in the control arm (\(Z=0\)) using a canonical generalized linear working model \(h_{0}(X,\mathbf{\eta})\) for \(E(Y|Z=0,X)\). In the binary endpoint example above, we could then specify \(h_{0}(X,\mathbf{\eta})=logit^{-1}(\eta_{0}+\eta_{1}X)\), in which case \(\hat{\mathbf{\eta}}=(\hat{\eta}_{0},\hat{\eta}_{1})^{\prime}\). Finally, we construct a covariate-adjusted estimator for (1) as \(\hat{\mu}_{1}-\hat{\mu}_{0}\), for (2) as \(\hat{\mu}_{1}/\hat{\mu}_{0}\) and for (3) as \(\{\hat{\mu}_{1}/(1-\hat{\mu}_{1})\}/\left(\hat{\mu}_{0}/(1-\hat{\mu}_{0}) \right)\).
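As a minimal illustration (not part of the original algorithm description), the following Python sketch implements Steps 1-3 for a binary endpoint; the function name, the simulated data and the choice of the statsmodels package are ours and serve only to show one possible implementation.

```python
import numpy as np
import statsmodels.api as sm


def standardization_estimates(y, z, x):
    """Covariate-adjusted (standardized) estimates of E(Y^1) and E(Y^0)
    for a binary endpoint, following Steps 1-3 above."""
    y, z = np.asarray(y), np.asarray(z)
    X = sm.add_constant(np.asarray(x))  # outcome working models include an intercept

    # Step 1: fit a canonical (logistic) outcome working model separately in each arm
    fit1 = sm.GLM(y[z == 1], X[z == 1], family=sm.families.Binomial()).fit()
    fit0 = sm.GLM(y[z == 0], X[z == 0], family=sm.families.Binomial()).fit()

    # Step 2: predict every patient's response under treatment and under control
    pred1 = fit1.predict(X)  # h_1(X_i; gamma-hat)
    pred0 = fit0.predict(X)  # h_0(X_i; eta-hat)

    # Step 3: average the predictions over all randomized patients
    return pred1.mean(), pred0.mean(), pred1, pred0


# Illustration on simulated data (hypothetical data-generating mechanism)
rng = np.random.default_rng(2023)
n = 1000
x = rng.normal(size=(n, 2))                  # two baseline covariates
z = rng.binomial(1, 0.5, size=n)             # 1:1 randomization
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + z + x[:, 0]))))

mu1, mu0, pred1, pred0 = standardization_estimates(y, z, x)
risk_difference = mu1 - mu0                               # estimator of (1)
risk_ratio = mu1 / mu0                                    # estimator of (2)
odds_ratio = (mu1 / (1 - mu1)) / (mu0 / (1 - mu0))        # estimator of (3)
```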
To calculate the variance of the estimated treatment effect, it is crucial to take into account that the predictions are derived from specific outcome regression models. Consequently, it is inappropriate to simply compute the sample variance of the difference in predicted (counterfactual) outcomes. In particular, the variance of the estimators can be obtained via a sandwich estimator using the Delta method (Stefanski and Boos, 2002; Tsiatis et al., 2008; Ye et al., 2023); for \(\hat{\mu}_{1}-\hat{\mu}_{0}\), it can be calculated as \(1/n\) times the sample variances of the values
\[\frac{Z_{i}}{\hat{\pi}}(Y_{i}-h_{1}(X_{i};\hat{\mathbf{\gamma}}))+h_{ 1}(X_{i};\hat{\mathbf{\gamma}})\] \[-\left[\frac{1-Z_{i}}{1-\hat{\pi}}(Y_{i}-h_{0}(X_{i};\hat{\mathbf{ \eta}}))+h_{0}(X_{i};\hat{\mathbf{\eta}})\right],\]
with \(\hat{\pi}=(\sum_{i=1}^{n}Z_{i})/n\) the empirical randomization probability. Alternatively, the variance can be estimated via the non-parametric bootstrap (Efron and Tibshirani, 1994). Although the nonparametric bootstrap is only justified under simple randomization, Shao et al. (2010) provide a modification for covariate-adaptive randomization.
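A sketch of the corresponding variance calculation (again ours, following the display above and reusing the predictions from the previous sketch) could look as follows.

```python
import numpy as np


def adjusted_difference_se(y, z, pred1, pred0):
    """Standard error of mu1_hat - mu0_hat: 1/n times the sample variance
    of the per-patient values displayed above (Delta-method / sandwich form)."""
    y, z = np.asarray(y), np.asarray(z)
    n = len(y)
    pi_hat = z.mean()  # empirical randomization probability
    values = (z / pi_hat * (y - pred1) + pred1
              - ((1 - z) / (1 - pi_hat) * (y - pred0) + pred0))
    return np.sqrt(values.var(ddof=1) / n)


# A Wald-type 95% confidence interval for the marginal risk difference would be
# (mu1 - mu0) +/- 1.96 * adjusted_difference_se(y, z, pred1, pred0).
```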
Due to randomization, the proposed standardization estimator has the appealing feature that misspecification of the outcome working models \(h_{1}(X,\mathbf{\gamma})\) and \(h_{0}(X,\mathbf{\eta})\) does not introduce bias in large samples. We therefore do not need to assume that the link function and the functional form of the covariates (i.e., the algebraic form of the relationship between outcome and covariates) are correctly specified. In addition, the estimator has the smallest large-sample variance in the class of estimators that are unbiased under randomization alone (including the unadjusted estimator) when the outcome working models are correctly specified. We refer to such estimators as being efficient in large samples.
The description above is just one specific implementation of the standardization estimator for continuous and binary endpoints. For example, FDA (2023) suggested standardization with one outcome working model that regresses the outcome on treatment assignments and pre-specified baseline covariates. In the description above, we followed Tsiatis et al. (2008) and fitted two separate outcome working models as this supports an objective incorporation of baseline covariates by separating the modeling of these relationships from the evaluation of the treatment effect. In addition, such an approach automatically allows for interactions between treatment and covariates, which might lead to additional efficiency benefits.
Similar standardization estimators have also been introduced in other settings. For example, developments have been made for the Mann-Whitney estimand (Vermeulen et al., 2015), the log-odds ratio for ordinal outcomes (Diaz et al., 2016), restricted mean survival time (Chen and Tsiatis, 2001; Diaz et al., 2019), the survival probability difference (Wahed and Tsiatis, 2004; Lu and Tsiatis, 2008), and the relative risk of time-to-event outcomes; see also Moore and van der Laan (2009), Benkeser et al. (2018) and Benkeser et al. (2019). Recently, Benkeser et al. (2020) found precision gains for this type of covariate-adjusted estimators in simulated trials for COVID-19 treatments that led to 4-18% reductions in the required sample size to achieve a desired power. Similarly, a simulation study based on a stroke trial by Van Lancker et al. (2022) has shown reductions in sample size of more than 20% due to covariate adjustment.
## Practical Considerations
Although covariate adjustment offers several benefits as described above, it is not yet routinely used in the analysis of clinical trials. In this section we address practical considerations that have arisen in our respective consultations to further clarify the advantages and limitations of covariate adjustment.
### How to decide which variables to include?
This touches on the broader question whether known prognostic covariates (and their functional form) should be pre-specified at the trial design stage or whether they could be selected adaptively based on observed data. Regulatory guidelines usually recommend a pre-specification of covariates and the mathematical form of the covariate-adjusted estimator (FDA and EMA, 1998; EMA, 2015; FDA, 2023). Ideally, the selection is guided by historical trial data in the same disease area and in a similar patient population. As a complete pre-specification of the most prognostic variables along with their functional form is a difficult - if not impossible - task in many trials, data-adaptive variable and model selection might give one an opportunity to explore covariate-outcome relationships. In particular, the flexibility allowed in the use of modeling methods and expertise makes it possible to make best use of the covariate information to obtain the most efficient estimator. This leads to a tension because for an estimator to be efficient in large samples, the outcome working model must typically be correctly specified, yet choosing this model based on the data is often thought of as complicating or invalidating statistical inference (Pocock et al., 2002; Tsiatis et al., 2008).
Recent research, however, allows one to use more flexible data-adaptive methods whilst still obtaining valid tests and confidence intervals. This underpins developments in targeted learning (Van der Laan et al., 2004). A related question is whether post-baseline covariates, i.e., covariates measured after randomization, should be included; in general, they should not be adjusted for in the primary analysis. By conditioning on them, one can induce a spurious association between treatment and unmeasured causes of the outcome; in turn, this can lead to biased estimation of the marginal treatment effect. Furthermore, in the primary analysis of a trial that is not subject to missing data, post-baseline covariates cannot generally be leveraged to improve efficiency. Large sample theory shows that the most efficient estimator of the marginal treatment effect discards information on post-baseline covariate data; see e.g. Lemma 4 in Cheng et al. (2021).
Post-baseline covariates are nevertheless useful in settings where outcomes are subject to missingness or censoring, both in terms of efficiency and for making the missing data assumptions more plausible (see the Question 'How to deal with missing data?' below). To avoid selection bias, however, they must be adjusted for via a careful sequential standardization procedure (Bang and Robins, 2005). Post-baseline covariates can also be beneficial in an interim analysis because estimating a treatment effect before the trial end might be viewed as a missing data problem as not all recruited patients will have their primary endpoint observed. Qian et al. (2016) and Van Lancker et al. (2020) proposed an interim estimator similar to the standardization approach described in the previous section which exploits the information in baseline and post-baseline covariates, without relying on modelling assumptions. Recently, Tsiatis and Davidian (2022) extended this work to treatment effect estimators that account for time-lagged outcomes. These approaches lead to stronger evidence for early stopping than standard, unadjusted approaches, without sacrificing validity or power of the procedure. This is the case even when the adopted outcome working models are misspecified.
### Can covariate adjustment be harmful?
The covariate-adjusted standardization estimator is typically efficient in large samples only when the outcome working model is correctly specified. However, there exist specific modelling and fitting strategies for the outcome working model (i.e., Step 1 of the algorithm in the previous section) to ensure that the (true) asymptotic variance for the standardization estimator is at least as small as that of the unadjusted estimator, even under model misspecification (Rubin and van der Laan, 2008; Diaz et al., 2016, 2019). This is most simply implemented in the context of linear models by including all treatment-by-covariate interactions (Tsiatis et al., 2008; Lin, 2013; Ye et al., 2022). More generally, Empirical Efficiency Maximisation is a specific implementation of standardization which is protected against precision loss under misspecification in large samples (Rubin and van der Laan, 2008). Interestingly, an implication of the results in Tsiatis et al. (2008) is that covariate adjustment cannot harm large sample efficiency (relative to the unadjusted estimator) in the common scenario with linear outcome models, two treatment groups and 1:1 randomization. Additionally, in this setting there is no efficiency gain from including treatment-by-covariate interactions in the model.
In finite samples, adjusting for covariates with a very weak or null association with the outcome may not lead to efficiency gains and can even result in a precision loss (Kahan et al., 2014; Tackney et al., 2022). It is therefore important that adjustment is performed for covariates that are anticipated to be strongly associated with the outcome. An additional concern is that adjustment may lead to inflated Type I error rates or poor coverage probabilities of confidence intervals when the number of covariates is large relative to the sample size because all variance estimation methods rely on asymptotics. Although there is no formal rule of thumb, a conservative approach would be to include a small number of key covariates, in line with regulatory guidelines (EMA 2015; FDA 2023). Ideally, clinical considerations should be taken into account. Variable selection may also be useful in this context. Research on how many covariates can safely be adjusted for is ongoing.
In addition, a small-sample correction can be used to account for the fact that the variance might be underestimated for finite sample sizes. For example, Tsiatis et al. (2008) multiplied the variance estimator by a small-sample correction factor. For the standardization estimator described above, one can use the correction factor
\[\frac{(n_{0}-p_{0}-1)^{-1}+(n_{1}-p_{1}-1)^{-1}}{(n_{0}-1)^{-1}+(n_{1}-1)^{-1}},\]
where \(n_{j}(>p_{j})\) for \(j=0,1\) are the numbers of patients used to fit the outcome working model in treatment arm \(j\) and \(p_{j}\) the numbers of parameters fitted in these models, exclusive of intercepts. The (nonparametric) bias-corrected and accelerated (BCa; Efron and Tibshirani 1994) bootstrap has been shown to improve performance for adjusted estimators (Benkeser et al. 2020; Van Lancker et al. 2022a).
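As a concrete illustration (not tied to any particular software package), a minimal sketch of applying this correction, with purely hypothetical numbers, is:

```python
def small_sample_correction(n0, n1, p0, p1):
    """Correction factor from the display above: n_j patients and p_j fitted
    parameters (excluding intercepts) in the arm-j outcome working model."""
    numerator = 1.0 / (n0 - p0 - 1) + 1.0 / (n1 - p1 - 1)
    denominator = 1.0 / (n0 - 1) + 1.0 / (n1 - 1)
    return numerator / denominator

# Hypothetical example: 100 patients per arm, 4 covariate coefficients per model.
factor = small_sample_correction(n0=100, n1=100, p0=4, p1=4)
corrected_variance = 0.02 * factor  # 0.02 is an illustrative uncorrected variance estimate
```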
### Are the benefits lost at larger sample sizes?
Although there is often a concern about the benefits (or rather the loss thereof) in finite sample sizes, sometimes there exists also some confusion about the benefits of covariate adjustment in larger samples. Specifically, the inclusion of covariate adjustments in the trial design is often seen as a desirable but not crucial element, as it is believed that random imbalances tend to diminish with larger sample sizes. As explained in the Question "Should one (only) include variables with imbalance?" above, the latter is not a justification for ignoring baseline covariates. Senn (1989) pointed out that covariate imbalance remains a significant concern in large studies, just as it is in small ones. This is because while absolute differences in baseline covariates (i.e., absolute imbalance) may decrease with larger sample sizes, standardized differences, which have an impact on precision, do not decrease.
In fact, the justification for the efficiency gain of covariate adjustment usually derives from large-sample theory. Specifically, the unadjusted and adjusted estimators have different asymptotic distributions. They are both unbiased and asymptotically normal but have different variances, with the variance of the adjusted estimator being generally smaller when the outcome working model is correctly specified. Therefore, one may expect to see improved precision even at large sample sizes.
### How to decide on the sample size?
At the design stage of a clinical trial, there is uncertainty about the amount of precision gain and corresponding sample size reduction due to covariate adjustment. Determining the required sample size when using covariate-adjusted estimators can be done in at least two ways. One approach is to assume conservatively that covariate adjustment will not lead to a precision gain (in which case any actual precision gains would increase power). Another approach is to consider how much precision can be gained based on external (trial) data when calculating the sample size (Li et al. 2021). An incorrect projection of a covariate's prognostic value, however, may still lead to an over- or underpowered future trial. To overcome this, Tsiatis and Davidian (2022) and Van Lancker et al. (2022a) suggest combining covariate adjustment with information adaptive designs (also known as information monitoring) to take advantage of the precision gain from covariate adjustment. Such an approach would start with setting the sample size at the design stage using one of the two ways described above, and then monitoring the actual precision gain (which is directly related to the statistical information) to determine the timing of the analysis. In particular, this approach will convert the gains into sample size reductions while controlling the Type I error rate and providing the desired power. For instance, when an adjusted estimator is more efficient than an unadjusted estimator, the sample size (and the duration of the trial) will be automatically adapted to the faster information accrual and covariate adjustment will in particular lead to shorter trials.
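The following sketch illustrates the two design approaches using a standard two-arm sample size formula for a difference in means; the effect size, variance, and projected variance-reduction factor are hypothetical values chosen for illustration only.

```python
import math
from scipy.stats import norm

def per_arm_sample_size(delta, sigma2, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-arm, 1:1 randomized comparison of means
    with anticipated difference `delta` and outcome variance `sigma2`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * sigma2 * z ** 2 / delta ** 2)

# Approach 1: conservative design, assuming no precision gain from adjustment.
n_conservative = per_arm_sample_size(delta=0.4, sigma2=1.0)

# Approach 2: deflate the variance by a reduction factor projected from external
# trial data (hypothetical value here); any shortfall or surplus in the realized
# gain can then be handled by information monitoring as described above.
projected_reduction = 0.75
n_projected = per_arm_sample_size(delta=0.4, sigma2=1.0 * projected_reduction)
```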
### Is there added value from covariate-adaptive randomization?
Besides adjusting for covariates at the analysis stage, covariates can also be accounted for, in a complementary way, within the randomization process. Examples of covariate-adaptive randomization, which refers to randomization procedures that take baseline covariates into account, include stratified permuted-block randomization, Pocock-Simon's minimization, stratified urn designs, and stratified biased coin designs (Sverdlov 2015). Such designs create balance of treatment arms with respect to the stratified variables in order to improve efficiency of the treatment effect estimate. For example, under the stratified permuted block or biased coin randomization, the standardization estimator described above is asymptotically at least as efficient as the same estimator under simple randomization (Wang et al. 2021). As a special case, Wang et al. (2021) have shown that the standardization estimator (with separate outcome working models for both arms) has the same asymptotic distribution regardless of whether simple, stratified or biased-coin randomisation is used when using outcome working models including indicators for the randomization strata. In particular, adjusting for baseline variables beyond those used for stratified randomization can lead to substantial precision gains (Wang et al. 2021). Ye et al. (2022) have shown that similar results hold for minimisation with a specific implementation of covariate adjustment; future research is needed to investigate whether this can be generalised. Importantly, to obtain the full benefit of covariate-adaptive randomization one should take the randomization procedure into account in the variance estimator (FDA 2019, 2023).
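As an illustration of one such scheme, here is a minimal sketch of stratified permuted-block randomization with 1:1 allocation; the block size and stratum labels are hypothetical.

```python
import numpy as np

def stratified_permuted_blocks(strata_labels, block_size=4, seed=2024):
    """Assign treatments 0/1 with 1:1 allocation using permuted blocks within
    each stratum; `strata_labels` lists one stratum per patient in enrollment
    order."""
    rng = np.random.default_rng(seed)
    assignments = np.empty(len(strata_labels), dtype=int)
    pending = {}  # stratum -> remaining assignments in the current block
    for i, stratum in enumerate(strata_labels):
        if not pending.get(stratum):
            pending[stratum] = rng.permutation([0, 1] * (block_size // 2)).tolist()
        assignments[i] = pending[stratum].pop()
    return assignments

z = stratified_permuted_blocks(["site A", "site B", "site A", "site A", "site B", "site B"])
```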
### Is there a connection between different covariate-adjusted estimators?
A plethora of options is available to adjust for covariates in the estimation of marginal treatment effects besides the standardization approach discussed in the previous section. For example, Tackney et al. (2022) compared standardization, Inverse Probability Weighting (IPW), Augmented IPW (AIPW), and Targeted Maximum Likelihood Estimation (TMLE) with standard ANalysis of COVAriance (ANCOVA) via simulation. Ye et al. (2022) recently discussed the ANalysis of HEterogeneous COVAriance (ANHECOVA) procedure, a simple extension of ANCOVA that incorporates treatment-by-covariate interactions; see also Yang and Tsiatis (2001) and Lin (2013). This poses a potential challenge to the trialist, who has to choose an approach for the data at hand.
The competing estimators can be viewed as members of a general class. As described in Tsiatis et al. (2008), all (reasonable) consistent and asymptotically normal estimators of the marginal risk difference can either be exactly written in the form
\[n^{-1}\sum_{i=1}^{n}\left[\frac{Z_{i}Y_{i}}{\hat{\pi}}-\frac{(1-Z_{i})Y_{i}}{1-\hat{\pi}}-\frac{Z_{i}-\hat{\pi}}{\hat{\pi}(1-\hat{\pi})}\left\{(1-\hat{\pi})h_{1}(X_{i};\hat{\mathbf{\gamma}})+\hat{\pi}h_{0}(X_{i};\hat{\mathbf{\eta}})\right\}\right] \tag{8}\]
or are approximately equivalent to an expression based on this form. For example, if one chooses \(h_{1}(X;\hat{\mathbf{\gamma}})=h_{0}(X;\hat{\mathbf{\eta}})=0\), then the above reduces to the unadjusted difference-in-means estimator.
The different proposals for covariate adjustment are thus connected and can in certain cases produce identical point estimates. Suppose we fit a linear model without treatment-by-covariate interactions using ordinary least squares as in (5), then (8) reduces exactly to the ANCOVA estimator. If one instead includes a full set of interactions, then (8) is equal to the ANHECOVA estimator. The IPW estimator described e.g. in Tackney et al. (2022) does not explicitly involve modelling the outcome, but instead postulates a logistic regression model for the probability of treatment given covariates. Nevertheless, it follows from (Shen et al., 2014) that if this model is fit using maximum likelihood, then the IPW and ANHECOVA estimators share the same asymptotic distribution. This is partly because under a simple randomisation scheme, treatment is in truth independent of covariates. If \(h_{1}(X;\hat{\mathbf{\gamma}})\) and \(h_{0}(X;\hat{\mathbf{\eta}})\) are obtained by fitting a generalised linear model with a canonical link function (via maximum likelihood), then (8) reduces to the standardization estimator described in this paper. AIPW is directly constructed using (8) and can incorporate both standard parametric and data-adaptive models/estimators for \(h_{1}(X;\hat{\mathbf{\gamma}})\) and \(h_{0}(X;\hat{\mathbf{\eta}})\). It might reduce (exactly or approximately) to one or more of the previously mentioned estimators, based on the choice of outcome model.
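To make this connection concrete, the following sketch evaluates the general form (8) with \(h_{1}\) and \(h_{0}\) taken to be arm-specific linear working models fitted by ordinary least squares (which, per the discussion above, yields the ANHECOVA point estimate); setting both working models to zero would recover the unadjusted difference in means. The simulated data are purely illustrative.

```python
import numpy as np

def adjusted_risk_difference(y, z, X):
    """Estimator of the form (8) with h_1, h_0 fitted as arm-specific linear
    models by ordinary least squares."""
    n = len(y)
    pi_hat = z.mean()                       # estimated randomization probability
    Xd = np.column_stack([np.ones(n), X])   # design matrix with intercept
    beta1, *_ = np.linalg.lstsq(Xd[z == 1], y[z == 1], rcond=None)
    beta0, *_ = np.linalg.lstsq(Xd[z == 0], y[z == 0], rcond=None)
    h1, h0 = Xd @ beta1, Xd @ beta0
    ipw_term = z * y / pi_hat - (1 - z) * y / (1 - pi_hat)
    augmentation = (z - pi_hat) / (pi_hat * (1 - pi_hat)) * (
        (1 - pi_hat) * h1 + pi_hat * h0)
    return np.mean(ipw_term - augmentation)

# Illustrative simulated trial with two prognostic baseline covariates.
rng = np.random.default_rng(2023)
n = 500
X = rng.normal(size=(n, 2))
z = rng.binomial(1, 0.5, size=n)
y = 0.5 * z + X @ np.array([1.0, -0.5]) + rng.normal(size=n)
print(adjusted_risk_difference(y, z, X))
```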
When choosing between the different approaches, large-sample theory suggests that the optimal choices of \(h_{1}(X;\hat{\mathbf{\gamma}})\) and \(h_{0}(X;\hat{\mathbf{\eta}})\) are \(E(Y|Z=1,X)\) and \(E(Y|Z=0,X)\); that is, an efficient estimator will use the true conditional means of the outcome. Although these are unknown, it motivates estimating them based on a working model that ideally is (approximately) correctly specified. In particular, one might tailor a model to a given \(Y\) (e.g. a logistic outcome working model for a binary outcome). All estimators mentioned in this section are unbiased in large samples when the outcome working model is misspecified; however, under misspecification the estimator with the smallest large sample variance will depend more delicately on how \(\mathbf{\gamma}\) and \(\mathbf{\eta}\) are estimated (see also the Question 'Can adjustment be harmful?" above).
### Is there an added value from'super-covariates'?
There has been a recent interest in using'super-covariates' as a form of adjustment, where a prognostic score is derived from historical data and used either in addition to or instead of standard baseline covariates. For example, PROgnostic COvariate Adjustment (PROCOVA) (Schuler et al., 2021) leverages historical data (from control arms of clinical trials and from observational studies) along with prognostic modeling to decrease the uncertainty in treatment effect estimates from randomized controlled trials measuring continuous responses. Being a special case of ANCOVA, PROCOVA preserves the Type I error rate, and is simple to use. In order for it to be fully efficient, the association between outcome and covariates must be equal (or proportional) across the different populations. In addition, it assumes that there are no treatment-by-covariate interactions. Although it will be unbiased when this assumption fails to hold, it may be less efficient than approaches that allow for interactions. In contrast, standardization, AIPW and TMLE are more generally optimal and many of these methods do not require these additional conditions for optimality. To make best use of the data in the trial, it seems safer to also include (a few) other baseline covariates (e.g., baseline measurement of outcome) instead of using only a single super-covariate.
### How to deal with missing data?
In many clinical trials, data may be missing on the baseline covariates or the outcomes. Restricting the analysis to the complete cases, however, can lead to an efficiency loss and possibly biased results.
Handling missing baseline covariate data in a trial where outcome data is fully observed is often easier than other missing data problems as long as the treatment assignment is independent of the baseline covariates and the missingness mechanism. This is usually expected to hold by virtue of randomisation, allowing the use of simple missing indicator and mean imputation procedures (White and Thompson, 2005; Chang et al., 2022; Zhao and Ding, 2022). These imputed covariates and missing indicators can then be thought of as a different set of baseline covariates. As covariate adjustment is reliable no matter which pre-specified covariates are used, this approach will remain valid. Nevertheless, less accurate imputation may lead to lower efficiency gain due to the lower prognostic values of the covariates. More complex imputation procedures can also be used, although one should be careful to prevent the imputation model from depending on treatment and the outcome. This is because one must preserve the independence between treatment and covariates in the imputed data. Benkeser et al. (2020) recommend that baseline covariates with large amounts of missingness should be excluded from the analysis as they will temper the precision gains.
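A minimal sketch of the mean-imputation-plus-missingness-indicator strategy (deliberately using neither treatment nor outcome) might look as follows; the toy covariate matrix is illustrative only.

```python
import numpy as np

def impute_baseline(X):
    """Mean-impute missing baseline covariates (NaN entries) and append one
    missingness indicator per covariate; the imputed covariates and the
    indicators are then treated as baseline covariates in the adjusted
    analysis."""
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)
    column_means = np.nanmean(X, axis=0)
    X_imputed = np.where(missing, column_means, X)
    return np.column_stack([X_imputed, missing.astype(float)])

X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 5.0]])
X_adjusted = impute_baseline(X)  # two imputed covariates plus two indicators
```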
If missing covariates are excluded rather than imputed then randomization still guarantees that the trial can estimate a causal effect, but there are two drawbacks to consider. Firstly, a smaller sample size in the study leads to reduced statistical power. Secondly, the treatment effect can only be generalized to a modified population that includes individuals with non-missing baseline covariates.
When outcome data are missing or time-to-event endpoints are censored, baseline covariates can be utilised not just for efficiency gains, but also to relax missing data
assumptions. Specifically, one can assume that outcomes are Missing At Random (MAR) rather than Missing Completely At Random (MCAR), meaning that missingness should be independent of the outcomes conditional on treatment and covariates. To perform an analysis, one can then tweak the aforementioned standardization procedure to make predictions for all patients in the dataset, rather than just for the complete cases. This is sometimes known as regression imputation or single imputation (Little and Rubin, 2019). The validity of the resulting treatment effect estimator now relies on a correctly specified outcome working model for the complete cases. Multiple imputation is an alternative approach which relies on the same modelling assumptions (Rubin, 2004; Schafer, 1997), and uses 'Rubin's Rules' to account for uncertainty in the imputations. By also incorporating (in the regression imputation approach) a model for the missing data mechanism, one can construct an estimator that is doubly robust; that is, unbiased if either the outcome model or the missingness model is correctly specified. This can be accomplished by using a weighted outcome regression in Step 1 of the algorithm in the previous section for the complete cases, with weights one divided by the probability of being a complete case (i.e., one minus probability of having a missing outcome) given baseline covariates. These methods can also accommodate time-varying prognostic covariates of the outcome that are associated with missingness under a sequential MAR assumption, although this requires a more involved implementation (Bang and Robins, 2005).
In the case that both outcome and covariate data are missing, the mean imputation and missing indicator strategies for covariates may no longer suffice as MAR may not hold conditional on these imputed covariates. How to best proceed in the general setting remains a topic for future research.
### What are the implications for hypothesis testing?
For the sake of simplicity and to be consistent with current practice, we focus on fitting one model including an indicator for treatment and baseline covariates. Let \(\hat{\beta}=(\hat{\beta}_{0},\hat{\beta}_{1},\hat{\beta}_{2})\) denote the estimated coefficients when fitting a generalised linear model with pre-specified canonical link function (4) using maximum likelihood estimation. The corresponding standardized estimator of the marginal risk difference, \(\hat{\mu}_{1}-\hat{\mu}_{0}\), is then defined as:
\[n^{-1}\sum_{i=1}^{n}g^{-1}(\hat{\beta}_{0}+\hat{\beta}_{1}+\hat{\beta}_{2}X_{i })-n^{-1}\sum_{i=1}^{n}g^{-1}(\hat{\beta}_{0}+\hat{\beta}_{2}X_{i}).\]
To test the null hypothesis of no marginal treatment effect \(H_{0}:E(Y^{1})=E(Y^{0})\), one can either test \(\beta_{1}=0\) or \(\mu_{1}-\mu_{0}=0\). It turns out that the resulting Wald tests based on the respective estimates \(\hat{\beta}_{1}\) and \(\hat{\mu}_{1}-\hat{\mu}_{0}\) both control the Type I error rate and are equally powerful in large samples (Rosenblum and Steingrimsson, 2016). Note that this also holds under arbitrary model misspecification of the outcome (working) model. Rosenblum and Steingrimsson (2016) also conducted a simulation study that shows that these results approximately hold for finite samples. Thus, the misalignment of current practices noted for marginal treatment effect estimation does not carry over to testing in large samples, as long as one does not include interactions. Nevertheless, enhanced standardization estimators (e.g., by fitting separate outcome working models or by including a model for randomization) have the potential for greater efficiency gains when the outcome working models are misspecified.
### Which software should I use?
To promote the use of covariate adjustment in practice, it is necessary that easy-to-use, well tested, open-source software is accessible to researchers. The R package _RobinCar_ includes covariate adjustment for the common outcome types (continuous, discrete, and time-to-event). It includes the standardization approach as well as the ANHECOVA methodology of Ye et al. (2022), and can also be used in conjunction with covariate-adaptive randomization schemes. As a brief survey of other available software in R: the package _speff2trial_ performs estimation and testing of the treatment effect in a 2-group randomized clinical trial with a quantitative, dichotomous, or right-censored time-to-event endpoint; _drord_ implements the efficient covariate-adjusted estimators described in Benkeser et al. (2020) for establishing the effects of treatments on ordinal outcomes; _survtrmle_ performs targeted estimation of marginal cumulative incidence for time-to-event endpoints with and without competing risks as described in Benkeser et al. (2018); _adjrct_ implements efficient estimators for the restricted mean survival time and survival probability for time-to-event outcomes and the average log odds ratio and Mann-Whitney estimand for ordinal outcomes without proportional hazards and odds assumptions (see Vermeulen et al., 2015; Diaz et al., 2016, 2019). Betz et al. (2023) developed tutorials that are meant to help trials make use of covariate adjustment across the lifespan of a randomized trial, from pre-trial preparations to interim analyses and final reporting of results. The purpose of these tutorials is to emulate practical settings and datasets commonly encountered in real-world scenarios. One area for further software development is empirical efficiency maximisation (Rubin and van der Laan, 2008), since these methods may help to assuage concerns regarding efficiency loss in the presence of model misspecification.
## Discussion
In this paper, we discuss the use of covariate adjustment in randomized controlled trials to improve precision, which has become increasingly important due to changing regulatory requirements (ICH, 2019; FDA, 2023). We highlight the distinction between conditional and marginal estimands and the need to align statistical analysis approaches with the chosen estimand. We also point out the potential misalignment of current practices for marginal treatment effect estimation. Instead, we advocate using standardization, which can increase efficiency by exploiting the information in baseline covariates while remaining robust to model misspecification. Finally, we offer practical considerations related to covariate adjustment for clinical trialists to keep in mind. Although we have focused on marginal estimands in this paper, we note that similar developments could in principle be made for conditional
estimands, along the lines of Vansteelandt and Dukes (2022); this warrants further study.
In a recent call toward better practice of covariate adjustment in analyzing randomized clinical trials, Ye et al. (2022) presented three practical requirements for covariate adjustment via model-assisted approaches: (a) no (asymptotic) efficiency loss compared to an estimator that does not adjust for covariates; (b) applicable to commonly used randomization schemes; and (c) valid inference based on robust standard errors, even under model misspecification. We agree with their considerations and believe that besides ANHECOVA (Ye et al., 2022), which is most naturally suited to continuous outcomes, the standardization approach discussed in this paper can also satisfy these requirements (see the Questions 'Can covariate adjustment be harmful?', 'Is there added value from covariate-adaptive randomization?' and 'Is there a connection between different covariate-adjusted estimators?' in the previous section).
In this article, we mainly focused on the use of covariate adjustment to define causal conditional estimands (e.g., subgroup effects) and to improve the precision when estimating marginal causal effects, as these were the most significant contribution in (FDA, 2023). Baseline covariates are also valuable to be used as stratification factors to ensure balance of treatments across covariates. There is also a growing recognition among researchers of the importance of accounting for intercurrent events to ensure the validity of results (ICH, 2019). In this article we have focused on the intention-to-treat estimand which considers the occurrence of intercurrent events irrelevant in defining the effect of the (assigned) treatment. However, additional strategies have been considered which explicitly account for intercurrent events (ICH, 2019). Unlike treatment assignment at baseline, the occurrence of an intercurrent event is not randomized. Many of the estimands referred to in the ICH E9 addendum are therefore only identified under stronger assumptions, even in the absence of loss to follow up. In particular, identification of a causal effect when the intercurrent event forms part of the intervention often requires measurement of baseline as well as post-baseline covariates. For example, estimands defined under a hypothetical strategy would usually require measurement of time-varying confounders (Olarte Parra et al., 2022; Stensrud and Dukes, 2022). This is similar to observational studies in which baseline covariates also play a vital role to control for confounding. By including baseline covariates in the analysis, we can control for these confounding factors and obtain a more accurate estimate of the exposure-outcome relationship.
We believe that incorporating covariate adjustment can enhance the quality and efficiency of clinical trials. To encourage wider adoption of this approach, it would be beneficial to have multiple estimators available within a single R package. Combining this with guidelines on how to specify the considered estimand and covariate-adjusted estimator in a study protocol may prompt statisticians to more carefully consider appropriate estimands and corresponding estimators, ultimately leading to an increased and more thoughtful use of covariate adjustment in clinical trials.
## Acknowledgements
We would like to thank Mouna Akacha, Mark Baillie, Bjorn Bornkamp, Christopher Jennison, Hege Michiels, Dan Rubin, Stijn Vansteelandt, Jiawei Wei, and Ting Ye for helpful comments on an earlier version of this paper.
|
2310.09727 | Provably Fast Convergence of Independent Natural Policy Gradient for
Markov Potential Games | This work studies an independent natural policy gradient (NPG) algorithm for
the multi-agent reinforcement learning problem in Markov potential games. It is
shown that, under mild technical assumptions and the introduction of the
\textit{suboptimality gap}, the independent NPG method with an oracle providing
exact policy evaluation asymptotically reaches an $\epsilon$-Nash Equilibrium
(NE) within $\mathcal{O}(1/\epsilon)$ iterations. This improves upon the
previous best result of $\mathcal{O}(1/\epsilon^2)$ iterations and is of the
same order, $\mathcal{O}(1/\epsilon)$, that is achievable for the single-agent
case. Empirical results for a synthetic potential game and a congestion game
are presented to verify the theoretical bounds. | Youbang Sun, Tao Liu, Ruida Zhou, P. R. Kumar, Shahin Shahrampour | 2023-10-15T04:10:44Z | http://arxiv.org/abs/2310.09727v2 | # Provably Fast Convergence of Independent Natural Policy Gradient for Markov Potential Games
###### Abstract
This work studies an independent natural policy gradient (NPG) algorithm for the multi-agent reinforcement learning problem in Markov potential games. It is shown that, under mild technical assumptions and the introduction of the _suboptimality gap_, the independent NPG method with an oracle providing exact policy evaluation asymptotically reaches an \(\epsilon\)-Nash Equilibrium (NE) within \(\mathcal{O}(1/\epsilon)\) iterations. This improves upon the previous best result of \(\mathcal{O}(1/\epsilon^{2})\) iterations and is of the same order, \(\mathcal{O}(1/\epsilon)\), that is achievable for the single-agent case. Empirical results for a synthetic potential game and a congestion game are presented to verify the theoretical bounds.
## 1 Introduction
Reinforcement learning (RL) is often impacted by the presence and interactions of several agents in a multi-agent system. This challenge has motivated recent studies of multi-agent reinforcement learning (MARL) in stochastic games [37; 3]. Applications of MARL include robotics [30], modern production systems [2], economic decision making [33], and autonomous driving [31]. Among the various types of stochastic games, we focus on a commonly studied model for MARL, known as Markov Potential Games (MPGs). MPGs are seen as a generalization of canonical Markov Decision Processes (MDPs) in the multi-agent setting. In MPGs, there exists a potential function that can track the value changes of all agents. Unlike single-agent systems, where the goal is to find the optimal policy, the objective in this paper is to find a global policy, formed by the joint product of a set of local policies, that leads the system to reach a Nash equilibrium (NE) [29], which is precisely defined in Section 2.
A major challenge in the analysis of multi-agent systems is the restriction on joint policies of agents. For single-agent RL, policy updates are designed to increase the probability of selecting the action with the highest reward. However, in multi-agent systems, the global policy is constructed by taking the product of local agents' policies, which makes MARL algorithms suffer a greater risk of being trapped near undesirable stationary points. Consequently, finding a NE in MARL is more challenging than finding the global optimum in the single-agent case, and it is therefore difficult for MPGs to recover the convergence rates of single-agent Markov decision processes (MDPs).
Additionally, the global action space in MPGs scales exponentially with the number of agents within the system, making it crucial to find an algorithm that scales well for a large number of agents. Recent
studies [8; 14] addressed the issue by an approach called independent learning, where each agent performs policy update based on local information without regard to policies of the other agents. Independent learning algorithms scale only linearly with respect to the number of agents and are therefore preferred for large-scale multi-agent problems.
Using algorithms such as policy gradient (PG) and natural policy gradient (NPG), single-agent RL can provably converge to the global optimal policy [1]. However, extending these algorithms from single-agent to multi-agent settings presents natural challenges as discussed above. Multiple recent works have analyzed PG and NPG in multi-agent systems. However, due to the unique geometry of the problem and complex relationships among the agents, the theoretical understanding of MARL is still limited, with most works showing slower convergence rates when compared to their single-agent counterparts (see Table 1).
**Contributions.** We study the independent NPG algorithm in multi-agent systems and provide a novel technical analysis that guarantees a provably fast convergence rate. We start our analysis with potential games in Section 3.1 and then generalize the findings and provide a convergence guarantee for Markov potential games in Section 3.2. We show that under mild assumptions, the ergodic NE-gap (i.e., its temporal average) converges with iteration complexity of \(\mathcal{O}(1/\epsilon)\) after a finite threshold (Theorem 3.6). This result provides a substantial improvement over the best known rate of \(\mathcal{O}\big{(}1/\epsilon^{2}\big{)}\) in [38]. Our main theorem also reveals mild or improved dependence on multiple critical factors, including the number of agents \(n\), the initialization dependent factor \(c\), the distribution mismatch coefficient \(M\), and the discount factor \(\gamma\), discussed in Section 3. We dedicate Section 3.3 to discussing the impact of the asymptotic suboptimality gap \(\delta^{*}\), which is a new factor in this work.
In addition to our theoretical results, two numerical experiments are also conducted in Section 4 for verification of the analysis. We consider a synthetic potential game similar to [38] and a congestion game from [19]. The omitted proofs of our theoretical results can be found in the appendix.
### Related Literature
**Markov Potential Games.** Since many properties of single-agent RL do not hold in MARL, the analysis of MARL presents several challenges. Various settings have been addressed for MARL in recent works. A major distinction between these works stems from whether the agents are competitive or cooperative [11]. In this paper we consider MPGs introduced in stochastic control [7]. MPGs are a generalized formulation of identical-reward cooperative games. Markov cooperative games have been studied in the early work of [34], and more recently by [23; 24; 32; 10]. The work [36]
\begin{table}
\begin{tabular}{c|c} \hline \hline Algorithm & Iteration Complexity \\ \hline PG + direct[39; 19] & \(\mathcal{O}\Big{(}\frac{\sum_{i=1}^{n}|\mathcal{A}_{i}|M^{2}}{(1-\gamma)^{4}\epsilon^{2}}\Big{)}\) \\ PG + softmax[38] & \(\mathcal{O}\Big{(}\frac{n\max_{i}|\mathcal{A}_{i}|M^{2}}{(1-\gamma)^{4}\epsilon^{2}}\Big{)}\) \\ NPG + softmax[38] & \(\mathcal{O}\Big{(}\frac{n\max_{i}|\mathcal{A}_{i}|M^{2}}{(1-\gamma)^{4}\epsilon^{2}}\Big{)}\) \\ NPG + softmax + log-barrier reg.[38] & \(\mathcal{O}\Big{(}\frac{n\max_{i}|\mathcal{A}_{i}|M^{2}}{(1-\gamma)^{4}\epsilon^{2}}\Big{)}\) \\ Projected Q ascent[8] & \(\mathcal{O}\Big{(}\frac{n^{2}\max_{i}|\mathcal{A}_{i}|M^{2}}{(1-\gamma)^{7}\epsilon^{4}}\Big{)}\) or \(\mathcal{O}\Big{(}\frac{n\max_{i}|\mathcal{A}_{i}|M^{4}}{(1-\gamma)^{2}\epsilon^{2}}\Big{)}\) \\ Projected Q ascent (fully coop)[8] & \(\mathcal{O}\Big{(}\frac{n\max_{i}|\mathcal{A}_{i}|M}{(1-\gamma)^{3}\epsilon^{2}}\Big{)}\) \\ \hline (Ours) NPG + softmax & \(\mathcal{O}\Big{(}\frac{\sqrt{n}M}{(1-\gamma)^{2}c\delta^{*}\epsilon}\Big{)}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Convergence rate results for policy gradient-based methods in Markov potential games. Some results have been modified to ensure better comparison.
also offered extensive empirical results for cooperative games using multi-agent proximal policy optimization. The work by [27] established polynomial convergence for MPGs under both Q-update as well as actor-critic update. [25] studied an independent Q-update in MPGs with perturbation, which converges to a stationary point with probability one. Conversely, [17; 13] studied potential games, a special static case of simplified MPGs with no state transition.
**Policy Gradient in Games.** Policy gradient methods for centralized MDPs have drawn much attention thanks to recent advancements in RL theory [1; 26]. The extension of PG methods to multi-agent settings is quite natural. [6; 35; 5] studied two-player zero-sum competitive games. The general-sum linear-quadratic game was studied in [12]. [14] studied general-sum Markov games and provided convergence of V-learning in two-player zero-sum games.
Of particular relevance to our work are the works [19; 39; 38; 8], which focus on the MPG setting and propose adaptations of PG and NPG-based methods from single-agent problems to the MARL setting. Table 1 provides a detailed comparison between these works. The previous theoretical results in multi-agent systems have provided convergence rates dependent on different parameters of the system. However, the best-known iteration complexity to reach an \(\epsilon\)-NE in MARL is \(\mathcal{O}\big{(}1/\epsilon^{2}\big{)}\). Therefore, there still exists a rate discrepancy between MARL methods and centralized RL algorithms, for which an \(\mathcal{O}(1/\epsilon)\) complexity has been established [1]. Our main contribution is to close this gap by establishing an iteration complexity of \(\mathcal{O}(1/\epsilon)\) in this work.
## 2 Problem Formulation
We consider a stochastic game \(\mathcal{M}=(n,\mathcal{S},\mathcal{A},P,\{r_{i}\}_{i\in[n]},\gamma,\rho)\) consisting of \(n\) agents denoted by a set \([n]=\{1,...,n\}\). The global action space \(\mathcal{A}=\mathcal{A}_{1}\times...\times\mathcal{A}_{n}\) is the product of individual action spaces, with the global action defined as \(\mathbf{a}:=(a_{1},...,a_{n})\). The global state space is represented by \(\mathcal{S}\), and the system transition model is captured by \(P:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\). Furthermore, each agent is equipped with an individual reward function \(r_{i}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\). We use \(\gamma\in(0,1)\) to denote the discount factor and \(\rho\in\Delta(\mathcal{S})\) to denote the initial state distribution.
The system policy is denoted by \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A}_{1})\times\cdots\times\Delta( \mathcal{A}_{n})\subset\Delta(\mathcal{A})\), where \(\Delta(\mathcal{A})\) is the probability simplex over the global action space. In the multi-agent setting, all agents make decisions independently given the observed state, often referred to as a _decentralized_ stochastic policy [38]. Under this setup, we have \(\pi(\mathbf{a}|s)=\prod_{i\in[n]}\pi_{i}(a_{i}|s)\), where \(\pi_{i}:\mathcal{S}\rightarrow\Delta(\mathcal{A}_{i})\) is the local policy for agent \(i\). For the ease of notation, we denote the joint policy over the set \([n]\backslash\{i\}\) by \(\pi_{-i}=\prod_{j\in[n]\backslash\{i\}}\pi_{j}\) and use the notation \(a_{-i}\) analogously.
We define the state value function \(V_{i}^{\pi}(s)\) with respect to the reward \(r_{i}(s,\mathbf{a})\) as \(V_{i}^{\pi}(s):=\mathbb{E}^{\pi}[\sum_{t=0}^{\infty}\gamma^{t}r_{i}(s^{t},\bm {a}^{t})|s^{0}=s]\), where \((s^{t},\mathbf{a}^{t})\) denotes the global state-action pair at time \(t\), and we denote the expected value of the state value function over the initial state distribution \(\rho\) as \(V_{i}^{\pi}(\rho):=\mathbb{E}_{s\sim\rho}[V_{i}^{\pi}(s)]\). We can similarly define the state visitation distribution under \(\rho\) as \(d_{\rho}^{\pi}(s):=(1-\gamma)\mathbb{E}^{\pi}\left[\sum_{t=0}^{\infty}\gamma^ {t}\mathbb{1}(s_{t}=s)|s_{0}\sim\rho\right]\), where \(\mathbb{1}\) is the indicator function. The state-action value function and advantage function are, respectively, given by
\[Q_{i}^{\pi}(s,\mathbf{a})=\mathbb{E}^{\pi}[\sum_{t=0}^{\infty}\gamma^{t}r_{i}(s^{t },\mathbf{a}^{t})|s^{0}=s,\mathbf{a}^{0}=\mathbf{a}],\ \ A_{i}^{\pi}(s,\mathbf{a})=Q_{i}^{\pi}(s,\mathbf{a})-V_{i}^{\pi}(s). \tag{1}\]
For the sake of analysis, we further define the marginalized Q-function and advantage function \(\bar{Q}_{i}:\mathcal{S}\times\mathcal{A}_{i}\rightarrow\mathbb{R}\) and \(\bar{A}_{i}:\mathcal{S}\times\mathcal{A}_{i}\rightarrow\mathbb{R}\) as:
\[\bar{Q}_{i}^{\pi}(s,a_{i}):=\sum_{a_{-i}}\pi_{-i}(a_{-i}|s)Q_{i}^{\pi}(s,a_{i},a_{-i}),\ \bar{A}_{i}^{\pi}(s,a_{i}):=\sum_{a_{-i}}\pi_{-i}(a_{-i}|s)A_{i}^{\pi}(s,a_{i},a _{-i}). \tag{2}\]
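To make (1)-(2) concrete, the following sketch performs exact policy evaluation for one agent in a small tabular game; it is restricted to two agents purely to keep the tensor indexing simple, and all array shapes and numbers are illustrative.

```python
import numpy as np

def marginalized_advantage(P, r1, pi1, pi2, gamma):
    """Exact evaluation of agent 1 in a two-agent tabular game.
    P[s, a1, a2, s'] is the transition kernel, r1[s, a1, a2] agent 1's reward,
    and pi_j[s, a_j] the local policies.  Returns V_1(s) and the marginalized
    advantage Abar_1(s, a_1) of eq. (2)."""
    S = P.shape[0]
    joint = np.einsum('sa,sb->sab', pi1, pi2)             # pi(a1, a2 | s)
    P_pi = np.einsum('sabt,sab->st', P, joint)            # P^pi(s' | s)
    r_pi = np.einsum('sab,sab->s', r1, joint)             # expected one-step reward
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)   # (I - gamma P^pi) V = r^pi
    Q = r1 + gamma * np.einsum('sabt,t->sab', P, V)       # Q_1(s, a1, a2), eq. (1)
    Q_bar = np.einsum('sab,sb->sa', Q, pi2)               # marginalize over a2 ~ pi2
    return V, Q_bar - V[:, None]

# Tiny random example: 3 states, 2 actions per agent, uniform local policies.
rng = np.random.default_rng(1)
P = rng.uniform(size=(3, 2, 2, 3))
P /= P.sum(axis=-1, keepdims=True)
r1 = rng.uniform(size=(3, 2, 2))
pi_uniform = np.full((3, 2), 0.5)
V1, Abar1 = marginalized_advantage(P, r1, pi_uniform, pi_uniform, gamma=0.9)
```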
**Definition 2.1** ([39]).: _The stochastic game \(\mathcal{M}\) is a Markov potential game if there exists a bounded potential function \(\phi:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) such that for any agent \(i\), initial state \(s\) and any set of policies \(\pi_{i},\pi_{i}^{\prime},\pi_{-i}\):_
\[V_{i}^{\pi_{i}^{\prime},\pi_{-i}}(s)-V_{i}^{\pi_{i},\pi_{-i}}(s)=\Phi^{\pi_{i}^ {\prime},\pi_{-i}}(s)-\Phi^{\pi_{i},\pi_{-i}}(s),\]
_where \(\Phi^{\pi}(s):=\mathbb{E}^{\pi}[\sum_{k=0}^{\infty}\gamma^{k}\phi(s^{k},\mathbf{a}^ {k})|s^{0}=s]\)._
We assume that an upper bound exists for the potential function, i.e., \(0\leq\phi(s,\mathbf{a})\leq\phi_{max},\forall s\in\mathcal{S},\mathbf{a}\in\mathcal{A}\), and consequently, \(\Phi^{\pi}(s)\leq\frac{\phi_{max}}{1-\gamma}\).
It is common in policy optimization to parameterize the policy for easier computations. In this paper, we focus on the widely used softmax parameterization [1; 26], where a global policy \(\pi(\mathbf{a}|s)=\prod_{i\in[n]}\pi_{i}(a_{i}|s)\) is parameterized by a set of parameters \(\{\theta_{1},...,\theta_{n}\},\theta_{i}\in\mathbb{R}^{|\mathcal{S}|\times| \mathcal{A}_{i}|}\) in the following form
\[\pi_{i}(a_{i}|s)=\frac{\exp\{[\theta_{i}]_{s,a_{i}}\}}{\sum_{a_{j}\in\mathcal{ A}_{i}}\exp\{[\theta_{i}]_{s,a_{j}}\}},\;\forall(s,a_{i})\in\mathcal{S}\times \mathcal{A}_{i}.\]
### Optimality Conditions
In the MPG setting, there may exist multiple stationary points, a set of policies that has zero policy gradients, for the same problem; therefore, we need to introduce notions of solutions to evaluate policies. The term _Nash equilibrium_ is used to define a measure of "stationarity" in strategic games.
**Definition 2.2**.: _A joint policy \(\pi^{*}\) is called a Nash equilibrium if for all \(i\in[n]\), we have_
\[V_{i}^{\pi_{i}^{*},\pi_{-i}^{*}}(\rho)\geq V_{i}^{\pi_{i}^{\prime},\pi_{-i}^{* }}(\rho)\;\;\;\text{for all $\pi_{i}^{\prime}$}.\]
For any given joint policy that does not necessarily satisfy the definition of NE, we provide the definition of NE-gap as follows [39]:
\[\text{NE-gap}(\pi):=\max_{i\in[n],\pi_{i}^{\prime}\in\Delta(\mathcal{A}_{i})} \left[V_{i}^{\pi_{i}^{\prime},\pi_{-i}}(\rho)-V_{i}^{\pi_{i},\pi_{-i}}(\rho) \right].\]
Furthermore, we refer to a joint policy \(\pi\) as \(\epsilon\)-NE when its NE-gap\((\pi)\leq\epsilon\). The NE-gap satisfies the following inequalities based on the performance difference lemma [15; 38],
\[\text{NE-gap}(\pi)\leq\frac{1}{1-\gamma}\sum_{i,s,a_{i}}d_{\rho}^{\pi_{i}^{*},\pi_{-i}}(s)\pi_{i}^{*}(a_{i}|s)\bar{A}_{i}^{\pi}(s,a_{i})\leq\frac{1}{1- \gamma}\sum_{i,s}d_{\rho}^{\pi_{i}^{*},\pi_{-i}}(s)\max_{a_{i}}\bar{A}_{i}^{ \pi}(s,a_{i}).\]
In tabular single-agent RL, most works consider the optimality gap as the difference between the expectations of the value functions of the current policy and the optimal policy, defined as \(V^{\pi^{k}}(\rho)-V^{\pi^{*}}(\rho)\). However, this notion does not extend to multi-agent systems. Even in a fully cooperative MPG where all agents share the same reward, the optimal policy of one agent is dependent on the joint policies of other agents. As a result, it is common for the system to have multiple "best" policy combinations (or stationary points), which all constitute Nash equilibria. Additionally, previous works have shown that any NE point in an MPG is first-order stable [39]. Given that this work addresses a MARL problem, we focus our analysis on the NE-gap.
## 3 Main Results
### Warm-Up: Potential Games
In this section, we first consider the instructive case of static potential games, where the state does not change with time. Potential games are an important class of games that admit a potential function \(\phi\) to capture differences in each agent's reward function caused by unilateral bias [28; 13], which is defined as
\[r_{i}(a_{i},a_{-i})-r_{i}(a_{i}^{\prime},a_{-i})=\phi(a_{i},a_{-i})-\phi(a_{i} ^{\prime},a_{-i}),\quad\forall a_{i},a_{i}^{\prime},a_{-i}. \tag{3}\]
**Algorithm Update.** In the potential games setting, the policy update using natural policy gradient is [4]:
\[\pi_{i}^{k+1}(a_{i})\propto\pi_{i}^{k}(a_{i})\exp\bigl{(}\eta\bar{r}_{i}^{k}( a_{i})\bigr{)}, \tag{4}\]
where the exact independent gradient over policy \(\pi_{i}\), also referred to as oracle, is captured by the marginalized reward \(\bar{r}_{i}(a_{i})=\mathbb{E}_{a_{-i}\sim\pi_{-i}}[r_{i}(a_{i},a_{-i})]\). By definition, the NE-gap for potential games
is calculated as \(\max_{i\in[n]}\langle\pi_{i}^{*k}-\pi_{i}^{k},\bar{r}_{i}^{k}\rangle\), where \(\pi_{i}^{*k}\in\arg\max_{\pi_{i}}V_{i}^{\pi_{i},\pi_{-i}^{k}}\) is the optimal solution for agent \(i\) when the rest of the agents use the joint policy \(\pi_{-i}^{k}\).
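A minimal sketch of this update for a fully cooperative potential game with a shared reward tensor is given below; the game instance, the step size \(\eta=1/(2\sqrt{n})\), and the iteration budget are illustrative choices rather than a prescribed experimental setup.

```python
import numpy as np

def marginalized_rewards(r, policies):
    """Exact oracle for rbar_i(a_i) = E_{a_{-i} ~ pi_{-i}}[ r(a_i, a_{-i}) ]
    in a fully cooperative game with shared reward tensor r[a_1, ..., a_n]."""
    out = []
    for i in range(len(policies)):
        g = r
        for j in reversed(range(len(policies))):  # contract the other agents' axes
            if j != i:
                g = np.tensordot(g, policies[j], axes=([j], [0]))
        out.append(g)
    return out

def npg_step(policies, r, eta):
    """One independent NPG (multiplicative-weights) step, eq. (4)."""
    new_policies = []
    for pi, rbar in zip(policies, marginalized_rewards(r, policies)):
        weights = pi * np.exp(eta * rbar)
        new_policies.append(weights / weights.sum())
    return new_policies

def ne_gap(policies, r):
    """NE-gap: max_i < pi_i^* - pi_i, rbar_i > for the current joint policy."""
    rbar = marginalized_rewards(r, policies)
    return max(rb.max() - pi @ rb for pi, rb in zip(policies, rbar))

# Random cooperative potential game: 3 agents with 3, 4 and 5 actions.
rng = np.random.default_rng(0)
sizes = (3, 4, 5)
r = rng.uniform(size=sizes)
policies = [np.ones(m) / m for m in sizes]      # uniform initialization
eta = 1.0 / (2 * np.sqrt(len(sizes)))
for _ in range(200):
    policies = npg_step(policies, r, eta)
print(ne_gap(policies, r))
```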
The local marginalized reward \(\bar{r}_{i}(a_{i})\) is calculated based on other agents' policies; hence, for any two sets of policies, the difference in marginalized reward can be bounded using the total variation distance of the two probability measures [4]. Using this property, we can also show that there is a "smooth" relationship between the marginalized rewards and their respective policies. We note that this relationship holds for stochastic games in general. It does not depend on the nature of the policy update or the potential game assumption.
We now introduce a lemma that is specific to the potential game formulation and the NPG update:
**Lemma 3.1**.: _Given policy \(\pi^{k}\) and marginalized reward \(\bar{r}_{i}^{k}(a_{i})\), for \(\pi^{k+1}\) generated using an NPG update in (4), we have the following inequality for any \(\eta<\frac{1}{\sqrt{n}}\),_
\[\phi(\pi^{k+1})-\phi(\pi^{k})\geq(1-\sqrt{n}\eta)\sum_{i=1}^{n} \langle\pi_{i}^{k+1}-\pi_{i}^{k},\bar{r}_{i}^{k}\rangle.\]
Lemma 3.1 provides a lower bound on the difference of potential functions between two consecutive steps. This implies that at each time step, the potential function value is guaranteed to be monotonically increasing, as long as the learning rate satisfies \(\eta<\frac{1}{\sqrt{n}}\).
Note that the lower bound of Lemma 3.1 involves \(\langle\pi_{i}^{k+1}-\pi_{i}^{k},\bar{r}_{i}^{k}\rangle\), which resembles the form of NE-gap \(\langle\pi_{i}^{*k}-\pi_{i}^{k},\bar{r}_{i}^{k}\rangle\). Assuming we can establish a lower bound for the right-hand side of Lemma 3.1 using NE-gap, the next step is to show that the sum of all NE-gap iterations is upper bounded by a telescoping sum of the potential function, thus obtaining an upper bound on the NE-gap.
We start by introducing a function \(f^{k}:\mathbb{R}\rightarrow\mathbb{R}\) defined as follows,
\[f^{k}(\alpha)= \sum_{i=1}^{n}\langle\pi_{i,\alpha}-\pi_{i}^{k},\bar{r}_{i}^{k}\rangle \text{, where }\pi_{i,\alpha}(\cdot)\propto\pi_{i}^{k}(\cdot)\exp\bigl{\{} \alpha\bar{r}_{i}^{k}(\cdot)\bigr{\}}. \tag{5}\]
It is obvious that \(f^{k}(0)=0\), \(f^{k}(\eta)=\sum_{i}\langle\pi_{i}^{k+1}-\pi_{i}^{k},\bar{r}_{i}^{k}\rangle\geq 0\), and \(\lim_{\alpha\rightarrow\infty}f^{k}(\alpha)=\sum_{i}\langle\pi_{i}^{*k}-\pi_{ i}^{k},\bar{r}_{i}^{k}\rangle\).
Without loss of generality, for agent \(i\) at iteration \(k\), define \(a_{i_{p}}^{k}\in\arg\max_{a_{j}\in\mathcal{A}_{i}}\bar{r}_{i}^{k}(a_{j})=: \mathcal{A}_{i_{p}}^{k}\) and \(a_{i_{q}}^{k}\in\arg\max_{a_{j}\in\mathcal{A}_{i}\backslash\mathcal{A}_{i_{p}} ^{k}}\bar{r}_{i}^{k}(a_{j})\), where \(\mathcal{A}_{i_{p}}^{k}\) denotes the set of the best possible actions for agent \(i\) at iteration \(k\). Similar to [38], we define
\[c^{k}:=\min_{i\in[n]}\sum_{a_{j}\in\mathcal{A}_{i_{p}}^{k}}\pi_{ i}^{k}(a_{j})\in(0,1),\quad\delta^{k}:=\min_{i\in[n]}[\bar{r}_{i}^{k}(a_{i_{p}}^{ k})-\bar{r}_{i}^{k}(a_{i_{q}}^{k})]\in(0,1). \tag{6}\]
Additionally, we denote \(c_{K}:=\min_{k\in[K]}c^{k};c:=\inf_{K}c_{K}>0;\delta_{K}:=\min_{k\in[K]}\delta ^{k}\).
We provide the following lemma on the relationship between \(f^{k}(\alpha)\) and \(f^{k}(\infty):=\lim_{\alpha\rightarrow\infty}f^{k}(\alpha)\), which lays the foundation to obtain sharper results than those in the existing work.
**Lemma 3.2**.: _For function \(f^{k}(\alpha)\) defined in (5) and any \(\alpha>0\), we have the following inequality._
\[f^{k}(\alpha)\geq f^{k}(\infty)\left[1-\frac{1}{c(\exp(\alpha \delta_{K})-1)+1}\right].\]
We refer to \(\delta_{K}\) as the minimal suboptimality gap of the system for the first \(K\) iterations, the effect of which will be discussed later in Section 3.3. Using the two lemmas above and the definitions of \(c\) and \(\delta_{K}\), we establish the following theorem on the convergence of the NPG algorithm in potential games.
**Theorem 3.3**.: _Consider a potential game with NPG update using (4). For any \(K\geq 1\), choosing \(\eta=\frac{1}{2\sqrt{n}}\), we have_
\[\frac{1}{K}\sum_{k=0}^{K-1}\text{NE-gap}(\pi^{k})\leq\frac{2\phi _{max}}{K}(1+\frac{2\sqrt{n}}{c\delta_{K}}).\]
Proof.: Choose \(\alpha=\eta=\frac{1}{2\sqrt{n}}\) in Lemma 3.2,
\[\text{NE-gap}(\pi^{k}) \leq\lim_{\alpha\to\infty}f^{k}(\alpha)\leq\frac{1}{1-\frac{1}{c( \exp(\delta_{K}\eta)-1)+1}}f^{k}(\eta)\leq\frac{1}{1-\frac{1}{c\delta_{K}\eta+1 }}f^{k}(\eta)\] \[\leq\frac{1}{(1-\sqrt{n}\eta)\frac{c\delta_{K}\eta}{c\delta_{K} \eta+1}}[\phi(\pi^{k+1})-\phi(\pi^{k})]=2[\phi(\pi^{k+1})-\phi(\pi^{k})](1+ \frac{2\sqrt{n}}{c\delta_{K}}).\]
Then, we have \(\frac{1}{K}\sum_{k=0}^{K-1}\text{NE-gap}(\pi^{k})\leq\frac{2(\phi(\pi^{K})- \phi(\pi^{0}))}{K}(1+\frac{2\sqrt{n}}{c\delta_{K}})\leq\frac{2\phi_{\text{max} }}{K}(1+\frac{2\sqrt{n}}{c\delta_{K}})\).
One challenge in MARL is that the NE-gap is not monotonic, so we must seek ergodic convergence results, which characterize the behavior of the temporal average of the NE-gap. Theorem 3.3 shows that for potential games with NPG policy update, the ergodic NE-gap of the system converges to zero with a rate \(\mathcal{O}(1/(K\delta_{K}))\). When \(\delta_{K}\) is uniformly lower bounded, Theorem 3.3 provides a significant speed up compared to previous convergence results. Apart from the iteration complexity, the NE-gap is also dependent linearly on \(1/c\) and \(1/\delta_{K}\). We address the effect of \(c\) in the analysis here and defer the discussion of \(\delta_{K}\) to Section 3.3. Under some mild assumptions, we can show that the system converges with a rate of \(\mathcal{O}(1/K)\).
**The Effect of \(c\).** The convergence rate given by Theorem 3.3 scales with \(1/c\), where \(c\) might potentially be arbitrarily small. A small value of \(c\) generally describes a policy that is stuck in some region far from a NE, yet has a small policy gradient. It has been shown in [20] that these ill-conditioned problems could take exponential time to solve even in single-agent settings for policy gradient methods. The same issue also occurs for NPG in MARL, since the local Fisher information matrix cannot cancel the occupancy measure and the action probabilities of other agents. A similar problem has also been reported in the analysis of [38; 26] for the MPG setting. [38] proposed the addition of a log-barrier regularization to mitigate this issue. However, that comes at the cost of an \(\mathcal{O}(1/(\lambda K))\) convergence rate to a \(\lambda\)-neighborhood solution, which is only reduced to the exact convergence rate of \(\mathcal{O}(1/\sqrt{K})\) when \(\lambda=1/\sqrt{K}\). Therefore, this limitation may not be effectively avoided without impacting the convergence rate.
### General Markov Potential Games
We now extend our analysis to MPGs. The analysis mainly follows a similar framework as potential games. However, the introduction of state transitions and the discount factor \(\gamma\) add an additional layer of complexity to the problem, making it far from trivial. As pointed out in [19], we can construct MDPs that are potential games for every state, yet the entire system is not a MPG. Thus, the analysis of potential games does not directly apply to MPGs.
We first provide the independent NPG update for MPGs.
**Algorithm Update.** For MPGs at iteration \(k\), the independent NPG updates the policy as follows [38]:
\[\pi_{i}^{k+1}(a_{i}|s)\propto\pi_{i}^{k}(a_{i}|s)\exp\!\left(\frac{\eta\bar{ A}_{i}^{\pi^{k}}(s,a_{i})}{1-\gamma}\right), \tag{7}\]
where \(\bar{A}_{i}\) is defined in (2).
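A minimal sketch of this state-wise update, assuming the marginalized advantage \(\bar{A}_{i}^{\pi^{k}}(s,a_{i})\) is supplied by an exact policy-evaluation oracle (for instance, along the lines of the evaluation sketch in Section 2), is:

```python
import numpy as np

def npg_update(pi_i, A_bar_i, eta, gamma):
    """Independent NPG step (7) for agent i, applied state by state:
    pi_i[s, a] is the current local policy and A_bar_i[s, a] is the
    marginalized advantage from eq. (2)."""
    weights = pi_i * np.exp(eta * A_bar_i / (1.0 - gamma))
    return weights / weights.sum(axis=1, keepdims=True)
```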
Different agents in a MPG do not share reward functions in general, which makes it difficult to compare evaluations of gradients across agents. However, with the introduction of Lemma 3.4, we find that MPGs have similar properties as fully cooperative games with a shared reward function. This enables us to establish relationships between policy updates of all agents. We first define \(h_{i}(s,\mathbf{a}):=r_{i}(s,\mathbf{a})-\phi(s,\mathbf{a})\), which implies \(V_{i}^{\pi}(s)=\Phi^{\pi}(s)+V_{h_{i}}^{\pi}(s)\).
**Lemma 3.4**.: _Define \(\bar{A}_{h_{i}}^{\pi}(s,a_{i})\) with respect to \(h_{i}\) similar to (2). We then have_
\[\sum_{a_{i}}(\pi_{i}^{\prime}(a_{i}|s)-\pi_{i}(a_{i}|s))\bar{A}_{h_{i}}^{\pi}( s,a_{i})=0,\quad\forall s\in\mathcal{S},i\in[n],\pi_{i}^{\prime},\pi_{i}\in \Delta(\mathcal{A}_{i}).\]
Lemma 3.4 shows a unique property of function \(h_{i}\), where the expectation of the marginalized advantage function over every local policy \(\pi^{\prime}_{i}\) yields the same effect. This property is directly associated with the MPG problem structure and is later used in Lemma 3.5. Next, we introduce the following assumption on the state visitation distribution, which is crucial and standard for studying the Markov dynamics of the system.
**Assumption 3.1** ([1, 38, 8]).: _The Markov potential game \(\mathcal{M}\) satisfies: \(\inf_{\pi}\min_{s}d^{\pi}_{\rho}(s)>0\)._
Similar to potential games, when the potential function \(\phi\) of a MPG is bounded, the marginalized advantage function \(\bar{A}_{i}\) for two policies can be bounded by the total variation between the policies.
Additionally, similar to Lemma 3.1, we present a lower bound in the following lemma for the potential function difference in two consecutive rounds.
**Lemma 3.5**.: _Given policy \(\pi^{k}\) and marginalized advantage function \(\bar{A}^{\pi^{k}}_{i}(s,a_{i})\), for \(\pi^{k+1}\) generated using NPG update in (7), we have the following inequality,_
\[\Phi^{\pi^{k+1}}(\rho)-\Phi^{\pi^{k}}(\rho)\geq(\frac{1}{1-\gamma}-\frac{ \sqrt{n}\phi_{max}\eta}{(1-\gamma)^{3}})\sum_{s}d^{\pi^{k+1}}_{\rho}(s)\sum_{ i=1}^{n}\langle\pi^{k+1}_{i}(\cdot|s),\bar{A}^{\pi^{k}}_{i}(s,\cdot)\rangle.\]
Thus, using a function \(f\) adapted from (5) as a connection, we are able to establish the convergence of NE-gap for MPGs in the following theorem.
**Theorem 3.6**.: _Consider a MPG with isolated stationary points and the policy update following NPG update (7). For any \(K\geq 1\), choosing \(\eta=\frac{(1-\gamma)^{2}}{2\sqrt{n}\phi_{max}}\), we have_
\[\frac{1}{K}\sum_{k=0}^{K-1}\text{NE-gap}(\pi^{k})\leq\frac{2M\phi_{max}}{K(1- \gamma)}(1+\frac{2\sqrt{n}\phi_{max}}{c\delta_{K}(1-\gamma)}),\]
_where_
\[M :=\sup_{\pi}\max_{s}\frac{1}{d^{\pi}_{\rho}(s)},\] \[c :=\inf_{i\in[n],s\in\mathcal{S},k\geq 0}\bigg{(}\sum_{a_{j}\in \mathcal{A}^{k}_{i_{p}}}\pi^{k}_{i}(a_{j}|s)\bigg{)}\in(0,1),\] \[\delta^{k} :=\min_{i\in[n],s\in\mathcal{S}}\Big{[}\bar{A}^{\pi^{k}}_{i}(s,a^ {k}_{i_{p}})-\bar{A}^{\pi^{k}}_{i}(s,a^{k}_{i_{q}})\Big{]},\ \ \text{and}\ \ \delta_{K}=\min_{0\leq k\leq K-1}\delta^{k}\in(0,1),\]
_similar to (6)._
Here, _isolated_ implies that no other stationary points exist in any sufficiently small open neighborhood of any stationary point. This convergence result is similar to that provided in Theorem 3.3 for potential games. We note that our theorem also applies to an alternate definition for MPGs in works such as [8], which we discuss in Appendix C. Compared to potential games, the major difference is the introduction of \(M\), which measures the distribution mismatch in the system. Generally, Assumption 3.1 implies a finite value for \(M\) in the MPG setup.
**Discussion and Comparison on Convergence Rate.** Compared to the iteration complexity of previous works listed in Table 1, the convergence rate in Theorem 3.6 presents multiple improvements. Most importantly, the theorem guarantees that the averaged NE-gap reaches \(\epsilon\) in \(\mathcal{O}(1/\epsilon)\) iterations, improving the best previously known result of \(\mathcal{O}\big{(}1/\epsilon^{2}\big{)}\) in Table 1. Furthermore, this rate of convergence does not depend on the size of action space \(|\mathcal{A}_{i}|\), and it has milder dependence on other system parameters, such as the distribution mismatch coefficient \(M\) and \((1-\gamma)\). Note that many of the parameters above could be arbitrarily large in practice (e.g., \(1-\gamma=0.01\) in our congestion game experiment). Theorem 3.6 indicates a tighter convergence bound with respect to the discussed factors in general.
### The Consideration of Suboptimality Gap
In Theorems 3.3 and 3.6, we established the ergodic convergence rates of NPG in potential game and Markov potential game settings, which depend on \(1/\delta_{K}\). In these results, \(\delta^{k}\) generally encapsulates
the difference between the gradients evaluated at the best and second best policies, and \(\delta_{K}\) is a lower bound on \(\delta^{k}\). We refer to \(\delta^{k}\) as the _suboptimality gap_ at iteration \(k\). In our analysis, the suboptimality gap provides a vehicle to establish the improvement of the potential function in two consecutive steps. In particular, it enables us to draw a connection between \(\Phi^{\pi^{k+1}}-\Phi^{\pi^{k}}\) and NE-gap(\(\pi^{k}\)) using Lemma 3.2 and Lemma B.3 in the appendix. On the contrary, most of the existing work does not rely on this approach and generally studies the relationship between \(\Phi^{\pi^{k+1}}-\Phi^{\pi^{k}}\) and the _squared_ NE-gap(\(\pi^{k}\)), which suffers a slower convergence rate of \(\mathcal{O}(1/\sqrt{K})\) [38].
Although analyses leveraging the suboptimality gap have not been adopted in MARL studies, they have been considered in the single-agent scenario. Khodadadian et al. [16] proved asymptotic geometric convergence of single-agent RL with the introduction of the _optimal advantage function gap_ \(\Delta^{k}\), which shares a similar definition with the gap \(\delta^{k}\) studied in this work. Moreover, the notion of a suboptimality gap is commonly used in the multi-armed bandit literature [18] to give instance-dependent analyses.
While our convergence rate is sharper, a lower bound on the suboptimality gap is generally not guaranteed, and scenarios with zero optimality gap can be constructed in MPGs. In practice, we find that even if \(\delta^{k}\)_approaches_ zero in certain iterations, it may not _converge_ to zero, and in these scenarios the system still converges without a slowdown. We provide a numerical example in Section 4.1 (Fig. 0(a)) to support our claim. Nevertheless, in what follows, we further identify a sufficient condition that allows us to alleviate this problem altogether by focusing on the asymptotic profile of the sub-optimality gap.
**Theoretical Relaxation.** The following proposition guarantees the asymptotic results of independent NPG.
**Proposition 3.1** ([38]).: _Suppose that Assumption 3.1 holds and that the stationary policies are isolated. Then independent NPG with \(\eta\leq\frac{(1-\gamma)^{2}}{2\sqrt{n}\phi_{max}}\) guarantees that \(\lim_{k\to\infty}\pi^{k}=\pi^{\infty}\), where \(\pi^{\infty}\) is a Nash policy._
Since asymptotic convergence of policy is guaranteed by Proposition 3.1, the suboptimality gap \(\delta^{k}\) is also guaranteed to converge to some \(\delta^{*}\). We make the following assumption about the asymptotic suboptimality gap, which only depends on the property of the game itself.
**Assumption 3.2**.: _Assume that \(\lim_{k\to\infty}\delta^{k}=\delta^{*}>0\)._
Assumption 3.2 provides a relaxation in the sense that instead of requiring a lower bound \(\inf\delta^{k}\) for all \(k\), we only need \(\delta^{*}>0\), a lower bound on the limit as the agents approach some Nash policies. This will allow us to disregard the transition behavior of the system and focus on the rate for large enough \(k\).
By the definition of \(\delta^{*}\), we know that there exists a finite \(K^{\prime}\) such that for all \(k>K^{\prime}\), \(|\delta^{k}-\delta^{*}|\leq\frac{\delta^{*}}{2}\), and hence \(\delta^{k}\geq\frac{\delta^{*}}{2}\). Using these results, we can rework the proofs of Theorems 3.3 and 3.6 to get the following corollary.
**Corollary 3.6.1**.: _Consider an MPG that satisfies Assumption 3.2 with the NPG update in Algorithm 7. There exists a finite \(K^{\prime}\) such that for any \(K\geq 1\), choosing \(\eta=\frac{(1-\gamma)^{2}}{2\sqrt{n}\phi_{max}}\), we have_
\[\frac{1}{K}\sum_{k=0}^{K-1}\text{NE-gap}(\pi^{k})\leq\frac{2M\phi_{max}}{K(1- \gamma)}(1+\frac{4\sqrt{n}\phi_{max}}{c\delta^{*}(1-\gamma)}+\frac{K^{\prime} }{2M}),\]
_where \(M,c\) are defined as in Theorem 3.6._
## 4 Experimental Results
In previous sections, we established the theoretical convergence of NPG in MPGs. In order to verify the results, we construct two experimental settings for the NPG update and compare our empirical results with existing algorithms. We consider a synthetic potential game scenario with randomly generated rewards and a congestion problem studied in [19] and [8]. We also provide the source code4 for all experiments.
### Synthetic Potential Games
We first construct a potential game with a fully cooperative objective, where the reward tensor \(r(\mathbf{a})\) is randomly generated from a uniform distribution. We set the number of agents to \(n=3\), each with a different action space: \(|\mathcal{A}_{1}|=3,|\mathcal{A}_{2}|=4,|\mathcal{A}_{3}|=5\). At the start of the experiment, all agents are initialized with uniform policies. We note that the experimental setting in [38] falls under this broad setting, although the experiment therein was a two-player game with carefully designed rewards.
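As an illustration of this setup, the following sketch (our own, not the authors' released implementation) builds such a random cooperative reward tensor, initializes uniform policies, and runs an independent natural-gradient-style update, which under softmax parameterization takes a multiplicative-weights form in the one-shot case; the learning rate and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 4, 5]                                  # |A_1|, |A_2|, |A_3|
r = rng.uniform(size=sizes)                        # shared (fully cooperative) reward tensor
pi = [np.full(m, 1.0 / m) for m in sizes]          # uniform initial policies
eta = 0.1                                          # illustrative learning rate

def marginals(pi):
    """Expected payoff of each pure action against the other agents' current policies."""
    q0 = np.einsum('abc,b,c->a', r, pi[1], pi[2])
    q1 = np.einsum('abc,a,c->b', r, pi[0], pi[2])
    q2 = np.einsum('abc,a,b->c', r, pi[0], pi[1])
    return [q0, q1, q2]

def ne_gap(pi):
    """Largest unilateral gain any single agent can obtain by best-responding."""
    return max(q.max() - q @ p for q, p in zip(marginals(pi), pi))

for _ in range(500):
    qs = marginals(pi)
    # With softmax parameterization, the independent NPG step reduces to a multiplicative-weights update.
    pi = [p * np.exp(eta * q) for p, q in zip(pi, qs)]
    pi = [p / p.sum() for p in pi]

print(f"NE-gap after training: {ne_gap(pi):.4f}")
```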
The results are shown in Figure 1(b). We compare the performance of independent NPG with other commonly studied policy updates, such as projected Q ascent [8], entropy regularized NPG [38], as well as the log-barrier regularized NPG method [38]. We set the learning rate of all algorithms to \(\eta=0.1\). We also fine-tune the regularization parameters, finding an entropy regularization factor of \(\tau=0.05\) and a log-barrier regularization factor of \(\lambda=0.005\). As discussed in Section 3.1 (the effect of \(c\)), we observe in Figure 1(b) that the regularized algorithms fail to reach an exact Nash policy despite exhibiting good convergence performance at the start. The empirical results here align with our theoretical findings in Section 3.
Impact of \(c\) and \(\delta\). We demonstrate the impact of the initialization-dependent factor \(c^{k}\) and the suboptimality gap \(\delta^{k}\) in MPGs with the same experimental setup. Figure 1(a) depicts how \(c^{k}\), \(\delta^{k}\), and the NE-gap evolve. We can see from the figure that, as the algorithm updates over iterations, the value of \(c^{k}\) increases and approaches one, while \(\delta^{k}\) approaches some non-zero constant. Figure 1(a) shows that although \(\delta^{k}\) could be arbitrarily small in theory if no technical assumptions are imposed, this is not the case in general. Therefore, one should not automatically assume that the suboptimality gap diminishes over iterations, as its limit depends entirely on the problem environment.
The effect of suboptimality gap \(\delta^{*}\) in PG is illustrated in Section E of the appendix under a set of carefully constructed numerical examples. We verify that a larger gap indicates a faster convergence, which corroborates our theory.
### Congestion Game
We now consider a class of MDPs where each state defines a congestion game. We borrow the specific settings for this experiment from [19; 8].
For the congestion game experiments, we consider \(n=8\) agents and \(|\mathcal{A}_{i}|=4\) facilities, with \(\mathcal{A}_{i}=\{A,B,C,D\}\) as the individual action spaces. There are two states, \(\mathcal{S}=\{\textit{safe},\textit{distancing}\}\). In each state, every agent prefers to choose an action shared with as many other agents as possible. The reward for an agent selecting action \(k\) is given by a predefined weight \(w^{k}_{s}\) multiplied by the number of other agents taking the same action. Additionally, we set \(w^{A}_{s}<w^{B}_{s}<w^{C}_{s}<w^{D}_{s}\), and the reward in the _distancing_ state is reduced by a constant compared to the _safe_ state. The state transition depends on the joint actions of all agents. If more than half of all agents take the same action, the system enters the _distancing_ state with lower rewards. If the agents are evenly distributed over all actions, the system enters the _safe_ state with higher rewards.
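To make the environment concrete, here is a minimal sketch of the reward and transition rules described above; the numeric weights, the size of the distancing penalty, and the behaviour when the action profile is neither crowded nor perfectly even are our own assumptions, since the text only specifies the ordering of the weights and the two extreme cases.

```python
N_AGENTS = 8
FACILITIES = ['A', 'B', 'C', 'D']
W_SAFE = {'A': 1.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}   # only the ordering w_A < w_B < w_C < w_D is given
DISTANCING_PENALTY = 2.0                             # assumed constant reduction in the distancing state

def rewards(state, actions):
    """Each agent earns w_s^k times the number of *other* agents at its facility."""
    counts = {f: actions.count(f) for f in FACILITIES}
    out = []
    for a in actions:
        r = W_SAFE[a] * (counts[a] - 1)
        if state == 'distancing':
            r -= DISTANCING_PENALTY
        out.append(r)
    return out

def next_state(state, actions):
    counts = [actions.count(f) for f in FACILITIES]
    if max(counts) > N_AGENTS / 2:                   # more than half crowd one facility
        return 'distancing'
    if all(c == N_AGENTS // len(FACILITIES) for c in counts):   # perfectly even split
        return 'safe'
    return state                                     # unspecified in the text; keep the state (assumption)

print(rewards('safe', ['D'] * 5 + ['A', 'B', 'C']))
print(next_state('safe', ['D'] * 5 + ['A', 'B', 'C']))   # -> 'distancing'
```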
We use episodic updates with \(T=20\) steps, collect \(20\) trajectories in each mini-batch, and use them to estimate the value functions, Q-functions, and the discounted visitation distribution. We use a discount factor of \(\gamma=0.99\). We adopt the same step size used in [19; 8] and determine the optimal step-sizes of softmax PG and NPG with grid-search. Since the regularized methods in Section 4.1 generally do not converge to Nash policies, they are excluded from this experiment. To make the experimental results comparable with previous works, we report the \(L_{1}\) distance between the current-iteration policies and the Nash policies. We plot the mean and variance of this \(L_{1}\) distance across multiple runs in Figure 1(c). Compared to the direct parameterized algorithms, the two softmax parameterized algorithms exhibit faster convergence, and softmax parameterized NPG has the best performance across all tested algorithms.
Figure 1: (a) The suboptimality gap; (b) Learning curve in synthetic experiments; (c) Learning curve for congestion game.
## 5 Conclusion and Discussion
In this paper, we studied Markov potential games in the context of multi-agent reinforcement learning. We focused on the independent natural policy gradient algorithm and studied its convergence rate to the Nash equilibrium. The main theorem of the paper shows that the convergence rate of NPG in MPGs is \(\mathcal{O}(1/K)\), which improves upon the previous results. Additionally, we provided detailed discussions on the impact of some problem factors (e.g., \(c\) and \(\delta\)) and compared our rate with the best known results with respect to these factors. Two empirical results were presented as a verification of our analysis.
Despite our newly proposed results, there are still many open problems that need to be addressed. One limitation of this work is the assumption of Markov potential games; relaxing it could extend our analysis to more general stochastic games. As a matter of fact, the gradient-based algorithm studied in this work will fail for a zero-sum game as simple as Tic-Tac-Toe. A similar analysis could also be applied to regularized games, where potentially sharper bounds could be obtained. The agents are also assumed to receive gradient information from an oracle in this paper. When such an oracle is unavailable, the gradient can be estimated via trajectory samples, which we leave as future work. Other future directions are the convergence analysis of policy gradient-based algorithms in safe MARL and robust MARL, following the recent exploration of safe single-agent RL [9; 22; 41] and robust single-agent RL [21; 40].
## Acknowledgments and Disclosure of Funding
This material is based upon work partially supported by the US Army Contracting Command under W911NF-22-1-0151 and W911NF2120064, US National Science Foundation under CMMI-2038625, and US Office of Naval Research under N00014-21-1-2385. The views expressed herein and conclusions contained in this document are those of the authors and should not be interpreted as representing the views or official policies, either expressed or implied, of the U.S. NSF, ONR, ARO, or the United States Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. |
2310.16607 | On the Interplay between Fairness and Explainability | In order to build reliable and trustworthy NLP applications, models need to
be both fair across different demographics and explainable. Usually these two
objectives, fairness and explainability, are optimized and/or examined
independently of each other. Instead, we argue that forthcoming, trustworthy
NLP systems should consider both. In this work, we perform a first study to
understand how they influence each other: do fair(er) models rely on more
plausible rationales? and vice versa. To this end, we conduct experiments on
two English multi-class text classification datasets, BIOS and ECtHR, that
provide information on gender and nationality, respectively, as well as
human-annotated rationales. We fine-tune pre-trained language models with
several methods for (i) bias mitigation, which aims to improve fairness; (ii)
rationale extraction, which aims to produce plausible explanations. We find
that bias mitigation algorithms do not always lead to fairer models. Moreover,
we discover that empirical fairness and explainability are orthogonal. | Stephanie Brandl, Emanuele Bugliarello, Ilias Chalkidis | 2023-10-25T12:59:51Z | http://arxiv.org/abs/2310.16607v2 | # On the Interplay between Fairness and Explainability
###### Abstract
In order to build reliable and trustworthy NLP applications, models need to be both fair across different demographics and explainable. Usually these two objectives, _fairness_ and _explainability_, are optimized and/or examined independently of each other. Instead, we argue that forthcoming, trustworthy NLP systems should consider both. In this work, we perform a first study to understand how they influence each other: do _fair(er)_ models rely on _more plausible_ explanations? and vice versa. To this end, we conduct experiments on two English multi-class text classification datasets, BIOS and ECHR, that provide information on gender and nationality, respectively, as well as human-annotated rationales. We fine-tune pre-trained language models with several methods for (i) bias mitigation, which aims to improve fairness; (ii) rationale extraction, which aims to produce plausible explanations. We find that bias mitigation algorithms do not always lead to fairer models. Moreover, we discover that empirical fairness and explainability are orthogonal.
## 1 Introduction
Fairness and explainability are crucial factors when building trustworthy NLP applications. This is true in general, but even more so in critical and sensitive applications such as medical Gu et al. (2020) and legal Chalkidis et al. (2022) domains, as well as in algorithmic hiring processes Schumann et al. (2020). AI trustworthiness and governance are no longer wishful thinking since more and more legislatures introduce related regulations for the assessment of AI technologies, such as the EU Artificial Intelligence Act (2022), the US Algorithmic Accountability Act (2022), and the Chinese Measures on Generative AI (2023). Therefore, it is important to ask and answer challenging questions that can lead to safe and trustworthy AI systems, such as how fairness and explainability interplay when optimizing for either or both.
So far in the NLP literature, model explanations1 are used to detect and mitigate how fair or biased a model is Balkir et al. (2022) or to assess a user's perception of a model's fairness Zhou et al. (2022). Those are important use cases of explainability but we argue that we should further aim for improving one when optimizing for the other to promote trustworthiness holistically across both dimensions.
Footnote 1: We refer to both the feature attribution scores assigned by models (binary and continuous) and the binary annotations by humans as _rationales_ throughout the paper, and also use the term _(model) explanations_ for the former.
To analyze the interplay between fairness and explainability, we optimize neural classifiers for one or the other during fine-tuning, and then evaluate both afterwards (Figure 1). We do so across two English multi-class classification datasets. First, we analyze a subset of the BIOS dataset (De-Arteaga et al., 2019). This dataset contains short biographies for occupation classification. We consider a subset of 5 medical professions that also includes human annotations on 100 biographies across this subset (Eberle et al., 2023). We evaluate model-based rationales extracted via (i) LRP (Ali et al., 2022) or (ii) rationale extraction frameworks (REFs; Lei et al. 2016), while also examining fairness with respect to gender. Second, we also conduct similar experiments with the ECtHR dataset (Chalkidis et al., 2021) for legal judgment forecasting on cases from the European Court of Human Rights (ECHR), both to evaluate paragraph-level rationales and to study fairness with respect to the nationality of the defendant state.
Figure 1: Interplay between _empirical fairness_, measured via worst-case performance, and _explainability_, measured via human/model alignment, of different methods (Section 4) optimizing for fairness (fair), explainability (ref), or none (baseline) on the ECtHR dataset. All methods, including the baseline, are built upon fine-tuned RoBERTa models. The results here suggest that the two dimensions are independent.
Contributions. Our main contributions in this work are the following: **(i)** We examine the _interplay_ between two crucial dimensions of trustworthiness, _fairness_ and _explainability_, by comparing models that were fine-tuned using five fairness-promoting techniques (Section 4.1) and three rationale extraction frameworks (Section 4.2) on two English multi-class classification datasets (BIOS and ECtHR). **(ii)** Our experiments on both datasets (a) confirm recent findings on the independence of bias mitigation and empirical fairness (Cabello et al., 2023), and (b) show that empirical fairness and explainability are also independent.
## 2 Related Work
Bias mitigation / fairness.The literature on inducing fairer models from biased data is rapidly growing (see Mehrabi et al. 2021; Makhlouf et al. 2021; Ding et al. 2021 for recent surveys). Fairness is often conflated with bias mitigation, although they have been shown to be orthogonal: reducing bias, such as representational bias, may not lead to a fairer model in terms of downstream task performance (Cabello et al., 2023). In this work, we follow the definition of Shen et al. (2022) and target _empirical fairness_ (performance parity) that may not align with _representational fairness_ (data parity). This means that we adopt a capability-centered approach to fairness and define fairness in terms of performance parity (Hashimoto et al., 2018) or equal risk (Donini et al., 2018). The fairness-promoting learning algorithms that we evaluate are discussed in detail in Section 4.
Explainable AI (XAI) for fairness. Explanations have been used to improve users' perception and judgement of fairness (Shulner-Tal et al., 2022; Zhou et al., 2022). Balkir et al. (2022) give an overview of the *ACL literature where explainability is applied to detect and mitigate bias. They predominantly find work on uncovering and investigating bias for hate speech detection. Recently, Ruder et al. (2022) also call for more multi-dimensional NLP research where fairness, interpretability, multilinguality and efficiency are combined. The authors only find work by Vig et al. (2020), who use explainability to find specific parts of a model that are causally implicated in its behaviour. With this work, we want to extend this line of research from 'XAI for fairness' to 'XAI and Fairness'.
Post-hoc XAI.XAI methods commonly rely on saliency maps extracted post-hoc from a model using attention scores (Bahdanau et al., 2015; Abnar and Zuidema, 2020), gradients (Voita et al., 2019; Wallace et al., 2019; Ali et al., 2022), or perturbations (Ribeiro et al., 2016; Alvarez-Melis and Jaakkola, 2017; Murdoch et al., 2018) at inference time. These can be seen as an approximation of identifying which features (tokens) the model relied on to solve a given task for a specific example. Such methods do not necessarily lead to _faithful_ explanations (Jacovi and Goldberg, 2020). Following DeYoung et al. (2020), faithfulness can be defined as the combination of _sufficiency_--tokens with the highest scores correspond to a sufficient selection to reliably predict the correct label--_and comprehensiveness_--all indicative tokens get attributed relatively high scores.
Rationale extraction by design.Unlike post-hoc explanations, _rationale extraction_ frameworks _optimize_ for rationales that support a given classification task and are faithful by design, _i.e._, predictions are based on selected rationales by definition.
Lei et al. (2016) were the first to propose a framework to produce short coherent rationales that could replace the original full texts, while maintaining the model's predictive performance. The rationales are extracted by generating binary masks indicating which words should be selected; and two additional loss regularizers were introduced, which penalize long rationales and sparse masks that would select non-consecutive words.
Recently, several studies have proposed improved frameworks that rely mainly on complementary adversarial settings that aim to produce better (causal, complete) rationales and close the performance gap compared to models using the full input (Yu et al., 2019; Chang et al., 2019; Jain et al., 2020; Yu et al., 2021). The rationale extraction frameworks we evaluate are detailed in Section 4.
XAI _and_ fairness.Mathew et al. (2021) release a benchmark for hate speech detection where human annotations are used as input to the model to improve performance and fairness across demographics. They evaluate both faithfulness of post-hoc explanations as well as performance disparity across communities affected by hate speech. He et al. (2022) propose a new debiasing framework that consists of two steps. First, they apply the rationale extraction framework (REF) from Lei et al. (2016) to detect tokens indicative of a given _bias_ label, _e.g._, gender. In a second step, those rationales are used to minimize bias in the task prediction.
With this work, we aim to complement prior work by systematically evaluating the impact of optimizing for fairness on explainability and vice versa, considering many different proposed techniques (Section 4). Moreover, we consider both post-hoc explanations extracted from standard Transformer-based classifiers, as well as rationale extraction frameworks evaluating model-based explanations (rationales) in terms of faithfulness and alignment with human-annotated rationales.
## 3 Datasets
BIOS. The BIOS dataset (De-Arteaga et al., 2019) comprises biographies labeled with occupations and binary gender in English. This is an occupation classification task, where bias with respect to gender can be studied. In our work, we consider a subset of 10,000 (8K train / 1K validation / 1K test) biographies targeting 5 medical occupations (_psychologist_, _surgeon_, _nurse_, _dentist_, _physician_) published by Eberle et al. (2023). For these occupations, as shown in Table 1, there is a clear gender imbalance, _e.g._, 91% of the nurses are female, while 85% of the surgeons are male. We also compare with human rationales provided for a subset of 100 biographies.
For control experiments on the effect of bias mitigation methods, we also create a synthetic extremely unbalanced (_biased_) version of the train and validation split of BIOS, we call this version BIOS\({}_{\text{biased}}\). Here, our aim is to amplify gender-based spurious correlations in the training subset by keeping only the biographies where all psychologists and nurses are female persons; and all surgeons, dentists, and physicians are male persons. Similarly, we create a synthetic balanced (_debiased_) version of the dataset which we call BIOS\({}_{\text{balanced}}\). Here, our objective is to mitigate gender-based spurious correlations by downsampling the majority group per medical profession. By doing so, in BIOS\({}_{\text{balanced}}\), both genders are equally represented per profession.
ECtHR. The ECtHR dataset (Chalkidis et al., 2021) contains 11K cases from the European Court of Human Rights (ECHR) written in English. The Court hears allegations that a European state has breached human rights provisions of the European Convention of Human Rights (ECHR). For each case, the dataset provides a list of _factual_ paragraphs (facts) from the case description. Each case is mapped to _articles_ of the ECHR that were violated (if any). The dataset also provides silver (automatically extracted) paragraph-level rationales. We consider a subset of 4,500 (3.5K train / 500 validation / 500 test) single-labeled cases for five well-supported ECHR articles and the _defendant state_ attribute. In practice, we use a binary categorization of the defendant states--Eastern European vs. the Rest--to assess fairness, similar to Chalkidis et al. (2022). Table 1 shows a clear defendant state imbalance across most of the ECHR articles except for Article 8.
## 4 Methodology
We fine-tune classifiers optimizing for either fairness (Section 4.1), explainability (Section 4.2), or none, alongside the main classification task objective (Figure 2). The baseline classifier uses an \(n\)-way classification head on top of the Transformer-based text encoder (Vaswani et al., 2017), and it is
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{3}{c}{**BIOS**} \\ \hline Occupation & Male & Female \\ \hline Psychologist & 822 (37\%) & 1378 (63\%) \\ Surgeon & 1090 (85\%) & 190 (15\%) \\ Nurse & 152 (09\%) & 1486 (91\%) \\ Dentist & 996 (65\%) & 537 (35\%) \\ Physician & 650 (48\%) & 699 (52\%) \\ _Total_ & 3710 (46\%) & 4290 (54\%) \\ \hline \multicolumn{3}{c}{**ECtHR**} \\ \hline ECHR Article & E. European & Rest \\ \hline
3 – Proh. Torture & 303 (88\%) & 42 (12\%) \\
5 – Liberty & 382 (88\%) & 51 (12\%) \\
6 – Fair Trial & 1776 (80\%) & 454 (20\%) \\
8 – Private Life & 129 (55\%) & 104 (45\%) \\ P1.1 – Property & 228 (88\%) & 31 (12\%) \\ _Total_ & 2818 (80\%) & 682 (20\%) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Label and demographic attribute distribution across the training sets of the BIOS and ECtHR datasets.
optimized using the cross-entropy loss against the gold labels (Devlin et al., 2019).
### Optimizing for Fairness
We use a diverse set of 5 fairness-promoting algorithms that are connected to two different approaches: (a) mitigating _representational bias_ (fair-gp, fair-gn, fair-dro), and (b) penalizing _over-confident predictions_ (fair-sd, fair-dfl).
Representational bias (_e.g._, more data points for male vs. female surgeons) is considered a critical factor that may lead to performance disparity across demographic groups, as a model may rely on the protected attribute (_e.g._, gender) as an indicator for predicting the output class (_e.g._, occupation). We consider three methods to mitigate such effects:
i) **Group Parity** (fair-gp), where we over-sample the minority group examples per class up to the same level as the majority ones (Sun et al., 2009). For instance, by up-sampling biographies of male nurses and female surgeons in the BIOS dataset (a minimal oversampling sketch is given after this list).
ii) **Group Neutralization** (fair-gn), where we replace (normalize) attribute-related information. For instance, for gender in BIOS, we replace gendered pronouns (_e.g._, 'he/him', 'she/her') and titles (_e.g._, 'Mr', 'Mrs') with gender-neutral equivalents, such as 'they/them' and 'Mx' (Brandl et al., 2022), while also replacing personal names with a placeholder name (Maudslay et al., 2019), such as 'Sarah Williams' with 'Joe Doe'.
iii) **Group Robust Optimization** (fair-dro), where we use GroupDRO as proposed by Sagawa et al. (2020). In this case, we apply group parity (up-sampling) on the training set to have group-balanced batches, while the optimization loss during training accounts for group-wise performance disparities using adaptive group-wise weights.
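To make the first strategy concrete, a minimal sketch of group-parity oversampling is given below; the record format (dicts with 'label' and 'group' keys) and the function name are illustrative, not the authors' implementation.

```python
import random
from collections import defaultdict

def group_parity_oversample(examples, seed=0):
    """Up-sample, per class label, the smaller demographic group(s) to the size of the largest one."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for ex in examples:
        buckets[(ex['label'], ex['group'])].append(ex)

    balanced = list(examples)
    for label in {l for (l, _) in buckets}:
        groups = [g for (l, g) in buckets if l == label]
        target = max(len(buckets[(label, g)]) for g in groups)
        for g in groups:
            pool = buckets[(label, g)]
            if pool and len(pool) < target:
                balanced.extend(rng.choices(pool, k=target - len(pool)))
    return balanced

bios = [{'label': 'nurse', 'group': 'F'}] * 9 + [{'label': 'nurse', 'group': 'M'}]
print(len(group_parity_oversample(bios)))   # 18: the single male-nurse example is repeated 8 more times
```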
Penalizing over-confidence. _Over-confident_ model predictions are considered an indication of bias, based on the intuition that all simple feature correlations--leading to high confidence--are spurious (Gardner et al., 2021). We consider two methods from this line of work:
i) **Spectral Decoupling** (fair-sd), where the \(L_{2}\) norm of the classification logits is used as a regularization penalty (a short sketch of this penalty is given after this list). The premise of this approach is that over-confidence reflects over-reliance on a limited number of relevant features, which leads to gradient starvation (Pezeshki et al., 2021).
ii) **Debiased Focal Loss** (fair-dfl), where an additional task-agnostic classifier estimates whether the model's prediction is going to be successful or not, and penalizes the model via a focal loss (Karimi Mahabadi et al., 2020) in case a successful outcome is highly predictable (Orgad and Belinkov, 2023).
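As a concrete illustration of the logit-norm penalty in fair-sd, a minimal sketch is shown below; the penalty weight is an assumed value, and the exact form may differ from the implementation used in the experiments.

```python
import torch
import torch.nn.functional as F

def spectral_decoupling_loss(logits, labels, lam=0.1):
    """Cross-entropy plus a penalty on the squared L2 norm of the logits (fair-sd)."""
    ce = F.cross_entropy(logits, labels)
    penalty = lam * logits.pow(2).sum(dim=-1).mean()
    return ce + penalty

logits = torch.randn(4, 5, requires_grad=True)   # batch of 4, 5 occupation classes
labels = torch.tensor([0, 2, 1, 4])
loss = spectral_decoupling_loss(logits, labels)
loss.backward()
```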
The first group of methods (mitigating representational bias) relies on demographic information, while the second group (penalizing over-confidence) is agnostic of demographic information and is thus more easily applicable to different settings.
### Optimizing for Explainability
We consider three alternative rationale extraction frameworks (REFs), where the models generate _rationales_, _i.e._, a subset of the original tokens used to predict the classification label. In these settings, the explanations (rationales) are _faithful_ by design, since the classifier (predictor) encodes only the rationales and has no access to the full text input, and thus relies solely on those rationales at inference.
i) **Baseline** (ref-base) The baseline rationale extraction framework of Lei et al. (2016) relies on two sub-networks (Eqs. 1-4): the _rationale selector_ that selects relevant input tokens to predict the correct label (Eqs. 1-2), and the _predictor_ (Eqs. 3-4) that predicts the classification task outcome based on the rationale provided by the first module.
Figure 2: A short description / depiction of the _fairness-promoting_ (Section 4.1) and _explainability-promoting_ (Section 4.2) examined methods. The emojis represent male/female/neutral persons, and the main and adversarial modules.
ii) **3-player** (ref-3p) Yu et al. (2019) improved the aforementioned framework introducing a 3-player adversarial minimax game between the main predictor, the one using the rationale, and a newly introduced predictor using the complement of the rationale tokens. They found that this method improves classification performance, and the predicted rationales are more complete (_i.e._, they include a higher portion of the relevant information to solve the task) than the baseline framework.
iii) **Rationale to Attention** (ref-r2a) More recently, Yu et al. (2021) introduced a new 3-player version where, during training, they minimize the performance disparity between the main predictor (the one using the rationales) and a second one using soft attention scores. They find this to result in rationales that better align with human rationales.
For all examined rationale extraction frameworks, we use the latest implementations provided by Yu et al. (2021), which use a top-\(k\) token selector, instead of sparsity regularization Lei et al. (2016):
\[S = W^{H\times 1}*\mathrm{TokenScorer}(X) \tag{1}\] \[Z = \mathrm{TopK}(X,S,k) \tag{2}\] \[R = Z*X \tag{3}\] \[L = \mathrm{Predictor}(R) \tag{4}\]
where \(\mathrm{TokenScorer}\) and \(\mathrm{Predictor}\) are Transformer-based language models (encoders), \(X=[x_{1},x_{2},\cdots,x_{n}]\) are the input tokens, \(S\) are the token importance scores based on the \(\mathrm{TokenScorer}\) contextualized token representations, \(Z\) is a binary mask representing which input tokens are the top-\(k\) scored vs. the rest, \(R\) is the rationale (masked version of the input tokens), and \(L\) are the label logits. During training, the \(\mathrm{TopK}\) operator is detached--since it is not differentiable--and gradients pass _straight-through_ (Bengio et al., 2013) to the \(\mathrm{TokenScorer}\) to be updated. For ref-3p, there is an additional adversarial \(\mathrm{Predictor}\) (Eq. 4) which is fed with adversarial rationales (\(R_{adv}\)) based on the complement of the original ones (\(R\)), while for ref-r2a, the adversarial predictor weights the input tokens (\(X\)) by softmax-normalized scores (\(S\)).
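A minimal PyTorch sketch of Eqs. 1-3 with the straight-through trick is given below; for simplicity it masks contextual token representations rather than the raw input tokens, and the sigmoid-based soft relaxation used in the backward pass is our own choice rather than a detail taken from the cited implementations.

```python
import torch
import torch.nn as nn

class TopKRationaleSelector(nn.Module):
    """Scores tokens (Eq. 1), keeps the top-k of them (Eq. 2) and returns the masked
    representations (Eq. 3); gradients pass straight-through to the scorer."""

    def __init__(self, hidden_dim, k):
        super().__init__()
        self.w = nn.Linear(hidden_dim, 1)   # plays the role of W^{H x 1}
        self.k = k

    def forward(self, token_states):        # token_states: (batch, seq_len, hidden_dim)
        scores = self.w(token_states).squeeze(-1)                   # S
        top_idx = scores.topk(self.k, dim=-1).indices
        hard = torch.zeros_like(scores).scatter(-1, top_idx, 1.0)   # binary mask Z
        soft = torch.sigmoid(scores)
        mask = hard + soft - soft.detach()   # forward: hard top-k mask; backward: gradient of the soft scores
        return mask.unsqueeze(-1) * token_states                    # rationale R

selector = TopKRationaleSelector(hidden_dim=16, k=3)
rationale = selector(torch.randn(2, 10, 16))
rationale.sum().backward()                   # gradients reach selector.w despite the hard top-k
```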
## 5 Experiments
### Experimental Setup
We fine-tune classifiers based on RoBERTa-base (Liu et al., 2019) for all examined methods. In the case of the ECtHR dataset, which consists of long documents, we build hierarchical RoBERTa-based classifiers similar to Chalkidis et al. (2022).2 We perform a grid search for the learning rate \(\in\{1e{-}5,3e{-}5,5e{-}5\}\) with an initial warm-up of 10%, followed by cosine decay, using AdamW (Loshchilov and Hutter, 2019). We use a fixed batch size of 32 examples and fine-tune models for up to 30 epochs with early stopping based on the validation performance. We fine-tune models with 5 different seeds and select the top-3 models (seeds) with the best overall validation performance (mF1) to report averaged results for all metrics.
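For reference, a minimal sketch of this optimization schedule (AdamW with 10% linear warm-up followed by cosine decay) is shown below; the tiny linear stand-in model and the step counts are placeholders for RoBERTa and the actual data loaders.

```python
import torch
from torch import nn
from transformers import get_cosine_schedule_with_warmup

model = nn.Linear(768, 5)                      # placeholder standing in for the RoBERTa classifier
steps_per_epoch, num_epochs = 250, 30          # e.g. 8,000 examples / batch size 32, up to 30 epochs
total_steps = steps_per_epoch * num_epochs

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)   # grid: {1e-5, 3e-5, 5e-5}
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),   # 10% warm-up
    num_training_steps=total_steps,            # followed by cosine decay
)
```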
Footnote 2: Similarly, rationales (Eq. 1-3) are computed based on paragraph-level, not token-level, representations.
For methods optimizing for fairness, we rely on the LRP framework Ali et al. (2022) to extract post-hoc explanations, similar to Eberle et al. (2023).
Evaluation metrics. Our main performance metric is macro-F1 (mF1), _i.e._, the F1 score macro-averaged across all classes, which better reflects the overall performance across classes regardless of their training support (robust to class imbalance).
Regarding _empirical fairness_ metrics, we report group-wise performances (_e.g._, male and female mF1 in BIOS, and E.E. and the Rest in ECtHR) and their absolute difference (group disparity). Ideally, a fair(er) model will improve the worst-case performance, i.e., the lower performance across both groups, while reducing the group disparity.
For _explainability_, we report the Area Over the Perturbation Curve (AOPC) for _sufficiency_ (DeYoung et al., 2020) as a proxy for _faithfulness_ (Jacovi and Goldberg, 2020), _i.e._, how much explanations reflect the true reasoning--as reflected by importance scores--of a model. We compute sufficiency for all models using a large RoBERTa model as a reference classifier to have a fair common ground. We also report token-level recall at human level (R@k), similar to Chalkidis et al. (2021), considering the top-\(k\) tokens, where \(k\) is the number of tokens annotated by humans,3 as a metric of alignment (agreement) between model-based explanations and human rationales.
Footnote 3: In this case, all models are compared in a fair manner using the number of the selected tokens in the human rationale as an oracle.
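Concretely, the token-level R@k metric can be computed as in the following sketch (our own formulation of the description above, with k set per example to the size of the human rationale):

```python
import numpy as np

def recall_at_human_level(token_scores, human_mask):
    """Fraction of human-annotated rationale tokens that appear among the model's k
    highest-scoring tokens, with k equal to the size of the human rationale."""
    human_mask = np.asarray(human_mask, dtype=bool)
    k = int(human_mask.sum())
    if k == 0:
        return np.nan
    top_k = np.argsort(np.asarray(token_scores))[-k:]
    return human_mask[top_k].sum() / k

print(recall_at_human_level([0.9, 0.1, 0.8, 0.05, 0.7], [1, 0, 1, 0, 0]))  # 1.0: both human tokens are in the top-2
```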
For estimating _bias_, we report the \(L_{2}\) norm of the classification logits, which is used as a regularization penalty by Spectral Decoupling (Pezeshki et al., 2021), as a proxy for confidence. We also report gender accuracy, as a proxy for bias, by fine-tuning probing classifiers on the protected attribute examined (_e.g._, gender classifiers for BIOS), initialized from the models previously fine-tuned on the downstream task (Section 5.4).
### Results on Synthetic Data
In Table 2, we present results for all fairness-promoting methods in the artificially unbalanced (biased) and balanced (debiased) versions of the BIOS dataset: BIOS\({}_{\text{biased}}\) and BIOS\({}_{\text{balanced}}\), described in Section 3. These can be seen as control experiments, to assess methods in edge cases.
Fairness methods rely on biases in data. When training on BIOS\({}_{\text{biased}}\), we observe that all fairness-promoting methods outperform the baseline method in terms of our empirical fairness metrics: worst-group, i.e., female, performance and group disparity (difference in performance for male and female). We further see that almost all methods have mF1 scores of \(0\) when it comes to _male nurses_ and very low scores (\(15-49\)) for _female surgeons_. For both classes (_nurse_ and _surgeon_), only the female and male counterparts, respectively, were present in the training set of BIOS\({}_{\text{biased}}\). This result suggests that all but one of the fairness-promoting methods (namely fair-gn) heavily rely on gender information to solve the task when such a spurious correlation is present. Only fair-gn, where gender information is neutralized, is able to solve the task reliably, with almost no group disparity and mF1 scores \(>60\) for male nurses and female surgeons. In Table 8 in the Appendix, we present the top-attributed words for both occupations per gender, which support this finding. All methods, except fair-gn, attribute gendered words a high (positive or negative) score following the imbalance in training. Words such as 'she', 'mrs.', and 'her' are positively attributed for female nurses, while 'he' is negatively attributed for male nurses; and symmetrically the opposite for surgeons (Table 8). The only exception is fair-gn, in which case gender information has been neutralized during training and testing and the model can no longer exploit such superficial spurious correlations, _e.g._, that females can only be nurses or psychologists. Concluding, all fairness-promoting methods _improve_ empirical fairness compared to the baseline, but in such extreme scenarios only a direct manual intervention on the data, as in fair-gn, reaches meaningful performance.
Data debiasing improves fairness methods. After downsampling the data to reach an equal number of males and females for all five professions in BIOS\({}_{\text{balanced}}\), we see almost equal performance across genders for baseline, fair-gn and fair-dro (_lower_ part of Table 2). Moreover, the performance for fair-gn and fair-dro is both higher and more equal across \(M\) and \(F\) than for baseline. Overall, the models show an mF1 score around \(3\%\) lower than in the main results in Table 3, which is probably caused by down-sampling (fewer training samples) and, to a smaller degree, by no longer relying on gender bias.
### Main Results on Real Data
In Table 3, we present results for all examined methods on both datasets, BIOS and ECtHR.
Overall performance. In the case of BIOS, we observe a drop in performance, in particular when optimizing for explainability, where mF1 scores decrease from \(88\%\) down to \(85\%\) in comparison to the baseline. We also see an increase in group disparity for 3 out of 5 fairness-promoting methods and 2 out of 3 explainability-promoting methods. This is further supported by the results in Figure 3, where we show F1 scores for the two classes _surgeon_ and _nurse_ from the BIOS dataset (see Figure 4 in the Appendix for results across all classes and metrics). We see a severe drop in performance, of up to \(25\%\), for the two most underrepresented demographics, female surgeons and male nurses, in comparison to their overrepresented counterparts. In contrast, in the case of ECtHR, fairness-promoting (bias mitigation) methods have a beneficial effect, especially in the case of the confidence-related methods fair-sd and fair-dfl, where overall task performance
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{Empirical Fairness (mF1)} \\ & \(\mathbf{M}\) / \(\mathbf{F}\) / \(\mathbf{Diff.}\) & \(\mathbf{\downarrow}\) & **Nurse (M)** \(\uparrow\) & **Surgeon (F)** \(\uparrow\) \\ \hline \multicolumn{5}{c}{BIOS\({}_{\text{biased}}\) _(Artificially Unbalanced)_} \\ \hline baseline & 45.9 / 34.6 / 11.3 & 0.0 & 14.8 \\ \hline fair-gn & 81.7 / 82.1 / 0.4 & 61.5 & 69.1 \\ fair-dro & 53.5 / 60.6 / 7.1 & 0.0 & 48.5 \\ fair-sd & 48.7 / 50.5 / 1.8 & 0.0 & 38.7 \\ fair-dfl & 45.7 / 45.7 / 1.8 & 0.0 & 14.8 \\ \hline \multicolumn{5}{c}{BIOS\({}_{\text{balanced}}\) _(Artificially Balanced)_} \\ \hline baseline & 83.6 / 84.4 / 0.8 & 76.9 & 73.9 \\ \hline fair-gn & 84.8 / 84.2 / 0.6 & 74.1 & 73.5 \\ fair-dro & 84.8 / 85.0 / 0.2 & 74.1 & 79.2 \\ fair-sd & 83.5 / 86.2 / 2.6 & 71.4 & 80.0 \\ fair-dfl & 82.6 / 85.8 / 3.2 & 74.1 & 76.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Fairness-related metrics: macro-F1 (mF1) per group (male/female) and their absolute difference (Diff.), and worst-performing class (profession) per group, of fairness-promoting methods on the _ultra-biased_ or _debiased_ version of BIOS.
increases by \(0.8-1.4\%\) with respect to the baseline. We suspect that the positive impact in the case of ECtHR is partly a side-effect of a higher class imbalance (label-wise disparity); e.g., there are many more cases tagged with Art. 6 compared to the rest of the labels, as demonstrated in Table 1 (lower part). This is similar to the findings of Chalkidis and Søgaard (2022), who showed that fair-sd works particularly well under high class imbalance.
Fairness-promoting methods. In the case of BIOS, we observe that only fair-sd can slightly improve empirical fairness, reflected through a lower group disparity at the cost of a lower group performance for female (F), while the remaining fairness-promoting methods lead to similarly or more unfair performance. We observe similar results for ECtHR, where only two out of four methods (fair-sd, fair-dfl) are able to improve the performance for both groups (EE, R), while increasing the group disparity, as all other methods do.4 Concluding, we find that bias mitigation algorithms do not always lead to fairer models, which is in line with Cabello et al. (2023). Considering explainability-related metrics--faithfulness and human-model alignment as measured by R@k--for the fairness-promoting (bias mitigation) methods, we observe that improved empirical fairness does not lead to _better_ model explanations, neither for faithfulness (AOPC) nor for plausibility (R@k), when comparing fair-sd and fair-dfl with the baseline.
Footnote 4: We do not consider fair-gn in ECHR, since there is no straightforward way to anonymize (neutralize) information relevant to the defendant state, which is potentially presented in the form of mentions to locations, organization, etc..
Rationale Extraction Frameworks (REFs). Considering the results for the rationale extraction frameworks (REFs, see Section 4.2) presented in the lower part of Table 3, we observe that the overall
Figure 3: F1 and macro-F1 scores for the classes _surgeon_ and _nurse_ from the BIOS dataset for all methods per gender. Baseline is marked as \(\star\), fairness-promoting methods as \(\circ\), and REFs as \(\square\). We see a severe drop in performance for the underrepresented class (female surgeons and male nurses).
\begin{table}
\begin{tabular}{l c c c|c c c c} \hline \hline & \multicolumn{3}{c|}{**BIOS – Occupation Classification**} & \multicolumn{3}{c}{**ECtHR – ECHR Violation Prediction**} \\ \cline{2-9} \multirow{2}{*}{**Method**} & \multirow{2}{*}{**mF1**} & Empirical Fairness & Explainability & \multirow{2}{*}{**mF1**} & Empirical Fairness & Explainability \\ & & **mF1 (M / F / Diff.)** & & **AOPC** & **R@k** & **mF1 (EE / R / Diff.)** & **AOPC** & **R@k** \\ \hline baseline & **88.1** & 85.5 / **87.5** / 2.0 & 88.5 & **52.0** & 83.5 & 83.1 / 83.3 / **0.2** & 77.4 & 48.8 \\ \hline \multicolumn{9}{c}{_Optimizing for Fairness_} \\ \hline fair-gp & 87.8 & 83.8 / **87.5** / 3.7 & 88.0 & 47.8 & 83.9 & 83.5 / 81.8 / 2.5 & 77.0 & 50.5 \\ fair-gn & 87.8 & 82.5 / 86.8 / 4.2 & 88.0 & 48.7 & — & Not Applicable (N/A)4 & \multirow{2}{*}{**-**} \\ fair-dro & 87.6 & 84.2 / 86.4 / 2.2 & 88.4 & 48.8 & 83.9 / 80.6 / 3.0 & 77.9 & 49.8 \\ fair-sd & 87.9 & 85.6 / 86.6 / **1.0** & 88.5 & 49.4 & **84.9** & **84.2 / 87.1** / 2.9 & **78.8** & 49.9 \\ fair-dfl & 87.6 & 84.5 / 86.4 / 1.9 & 87.3 & 45.5 & 84.3 & 84.1 / 83.6 / 0.5 & 78.2 & 49.2 \\ \hline \multicolumn{9}{c}{_Optimizing for Explainability_} \\ \hline ref-base & 85.3 & 82.2 / 83.9 / 1.7 & 78.1 & 45.7 & 81.8 & 81.9 / 81.3 / 0.6 & 73.2 & **57.1** \\ ref-3p & 86.4 & 81.8 / 85.0 / 3.1 & 79.6 & 44.3 & 83.1 & 83.3 / 80.8 / 2.5 & 73.3 & 54.0 \\ ref-r2a & 86.1 & 82.4 / 85.4 / 3.0 & 82.9 & 50.7 & 82.8 & 82.6 / 83.4 / 0.8 & 74.5 & 50.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test Results for all examined methods. We report the overall macro-F1 (mF1), alongside fairness-related metrics: macro-F1 (mF1) per group and their absolute difference (Diff.), also referred to as group disparity; and explainability-related scores: AOPC for faithfulness and token R@k for human-model rationales alignment. The best scores across all models in the same group (fair-, ref-) are underlined, and the best scores overall are in **bold**. We present detailed validation and test results including standard deviations in Tables 5- 7.
performance (mF1) decreases by 2-3% in the case of BIOS, and by 0.5-2% for ECtHR, since the models' predictor only considers a subset of the original input, the rationale. All REFs that aim to improve explainability compromise empirical fairness (_i.e._, performance disparity) in both datasets.
When it comes to explainability, the results are less clear. For BIOS, both scores--faithfulness and human-model alignment--drop in comparison to the baseline, while all REF methods substantially improve human-model alignment (by 2-8%) in the case of ECtHR. For REFs, we also observe that an improvement in empirical fairness does not correlate with an improvement in explainability.
### Bias Mitigation \(\neq\) Empirical Fairness
Based on our findings in Section 5.3, we investigate the dynamics between bias mitigation and empirical fairness further. We examine the fairness-promoting methods on both datasets considering two indicators of bias: (a) the \(L2\) norm of the classification logits as a proxy for the model's over-confidence (also used as a penalty term by fair-sd), and (b) the accuracy of a probing classifier for predicting the attribute (gender/nationality). This probing classifier relies on a frozen encoder that was previously fine-tuned on the occupation/article classification task with a newly trained classification head. To avoid conflating bias with features learned for the main classification tasks, e.g., medical occupation classification for BIOS, we use new datasets, excluding documents with the original labeling, e.g., for BIOS we use 3K biographies for 5 non-medical professions to train the gender classifier. With this analysis, we want to quantify to what degree we can extract information on gender/nationality, from the biographies/court cases.
In Table 4, we report empirical fairness metrics and the bias indicators (proxies) for all examined methods, together with F1 scores for the _worst-case scenario_ (WC) across all classes and the difference in mF1 between the two groups from Table 3. First of all, with respect to BIOS, we observe that all fairness-promoting algorithms, except fair-gn, show a high accuracy for gender classification (\(>95\%\)) and are thus more biased than the baseline with respect to gender classification accuracy. Furthermore, the least biased classifier (fair-gn) is outperformed by all other fairness-promoting methods in both empirical fairness metrics. In the case of ECtHR, we observe that 3 out of 4 fairness-promoting methods decrease bias, shown by lower group accuracy and lower confidence scores (L2 norm) for fair-sd and fair-dfl. This does not lead to improvements in empirical fairness, with the exception of worst-case performance for fair-sd and fair-dfl.
## 6 Conclusion
We systematically investigated the interplay between empirical fairness and explainability, two key desired properties required for trustworthy NLP systems. We did so by considering five fairness-promoting methods and three rationale extraction frameworks across two datasets for multi-class classification (BIOS and ECtHR). Based on our results, we see that improving either empirical fairness or explainability does _not_ improve the other. Interestingly, many fairness-promoting methods do not mitigate bias nor promote fairness as intended, while we find that these two dimensions are also orthogonal (Figure 1). Furthermore, we see that (i) gender information is encoded to a large extent in the occupation classification task, and (ii) the only successful strategy to prevent this seems to be the normalization across gender during training. We conclude that trustworthiness, reflected through empirical fairness and explainability, is still an open challenge. With this work, we hope to encourage more efforts that target a holistic investigation and the development of new algorithms that promote both crucial qualities.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{Fairness (mF1)} & \multicolumn{2}{c}{Bias Proxies} \\ & **WC**\(\uparrow\) & **Diff.**\(\downarrow\) & \(|L2|\downarrow\) & **Group Acc.**\(\downarrow\) \\ \hline \multicolumn{4}{c}{**BIOS – Occupation Classification**} \\ \hline baseline & 85.5 & 2.0 & 12.6 & 93.2 \\ \hline fair-gp & 83.8 & 3.7 & 18.6 & 96.6 \\ fair-gn & 82.5 & 4.2 & 11.6 & 65.4 \\ fair-dro & 84.2 & 2.2 & 21.2 & 98.2 \\ fair-sd & 85.6 & 1.0 & 00.7 & 96.0 \\ fair-dfl & 84.5 & 1.9 & 06.5 & 96.2 \\ \hline \multicolumn{4}{c}{**ECtHR – ECtHR Violation Prediction**} \\ \hline baseline & 83.1 & 0.2 & 10.7 & 75.0 \\ \hline fair-gp & 81.8 & 2.7 & 11.3 & 69.6 \\ fair-dro & 80.6 & 3.0 & 16.7 & 76.2 \\ fair-sd & 84.2 & 2.9 & 00.4 & 72.4 \\ fair-dfl & 83.6 & 0.5 & 04.5 & 63.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Fairness- and bias-related metrics. We show again downstream task performance for _Worst-Case_ (WC) and the group-wise difference as indicators for empirical fairness. We further add \(L2\) norm of the classification logits as an indicator for (over-)confidence and accuracy for group classification both as bias proxies.
### Limitations
Our analysis is limited to English text classification datasets. In order to make general conclusions about the interplay between fairness and explainability, one needs to extend this analysis to other languages and downstream tasks.
Furthermore, we argue within the limited scope of specific definitions of fairness, bias and explainability for binary attributes. This analysis could be applied to further attributes. Also, we have not included human evaluation with respect to explainability, i.e., humans evaluating the rationales for usability and plausibility, see Brandl et al. (2022); Yin and Neubig (2022).
## Acknowledgements
We thank our colleagues at the CoAStaL NLP group for fruitful discussions in the beginning of the project. In particular, we would like to thank Daniel Hershcovich, Desmond Elliott and Laura Cabello for valuable comments on the manuscript.
EB is supported by the funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 801199. IC is funded by the Novo Nordisk Foundation (grant NNF 20SA0066568). SB receives funding from the European Union under the Grant Agreement no. 10106555, FairER. Views and opinions expressed are those of the author(s) only and do not necessarily reflect those of the European Union or European Research Executive Agency (REA). Neither the European Union nor REA can be held responsible for them.
|
2308.03726 | AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene
Segmentation | Segmentation is a fundamental problem in surgical scene analysis using
artificial intelligence. However, the inherent data scarcity in this domain
makes it challenging to adapt traditional segmentation techniques for this
task. To tackle this issue, current research employs pretrained models and
finetunes them on the given data. Even so, these require training deep networks
with millions of parameters every time new data becomes available. A recently
published foundation model, Segment-Anything (SAM), generalizes well to a large
variety of natural images, hence tackling this challenge to a reasonable
extent. However, SAM does not generalize well to the medical domain as is
without utilizing a large amount of compute resources for fine-tuning and using
task-specific prompts. Moreover, these prompts are in the form of
bounding-boxes or foreground/background points that need to be annotated
explicitly for every image, making this solution increasingly tedious with
higher data size. In this work, we propose AdaptiveSAM - an adaptive
modification of SAM that can adjust to new datasets quickly and efficiently,
while enabling text-prompted segmentation. For finetuning AdaptiveSAM, we
propose an approach called bias-tuning that requires a significantly smaller
number of trainable parameters than SAM (less than 2\%). At the same time,
AdaptiveSAM requires negligible expert intervention since it uses free-form
text as prompt and can segment the object of interest with just the label name
as prompt. Our experiments show that AdaptiveSAM outperforms current
state-of-the-art methods on various medical imaging datasets including surgery,
ultrasound and X-ray. Code is available at
https://github.com/JayParanjape/biastuning | Jay N. Paranjape, Nithin Gopalakrishnan Nair, Shameema Sikder, S. Swaroop Vedula, Vishal M. Patel | 2023-08-07T17:12:54Z | http://arxiv.org/abs/2308.03726v1 | # AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene Segmentation
###### Abstract
Segmentation is a fundamental problem in surgical scene analysis using artificial intelligence. However, the inherent data scarcity in this domain makes it challenging to adapt traditional segmentation techniques for this task. To tackle this issue, current research employs pretrained models and finetunes them on the given data. Even so, these require training deep networks with millions of parameters every time new data becomes available. A recently published foundation model, Segment-Anything (SAM), generalizes well to a large variety of natural images, hence tackling this challenge to a reasonable extent. However, SAM does not generalize well to the medical domain as is without utilizing a large amount of compute resources for fine-tuning and using task-specific prompts. Moreover, these prompts are in the form of bounding-boxes or foreground/background points that need to be annotated explicitly for every image, making this solution increasingly tedious with higher data size. In this work, we propose AdaptiveSAM - an adaptive modification of SAM that can adjust to new datasets quickly and efficiently, while enabling text-prompted segmentation. For finetuning AdaptiveSAM, we propose an approach called bias-tuning that requires a significantly smaller number of trainable parameters than SAM (less than 2%). At the same time, AdaptiveSAM requires negligible expert intervention since it uses free-form text as a prompt and can segment the object of interest with just the label name as the prompt. Our experiments show that AdaptiveSAM outperforms current state-of-the-art methods on various medical imaging datasets including surgery, ultrasound and X-ray. Code is available at [https://github.com/JayParanjape/biastuning](https://github.com/JayParanjape/biastuning)
Foundational Models, Medical Imaging, Segment Anything, Surgical Scene Segmentation.
## I Introduction
Segmentation of surgical instruments, tissues and organs is an important task in analyzing and developing surgical systems. In human-guided surgeries, segmenting anatomical structures is useful for developing computer-assisted systems for surgeons [1]. Semantic segmentation also plays a pivotal role in robotic surgery, where it is used for instrument tracking through the surgical process [2]. Deep learning-based solutions are widely adopted for tackling this problem; the most prominent among them are UNet [3] and its variants, which perform admirably well on surgical scene segmentation with proper training [4, 5, 6, 7]. However, these networks require substantial resources and time for every new dataset they are trained on. As shown in Figure 2, they generally comprise an encoder and a decoder that may have millions of parameters that need to be tuned for every new dataset. Hence, this limits the usage of these models.
A similar problem also occurs in non-medical vision-based application scenarios, which is mitigated to some extent with the advent of foundational models. Foundation models are trained on a large corpus of images across the internet on multiple GPUs, which makes them generalizable across different datasets. An example is CLIP [8] and its variants [9, 10], which are used for zero-shot visual recognition tasks. CLIP provides rich, generalized embeddings for images and text, which are closer to each other if the object related to the text appears in the image. Hence, embeddings for similar objects lie closer in the latent space. Similarly, the Segment-Anything Model (SAM) was recently released as a foundational model for prompted segmentation [11]. Given a prompt in the form of a bounding box, points, mask or text, SAM can segment out the object that is related to the prompt. There have been multiple efforts to adapt SAM for different medical imaging-based applications by leveraging its generalized nature with minimal training [2, 12, 13, 14, 15]. We give a brief overview of these methods in Figure 2. While these finetuning-based methods reduce the training time and resources over general methods that train from scratch, the number of trainable parameters and the train time is still large due to inefficient freezing of SAM's layers or excessive architectural changes. To alleviate this, we propose a simple yet efficient strategy called bias-tuning, i.e. only tuning the bias parameters in the network. The effectiveness of bias has been widely studied in the Natural Language Processing (NLP) literature [16] for fine-tuning large language models. Through our experiments and analysis, we found that tuning just the bias parameters and the normalization layers helps in effectively adapting a foundational model for surgical images. Another major issue with existing finetuning-based adaptation methods is that they include an additional constraint, i.e. the need for the correct prompts. For the existing adaptation methods to work well with surgical datasets, expert intervention is needed to manually annotate the bounding boxes or points for the objects of interest. These prompts are required for all images during test time, making these approaches tedious to deploy when we have large data sizes.
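While the full AdaptiveSAM architecture is described later, the core idea of bias-tuning can be sketched in a few lines; the name-matching heuristic below ('bias' suffix or 'norm' substring) is our own simplification, and the released repository may select parameters differently.

```python
import torch
from torch import nn

def enable_bias_tuning(model: nn.Module):
    """Freeze everything except bias terms and normalization-layer parameters
    (a heuristic stand-in for the bias-tuning selection described above)."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias") or "norm" in name.lower()

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable fraction: {100 * trainable / total:.2f}%")
```

The optimizer is then built only over the parameters left trainable, e.g. `torch.optim.Adam(p for p in model.parameters() if p.requires_grad)`.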
In this work, we propose AdaptiveSAM - a solution that combines the best of both worlds. Just like general finetuning methods, it does not require expert intervention, and just like recent SAM-adaptation methods, it can be trained quickly and with fewer resources using bias-tuning. To achieve this, we introduce a trainable affine layer that takes a text prompt as input and passes it on to the SAM model. We show this in Figure 2. The text prompt can simply be the label of interest and is required once for the entire dataset. Hence, our method does not require medical expertise, unlike inputting per-image bounding boxes as required by existing methods. As an example, if the label of interest is 'Grasper', our method only expects the word itself as the input instead of a bounding box around the grasper or a foreground point on the grasper. While the latter has to be provided by an expert surgeon, our method does not need any expert intervention. AdaptiveSAM shows considerable improvements over SAM in terms of control, as shown in Figure 1. SAM can be used without explicit bounding boxes or point prompts. However, as can be seen in Figure 1, it cannot associate a mask with a given label without a prompt. AdaptiveSAM overcomes this limitation by allowing text prompts as inputs. To summarize, this paper makes the following contributions:
1. We propose AdaptiveSAM - a text-prompted segmentation method for surgical datasets which requires no expert intervention and is able to produce highly precise results.
2. We develop a training strategy called bias-tuning that can adapt foundational models like SAM for specific tasks with minimal training and resources. By applying bias-tuning for AdaptiveSAM, we only finetune 2% of SAM's training parameters, making it trainable on a single GPU.
3. We achieve state-of-the-art (SOTA) results on three publicly available surgical scene segmentation datasets. We also show the generalizable nature of AdaptiveSAM with experiments on non-surgical datasets.
## 2 Related Work
### Surgical Scene Segmentation
Medical semantic segmentation recognizes and delineates different organs, tissues, or instruments in surgical images, enabling multiple downstream tasks like tracking and classification. In general, medical image segmentation is challenging owing to various factors like similarity between different organs, spectral shadows cast by the operating instruments, and so on [1]. Various Convolutional Neural Network (CNN) based approaches have been proposed to tackle this task [3, 4, 5, 6, 17, 18, 19]. Ronneberger et al. proposed the UNet [3] for this task, which motivated further research into this field. Since then, various methods have been proposed which follow the structure of UNet because of its simplicity and effectiveness [4, 5, 6, 7]. Other convolutional methods have also been proposed which first generate bounding boxes to identify the object of interest, followed by segmentation. Mask RCNN [17] and its variants [18, 19] are notable models of this type. With the advent of transformers [20], various networks have been proposed that improve the global representation of CNNs for surgical scene segmentation [21, 22]. SAM uses Vision Transformers for segmentation as well. However, all of the above approaches have a large number of parameters that need to be trained every time a new dataset is introduced. This limits the applicability of such methods to certain datasets or tasks.
### Adaptation of SAM for medical applications
A foundation model called SAM was recently released for the task of promptable segmentation [11]. SAM can segment images given prompts in the forms of bounding boxes, masks and points. However, SAM does not produce good results when used with medical images [23, 14, 24]. This is
Figure 1: Comparison of our method(AdaptiveSAM) with SAM. The text on the left denotes the prompt given to SAM and AdaptiveSAM. The datasets are indicated on the right. SAM, without any text prompts, produces good segmentation masks. However, we do not have control over segmenting specific objects without a manual prompt. If we add a text prompt to SAM as shown, we do not get good results, primarily because SAM is not trained with medical data or terminology. Our method produces good masks which can be controlled using text prompts. This makes it easier to use for surgical scene segmentation without the need for points/bounding boxes which must be manually given by experts.
because the training data of SAM primarily consists of natural images. However, multiple works have adapted SAM [12, 14, 15] for medical segmentation with appropriate modifications and training. MedSAM [14] trains only the decoder part of SAM while keeping the encoders frozen. Medical SAM Adapter [15] also adds low-rank adaptation layers to the image and prompt encoders which can be trained while keeping the original SAM weights frozen. However, this method is memory intensive: the additional layers are added to every transformer block in the network, and gradients need to be computed for these extra parameters and branches, adding computational overhead. AutoSAM [12] keeps the encoder frozen but adds a separate trainable predictor head that is CNN based, while [13] adds trainable CNN-based prompt layers after every transformer layer while keeping the SAM weights frozen. However, all of these methods are limited to using bounding boxes or points as prompts. Hence, if these models are to be deployed, experts would have to provide such prompts for every image or video frame, making them impractical. In our work, we overcome this limitation by efficiently adapting text-based capabilities of SAM for medical data.
## 3 Proposed Method
In this section, we introduce our approach for finetuning SAM for out-of-domain datasets. An overview of our approach can be found in Figure 2 (d).
### _Preliminaries: Architecture of SAM_
Segment Anything Model (SAM) [11] proposes a new task of prompted segmentation where given an image and a prompt, the output is a mask specific to the prompt. The prompt can be in the form of a point, bounding box, mask or text. The prompt-based conditioning is achieved by having a separate image encoder and a prompt encoder, followed by a decoder that fuses the outputs of the image and prompt encoders. The image encoder of SAM has a Vision Transformer [20] backbone and is pretrained with the Mask Auto-Encoder (MAE) strategy [25]. The prompt encoder supports four modalities for the prompts: point, bounding box, mask, and text. Point prompts are represented using positional embeddings and a foreground/background marker. Bounding box prompts are represented by the point embeddings of the top left and bottom right corners. Mask prompts are encoded using convolutions and text is encoded using CLIP pretraining [8]. The mask decoder is a lightweight transformer network, followed by a mask prediction head.
### _Architectural Changes_
Our goal is to adapt SAM to new surgical datasets for which SAM does not perform well. Finetuning based strategies [9, 14, 15] have previously been found effective and are shown to work better than training from scratch because of their good initialization of weights. In our work, we wish to use the capability of SAM to capture rich discriminative
Figure 2: Comparison of our method (d) with full finetuning (a) and other methods. (b) represents MedSAM [14], which finetunes only the decoder part of SAM. (c) represents Medical SAM Adapter [15], which also adds adapter layers inside the image encoder of SAM. Note that this method also adds similar adapter layers to the decoder instead of training all of its weights. This has not been shown in the figure for simplicity. Our method uses the SAM architecture as is but with trainable layer norms and biases. We also add a trainable Text Affine Layer (TAL), which transforms the text embeddings, helping SAM understand medical terminology.
features in the initial layers of the network. Hence, we use the same image encoder, prompt encoder and mask decoder as SAM and initialize their weights with the pretrained weights. Like SAM, text prompts are encoded using CLIP and images are passed to SAM's image encoder. However, CLIP fails to give meaningful text embeddings as CLIP is not trained with medical text terminology. Hence, to overcome this limitation we add a lightweight affine transformation layer that takes the pretrained CLIP embeddings as input and refines it to get medically relevant text embeddings as outputs. In Figure 2 (d), this is represented by the Text Affine Layer (TAL). The operation of TAL is defined as follows
\[y=BatchNorm(ReLU(W_{TAL}^{T}X+b_{TAL})), \tag{1}\]
where \(X\) denotes the input CLIP embeddings for the given text, and \(y\) represents the transformed embeddings. \(W_{TAL}\) and \(b_{TAL}\) are the weights and biases of the Text Affine Layer. To summarize, for a given image-text pair, the image is encoded using SAM's image encoder, text is encoded using CLIP, followed by the TAL, and is fed into SAM's prompt encoder. These two outputs are fused in SAM's mask decoder, producing the output mask. We refer to the final model after applying these modifications as AdaptiveSAM.
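To make the above concrete, a minimal PyTorch sketch of a layer implementing Eq. (1) is shown below. It is an illustrative reconstruction rather than the released implementation, and the CLIP text-embedding dimension of 512 is an assumption.

```python
import torch
import torch.nn as nn

class TextAffineLayer(nn.Module):
    """Sketch of the Text Affine Layer (TAL): y = BatchNorm(ReLU(W^T x + b))."""

    def __init__(self, embed_dim: int = 512):  # 512 assumed for a CLIP text encoder
        super().__init__()
        self.linear = nn.Linear(embed_dim, embed_dim)  # W_TAL, b_TAL in Eq. (1)
        self.norm = nn.BatchNorm1d(embed_dim)

    def forward(self, clip_embedding: torch.Tensor) -> torch.Tensor:
        # clip_embedding: (batch, embed_dim) frozen CLIP text embedding
        return self.norm(torch.relu(self.linear(clip_embedding)))

# Usage sketch: refine a batch of CLIP embeddings before SAM's prompt encoder.
tal = TextAffineLayer()
refined = tal(torch.randn(4, 512))  # -> (4, 512)
```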
### Training Strategy
Existing approaches for adapting SAM for data-specific applications generally freeze the parameter-heavy encoders and finetune the lightweight decoder [12, 14]. Some approaches also try to perform low-rank adaptation by adding extra trainable layers in SAM's encoder [15]. In our training method, we propose a simpler yet effective method for finetuning SAM, called bias-tuning. As shown in Figure 3, the image encoder of SAM has \(N\) blocks, each of which consists of a self-attention block, followed by a Layer Norm and an MLP block, with skip connections. Given an image, it is divided into patches and passed to the self-attention block. Here, each patch undergoes an affine transformation described as follows
\[qkv=(W_{qkv}^{n})^{T}x+b_{qkv}^{n}. \tag{2}\]
Here, \(x\) represents the input image patch, \(W\) and \(b\) represent the weights and biases of the corresponding affine transformation, respectively and \(n\) is used to index the layers in the encoder. The output is separated into three parts and are termed as query \((q)\), key \((k)\) and value \((v)\), which are used in the subsequent attention operation.
Note that when training such models, the weight matrices \(W_{qkv}\)s and their respective gradients are responsible for the majority of GPU memory usage. However, SAM is already trained with 1 billion masks and hence can inherently segment general objects with its weight matrices. The task, then, is to adapt it to the new context of surgical scenes, which can be achieved by adding a trainable shift parameter \(\bar{b}\) to the outputs of each affine transformation. This is described in Eqs. (3) and (4) as follows
\[qkv=(W_{qkv}^{n})^{T}x+b_{qkv}^{n}+\bar{b}_{qkv}^{n}, \tag{3}\]
\[o_{n}=(W_{MLP}^{n})^{T}x+b_{MLP}^{n}+\bar{b}_{MLP}^{n}. \tag{4}\]
Here, the output of a block is denoted by \(o_{n}\). During training, all other weights and biases are frozen and only the shift parameters are trainable, as shown in Figure 3. These are initialized with zeros and trained with a small learning rate. We call this approach bias-tuning because we are essentially modifying only the biases of the encoder.
In addition, we also train the positional embeddings to facilitate segmentation on multiple scales as well as the Layer Norm layers since they tend to be biased towards SAM's training dataset. Following MedSAM [14], we let the decoder be fully trainable to learn how to fuse medical text prompts and surgical image embeddings correctly. Overall, the number of trainable parameters amounts to less than 2% of SAM's trainable parameters.
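A minimal sketch of this parameter-selection logic is given below. Tuning the existing bias terms is numerically equivalent to adding zero-initialized shift parameters, so the sketch simply unfreezes them; the name patterns ("bias", "norm", "pos_embed", "mask_decoder") are assumptions about how a SAM-style checkpoint names its parameters and may need to be adjusted.

```python
import torch.nn as nn

def apply_bias_tuning(model: nn.Module) -> None:
    """Freeze everything except biases, normalization layers,
    positional embeddings and the mask decoder."""
    for name, param in model.named_parameters():
        trainable = (
            name.endswith("bias")                # shift parameters (bias-tuning)
            or "norm" in name.lower()            # Layer Norm weights and biases
            or "pos_embed" in name               # positional embeddings
            or name.startswith("mask_decoder")   # decoder is fully trainable
        )
        param.requires_grad = trainable

def trainable_fraction(model: nn.Module) -> float:
    total = sum(p.numel() for p in model.parameters())
    kept = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return kept / total  # expected to be on the order of a few percent
```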
Furthermore, in the surgical and medical domains, it is essential to build highly precise models with low false positive rates. In other words, if the text is 'Forceps' and there is no forceps present in the image, the output should be a blank mask. This property is lost if the model is solely trained on image-text pairs that only have non-zero masks, as is typically done in general-vision tasks. Hence, we also train the model with 'blank labels'. For a given image in a dataset, suppose there are \(N\) possible objects of interest. Then, for every image \(I\), we generate \(N\) image-text pairs irrespective of whether the object is present in the ground truth or not. We keep the ground truth mask empty for the latter case. We observe that performing training in this manner gives a boost in the ability of the model to generate correct (completely empty) masks when the object is indeed not present in the image. However, the approach introduces a significant data imbalance during training due to the total amount of empty regions in the training set ground truth. To mitigate this effect, we employ the focal loss defined as
\[\mathbb{L}_{focal}=\frac{1}{B}\sum_{i=1}^{B}\sum_{j=1}^{HW}\mathbb{L}_{ij}, \tag{5}\]
where
\[\mathbb{L}_{ij}=\begin{cases}-\alpha(1-p)^{\gamma}\log(p),&\text{if }y=1\\ -(1-\alpha)(p)^{\gamma}\log(1-p),&\text{if }y=0.\end{cases} \tag{6}\]
Here, \(\mathbb{L}_{ij}\) denotes the loss corresponding to pixel \(j\) in the \(i^{th}\) image, \(p\) corresponds to the probability of the prediction being 1, and \(y\) denotes the true label of the pixel. Note that \(B\) represents the batch size, \(H\) represents the height, and \(W\) represents the width of the mask. The hyperparameters \(\alpha\) and \(\gamma\) vary among datasets and need to be set through validation. As can be seen from the above equations, this loss significantly reduces the weight given to trivial predictions during training, thereby preventing the model from always producing blank masks.
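A possible PyTorch realization of Eqs. (5)-(6) is sketched below; the per-pixel sum followed by a batch mean mirrors Eq. (5), and the default hyperparameters correspond to the Endovis settings reported in the experimental setup (\(\alpha=0.75\), \(\gamma=3\)). This is an illustrative sketch, not the authors' exact code.

```python
import torch

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.75, gamma: float = 3.0) -> torch.Tensor:
    """Focal loss over binary masks; logits and targets have shape (B, H, W)."""
    p = torch.sigmoid(logits)                                   # P(pixel = 1)
    loss_pos = -alpha * (1 - p) ** gamma * torch.log(p.clamp(min=1e-8))
    loss_neg = -(1 - alpha) * p ** gamma * torch.log((1 - p).clamp(min=1e-8))
    per_pixel = torch.where(targets > 0.5, loss_pos, loss_neg)  # Eq. (6)
    return per_pixel.flatten(1).sum(dim=1).mean()               # Eq. (5)
```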
## 4 Experiments and Results
We evaluate the proposed method on three publicly available surgical scene segmentation datasets - Endovis17 [26], Endovis18 [27] and Cholec-Seg8k [28]. For each of these datasets, we calculate the instrument-wise Dice scores for
comparison. Following [29], we define the Dice Score (DSC) as follows
\[DSC=\begin{cases}\frac{2*|Y\cap\hat{Y}|}{|Y|+|\hat{Y}|},&\text{if }(|Y|+|\hat{Y}|)\neq 0 \\ 1,&\text{otherwise}.\end{cases} \tag{7}\]
We also calculate the Intersection over Union (IoU) Metric for comparison which is defined as
\[IoU=\begin{cases}\frac{|Y\cap\hat{Y}|}{|Y\cup\hat{Y}|},&\text{if }(Y\cup\hat{Y}) \neq\phi\\ 1,&\text{otherwise}\end{cases} \tag{8}\]
where \(Y\) represents the true segmentation map and \(\hat{Y}\) represents the predicted segmentation map. In what follows, we describe the implementation details and the results corresponding to each of these datasets. In addition, we show the generalization capability of our method on non-surgical scenes by conducting experiments on ultrasound and X-ray imaging datasets.
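For reference, a small NumPy sketch of Eqs. (7) and (8) is given below; the only subtlety is the convention of scoring 1 when both the ground truth and the prediction are empty.

```python
import numpy as np

def dice_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """DSC per Eq. (7); both masks are binary arrays of the same shape."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    denom = y_true.sum() + y_pred.sum()
    return 1.0 if denom == 0 else 2.0 * (y_true & y_pred).sum() / denom

def iou_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """IoU per Eq. (8); returns 1 when the union is empty."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    union = (y_true | y_pred).sum()
    return 1.0 if union == 0 else (y_true & y_pred).sum() / union
```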
### Experimental Setup
In our experiments, we initialize the AdaptiveSAM image encoder with the pretrained ViT-base model for SAM. We augment the training images with random rotations up to an angle of 10 degrees, and random saturation and brightness changes with scale factor 2. For Endovis 17 and Endovis 18, we set hyperparameters \(\alpha=0.75\) and \(\gamma=3\). For the Cholec-Seg8k dataset, we set these as \(\alpha=0.5\) and \(\gamma=0\). For all datasets, we train with a batch size of 32; training is performed on a single Nvidia Quadro RTX 8000 GPU and requires around 12 GB of memory. We use the AdamW optimizer with a fixed learning rate of 1e-4 for all cases.
### Datasets
The Endovis 17 (EV17) dataset [26] contains eight videos for training and ten videos for testing from the Da-Vinci robotic system. We use two videos out of the training videos for validation. This dataset has labels for six robotic instruments, including Grasping Forceps, Bipolar Forceps, Large Needle Driver, Grasping Retractor, and Monopolar Curved Scissors. The Endovis 18 (EV18) dataset [27] is a robotic instrument clinical dataset that includes organs and surgical items. It has sixteen sequences for training and four sequences for testing. We use four sequences from the training set for validation. This dataset has annotations for ten objects including different organs, tissue and surgical instruments. Finally, the Cholec-Seg8k dataset [28] has annotations for twelve objects including organs, tissues, and two surgical instruments. We follow the train, validation and test splits as described in [1].
### Results
**Results on Endovis 17 (EV17).** Results corresponding to EV17 are tabulated in Table 1. This dataset has a total of ten test sets. We average the classwise Dice scores and IoU scores across all ten test datasets. We compare the performance of our method with recent convolutional and transformer-based state-of-the-art segmentation methods [30, 31, 3]. We also use SAM with a text prompt without any training (called Zero Shot performance of SAM or SAM-ZS) [11] for comparison. As can be seen from Table 1, UNet [3] and TransUNet [30] perform poorly on EV17. This is because of the severe class imbalance present in the dataset, which convolutional methods like UNet are unable to tackle since their training loss does not account for it. TransUNet has a transformer-based encoder which requires a greater amount of data to produce good embeddings. However, EV17 has various classes that are only present in a few frames in a single sequence. This affects the training process for the method. Med-T [31] was introduced to overcome this weakness of requiring more data points. However, we observe that for EV17, Med-T tends towards producing blank masks even when the object is present in the image, showing its dependence on the label distribution of the data. In contrast, AdaptiveSAM makes use of the highly
Figure 3: An overview of bias-tuning. The first step of the attention module is the generation of keys, queries and values from the input. This step requires affine transformations as shown to the right. In bias-tuning, we keep all weights of SAM frozen. To account for the changed context from general images to surgical scenes, we introduce trainable biases that are added to each affine transform output. This includes the multi-head attention as well as the MLP layer. Note that the latter is not shown in the figure explicitly for simplicity.
generalized nature of SAM to produce good results with low amount of data and training. We also observe a significant jump in performance over the zero-shot performance of SAM (68% jump in DSC), showcasing the effectiveness of our adaptation technique for this dataset.
**Results on Endovis 18 (EV18).** Results corresponding to EV18 are tabulated in Table 2. The dataset has 4 sequences for testing. We generate results for all of them for every object and report the average object-wise Dice scores and IoU scores across the four test datasets. Among the current state-of-the-art methods, [32] proposes an interesting approach of grouping objects and calculating the groupwise metrics. For the 10 objects in the dataset, they propose grouping them into five classes - Organs, Robotic parts, Suturing, Other Tools and Background. Hence, in Table 2, we report the groupwise scores as well. In addition, we compare our approach with the winning method of the Endovis 18 challenge [27]. Please note that all of these are methods that trained all the network parameters. AdaptiveSAM achieves significantly higher DSC and IoU scores over these methods, with an improvement of 8% in DSC and 14% in IoU score. We also observe a significant improvement of 21% in the DSC over the zero-shot performance of SAM.
**Results on Cholec-Seg8k.** Results corresponding to Cholec-Seg8k are shown in Table 3. We follow the splits from previous research for reporting our scores [1]. Among the twelve classes of objects present in this dataset, there is limited representation of five classes - Blood, Hepatic Vein, Connective Tissue, Liver Ligament and Cystic Duct. Hence, these were grouped into one Miscellaneous category in previous research [1]. However, we keep these as separate entities so as to allow our Text Affine Layer to distinguish between these categories and the background. On average across all classes, we see an improvement of 2% over the SOTA method, as seen in the table. Furthermore, AdaptiveSAM improves the zero-shot performance of SAM by 60% in DSC.
Please note that we do not include the results from MedSAM [14] and Medical SAM Adapter [15] since these methods require providing a point or bounding box input per image, which must be done manually. Hence, it would not be fair to compare them against methods that do not require such intervention. We also observe that SAM-ZS performs poorly on all three datasets. SAM-ZS uses CLIP to generate prompt embeddings from the given text. However, CLIP is trained on general datasets and hence, we see that SAM-ZS does not perform well when used as is. AdaptiveSAM learns a lightweight transformation over the frozen CLIP embeddings that enables it to train well. SAM-ZS serves as a baseline as well as an ablation for AdaptiveSAM, thus showing the effectiveness of our method over the zero-shot performance of SAM. For all of the above datasets, we observe that AdaptiveSAM significantly improves upon this. As mentioned earlier, AdaptiveSAM is able to use the generalized weights learnt by SAM while successfully shifting its biases to adapt to the data shift observed between natural and surgical scene images. Hence, with less training time and fewer resources, it is able to adapt SAM to perform better than existing SOTA methods.
**Qualitative Evaluation of Results.** Qualitative results corresponding to our experiments on surgical scene segmentation are shown in Figure 4. As can be seen from this figure, AdaptiveSAM is capable of producing highly precise masks for different types of text prompts. For other methods that do not require text prompts, the models output segmentation maps for each of the classes. We use the following rule
\[\text{final mask}_{i}=\begin{cases}1,&\text{if \emph{argmax(M, dim=0)}}=i\\ 0,&\text{otherwise}\end{cases} \tag{9}\]
to get the final class-wise mask for comparison. Here, \(i\) represents the class index and \(M\) represents the output of the segmentation model. \(M\) is assumed to have a shape of \(C\times H\times W\), where \(C\) is the number of classes, \(H\) is the height and \(W\) is the width of the mask. AdaptiveSAM will correctly output a blank mask when a particular object is absent in the image. Furthermore, in comparison to other methods, it produces less noisy outputs. This can be attributed to the property of identifying complete 'objects' exhibited by the original SAM model. This property is also retained by AdaptiveSAM, which allows it to produce closed masks with less noise.
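A short NumPy sketch of this conversion rule (Eq. 9) is shown below, assuming the baseline returns a \(C\times H\times W\) score map.

```python
import numpy as np

def classwise_masks(scores: np.ndarray) -> np.ndarray:
    """Turn a (C, H, W) score map into C binary masks via per-pixel argmax."""
    winners = np.argmax(scores, axis=0)              # (H, W) winning class index
    return np.stack([(winners == i).astype(np.uint8)
                     for i in range(scores.shape[0])])
```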
**Location Learning Capabilities.** AdaptiveSAM can also learn spatial correspondences between the text and the image. As shown in Figure 5 (a), there are two large needle drivers present in the image. With the input text as "Left Large Needle Driver", AdaptiveSAM only segments out the needle driver on the left side of the image. Similarly, as shown in Figure 5 (b), only the right needle driver is present in the image. Hence, for the text prompt "Left Large Needle Driver", a blank mask is returned. This capability can be learnt with the correct annotations during training and is made possible with the use of free-form text.
### Results on Non-Surgical Datasets
To show the significance of our approach on non-surgical datasets, we apply AdaptiveSAM on the Abdominal Ultrasound Dataset [33] for the task of object segmentation. This dataset has real and synthetic images of abdominal ultrasounds with segmentation masks available for eight classes - Liver, Kidney, Pancreas, Vessels, Adrenals, Gallbladder, Bones and Spleen. While training data only consists of simulated ultrasound images, the test data has both real and synthetic data. We get an average DSC of 0.48 on the synthetic test set and 0.58 on the real test set, which is significantly greater than other methods, as shown in Table 4. Some qualitative results are shown in Figure 6 Rows 1 and 2. Note that ultrasound imaging is a separate modality from surgical images, thus leading to a significant domain shift in the images, even though the labels are similar. However, AdaptiveSAM is able to perform well on these images as well, producing precise and less noisy masks.
We also test our method on one more modality, namely X-ray images. We use the ChestXDet Dataset [34], which has
X-ray images and annotations for thirteen classes - Effusion, Nodule, Cardiomegaly, Fibrosis, Consolidation, Emphysema, Mass, Calcification, Pleural Thickening, Pneumothorax, Fracture, Atelectasis and Diffuse Node. We observe that AdaptiveSAM significantly outperforms other methods as shown in Table 5 since it can handle imbalance in the dataset better than other methods. Figure 6 Rows 3 and 4 show some qualitative results on this dataset.
## 5 Conclusion
In this paper, we introduced an adaptation of SAM, called AdaptiveSAM, that allows it to perform well on surgical data. We also developed a novel and lightweight method called bias-tuning that allows efficient finetuning of large-scale models like SAM, making them more application-specific with minimal training and resources. Further, we also introduced text-prompted segmentation for surgical datasets, which addresses the problem of expert supervision required by other adaptation methods. AdaptiveSAM only requires the user to provide the object of interest once, using free-form text instead of specifying a foreground point/bounding box for every image, as required by existing methods. Hence, it is easier to use. Adding text as input also opens up the possibility of enhancing AdaptiveSAM to understand more complex queries in the future. We motivate this by showcasing its ability to identify spatial correspondences between text and image. We show that our method outperforms existing state-of-the-art methods on three widely used surgical datasets. Finally, we also show that bias-tuning and text-prompted segmentation can be generalized to non-surgical datasets as well, by comparing the performance of AdaptiveSAM with existing SOTA segmentation methods on ultrasound and X-ray modalities.
|
2309.02711 | Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic
Extension | Symmetry, a fundamental concept to understand our environment, often
oversimplifies reality from a mathematical perspective. Humans are a prime
example, deviating from perfect symmetry in terms of appearance and cognitive
biases (e.g. having a dominant hand). Nevertheless, our brain can easily
overcome these imperfections and efficiently adapt to symmetrical tasks. The
driving motivation behind this work lies in capturing this ability through
reinforcement learning. To this end, we introduce Adaptive Symmetry Learning
(ASL) -- a model-minimization actor-critic extension that
addresses incomplete or inexact symmetry descriptions by adapting itself during
the learning process. ASL consists of a symmetry fitting component and a
modular loss function that enforces a common symmetric relation across all
states while adapting to the learned policy. The performance of ASL is compared
to existing symmetry-enhanced methods in a case study involving a four-legged
ant model for multidirectional locomotion tasks. The results demonstrate that
ASL is capable of recovering from large perturbations and generalizing
knowledge to hidden symmetric states. It achieves comparable or better
performance than alternative methods in most scenarios, making it a valuable
approach for leveraging model symmetry while compensating for inherent
perturbations. | Miguel Abreu, Luis Paulo Reis, Nuno Lau | 2023-09-06T04:47:46Z | http://arxiv.org/abs/2309.02711v1 | # Addressing Imperfect Symmetry: a Novel Symmetry-Learning Actor-Critic Extension
###### Abstract
Symmetry, a fundamental concept to understand our environment, often oversimplifies reality from a mathematical perspective. Humans are a prime example, deviating from perfect symmetry in terms of appearance and cognitive biases (e.g. having a dominant hand). Nevertheless, our brain can easily overcome these imperfections and efficiently adapt to symmetrical tasks. The driving motivation behind this work lies in capturing this ability through reinforcement learning. To this end, we introduce Adaptive Symmetry Learning (ASL) -- a model-minimization actor-critic extension that addresses incomplete or inexact symmetry descriptions by adapting itself during the learning process. ASL consists of a symmetry fitting component and a modular loss function that enforces a common symmetric relation across all states while adapting to the learned policy. The performance of ASL is compared to existing symmetry-enhanced methods in a case study involving a four-legged ant model for multidirectional locomotion tasks. The results demonstrate that ASL is capable of recovering from large perturbations and generalizing knowledge to hidden symmetric states. It achieves comparable or better performance than alternative methods in most scenarios, making it a valuable approach for leveraging model symmetry while compensating for inherent perturbations.
## I Introduction
Symmetry is a multidisciplinary concept that plays a fundamental role in how we perceive the world, from abstract beauty and harmony, up to precise definitions in physics, biology, mathematics, and many other disciplines. In the scope of this paper, symmetry is studied through geometry, and, more precisely, group theory. Weyl [1] takes a top-down approach to define symmetry, starting with a vague notion of proportion and balance, then delving into the intuitive notion of reflections and rotations, and ending with the characterization of invariance with respect to a group of automorphisms (i.e. mappings of a mathematical object onto itself that preserve the structure of space).
Weyl acknowledges that in reality, the actual group of automorphisms may be unknown, leading researchers to narrow the invariant conditions, for a precise analysis to be possible. In the context of this paper, let us start with a biology example that doubles as motivation. Consider the act of throwing a crumpled paper into a bin at a reasonable distance. Assume a bilaterally symmetrical non-ambidextrous human. The dominant arm/hand will easily find several trajectories for the paper to reach the bin and, with enough training, the other arm will be able to find equivalent mirrored trajectories.
This means that the human brain is able to learn how to compensate for internal imbalances to produce a symmetric outcome. Internal imbalances include differences in arm strength, weight and other physical attributes, as well as handedness and other cognitive biases. Additionally, the brain may also have to compensate for external factors, e.g. wearing a thick glove on the left hand.
### _Symmetry Perturbations_
These symmetry perturbations do not imply an asymmetric relation between both actions. Assume the human's initial state \(s_{0}\) is a bilaterally symmetrical pose. The motor cortex generates muscle contractions by producing neural activity patterns. The patterns that result in throwing with the dominant arm/hand represent action \(a_{d}\), while the patterns that deal with the other arm represent action \(a_{z}\). There might be a map \(g_{s_{0}}:\mathcal{A}_{s_{0}}\rightarrow\mathcal{A}_{s_{0}}\), where, in state \(s_{0}\), for every action \(a_{d}\in\mathcal{A}_{s_{0}}\) there is an action \(a_{z}\in\mathcal{A}_{s_{0}}\) that results in a symmetric outcome, where \(\mathcal{A}_{s_{0}}\) is a group of symmetrizable actions in state \(s_{0}\). In this context, actions are not symmetrizable when, for instance, a shorter arm cannot perform the same role as a longer arm.
When the map's estimate \(\hat{g}_{s_{0}}\) is a simplification of the ground truth, we say that the symmetry description it provides is incomplete or inexact. Symmetry perturbation, hereinafter referred to simply as perturbation, is any factor that stands between \(\hat{g}_{s_{0}}\) and \(g_{s_{0}}\), affecting the mapping between pairs of symmetric actions, i.e., actions that lead to two symmetric states when applied to the same initial state. Note that this analysis only makes sense for state-action pairs, as the invariance that characterizes symmetric actions concerns the state that follows. These notions will be formalized in Section II, while defining Markov Decision Processes and their homomorphisms.
Despite the search for symmetry for aesthetic or functional purposes, our brain has evolved to be asymmetric [2], both in terms of specialized regions and motor skills. About 9.33% of humans are mixed-handed, while only around 1% or 2% are ambidextrous, depending on the classification criteria [3]. Symmetry might not always mean better performance, and this uncertainty must be taken into account. Translating this concept to reinforcement learning (RL), it would mean that symmetry should guide the learning process, but never to the detriment of the primary optimization objective.
### _Model Minimization_
This paper presents a case study of a four-legged ant model with a total order of symmetry of 8, used to learn multi-directional locomotion tasks under the influence of multiple symmetry perturbations. The robot's symmetry group includes 4 planes of symmetry and 4-fold rotational symmetry, where one rotation is an identity operation, resulting in 7 non-identity transformations. This is a challenging task, where finding a good policy in an efficient way heavily depends on how the learning algorithm explores the symmetry group.
Most symmetry-leveraging techniques fall within the temporal or spatial domains. The former, which pertains to invariance under temporal transformations such as scaling or inversion, is relatively uncommon in robotic tasks. Techniques centered around spatial symmetry are more prevalent, and can be divided into multiple areas, as described in the following paragraphs.
#### I-B1 Relabeling states and actions
The first spatial symmetry technique concerns the permutation of roles of equivariant parts in the state and action spaces. For instance, consider a bilaterally symmetrical robot that throws a ball with one arm, while stabilizing itself with the other. The policy could learn independent actions for a left throw and a right throw. However, it could abstract the side by just considering a _throwing arm_ and a _stabilizing arm_. The user just has to relabel both arms when assigning the state, depending on the preferred throwing side for the next episode. After the policy computes an action, the user has to relabel them again to assign the resulting action correctly to both arms. This approach can be implemented directly in the simulator without requiring modifications to the optimization algorithm.
#### I-B2 Data augmentation
Data augmentation is commonly used in RL to enhance sample efficiency and stability, with experience replay, introduced by Lin [4], being one of its main precursors. In the scope of this work, data augmentation involves the creation of symmetric copies of real experiences. Using the last example, after the robot threw the ball with its left arm, the RL algorithm would also receive artificially created samples of the same experience, but executed with the right arm. Ultimately, there is twice as much information to learn from.
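As a sketch of how such symmetric copies could be generated for a model-free algorithm, the snippet below mirrors each stored transition with the state map \(f\) and action map \(g\); the reward is reused unchanged under the assumption that symmetric state-action pairs receive equal reward. The function name and transition layout are illustrative choices, not part of any cited implementation.

```python
def augment_with_symmetry(transitions, f, g):
    """Return the original transitions plus their mirrored copies.

    Each transition is a (state, action, reward, next_state) tuple;
    f and g are the state and action symmetry maps.
    """
    augmented = list(transitions)
    for state, action, reward, next_state in transitions:
        augmented.append((f(state), g(action), reward, f(next_state)))
    return augmented
```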
#### I-B3 Symmetric Networks
This approach enforces symmetry constraints directly on the policy, by modifying the network architecture. In domains where symmetry is a requirement of optimal solutions, this is a robust method. However, this assumption cannot be made for many robotic tasks, including locomotion, where it is unclear whether perfect symmetry is mechanically preferable. Real robots exhibit imperfections like humans, who learn to compensate for asymmetries and perceive the resulting gaits as normal or unimpaired [5, 6].
#### I-B4 Symmetry Loss Function
In optimization problems, a loss function quantifies the cost of an event. In RL, this function is responsible for maximizing a cumulative reward over time, while also sustainably guiding the learning process. In the ball throwing robot example, the reward could measure how close the ball is to the target. Also, if symmetry was desired, it could be added to the reward as a scalar value, but there is a better alternative.
A reward is useful when evaluating a metric that requires interaction with the environment. Symmetry does not have such limitation, as its compliance can be measured through analytical functions that take the policy as input. By incorporating this knowledge into the primary loss function, the algorithm is able to steer the policy's parameters in the right direction through a gradient.
Despite not providing symmetry guarantees, like the relabeling method or symmetric networks, a loss function is the most flexible approach, both at design and runtime stages. It allows users to dynamically regulate the importance of symmetry and fine-tune the learning process. However, current contributions fall short in addressing symmetry perturbations. To bridge this gap, our work introduces a novel approach to leverage model symmetry, while learning and compensating for symmetry perturbations in the system.
The main contributions of this work can be summarized as:
1. We introduce Adaptive Symmetry Learning (ASL) 1 -- a model-minimization actor-critic extension that is able to handle incomplete or inexact symmetry descriptions by adapting itself during the learning process;
2. ASL is composed of two parts -- symmetry fitting, described above; and a new loss function, capable of applying the learned descriptions based on their importance, while actively avoiding neutral states and disadvantageous updates;
3. We propose modifications to two existing symmetry loss functions -- MSL [30] and PSL [31] -- extending them with value losses and the capacity to handle non-involutory transformations, such as most rotations;
4. We present the case study of an ant robot, where we compare the performance of MSL, PSL, ASL, and vanilla PPO [39] in a set of ant locomotion scenarios. We test controlled and realistic symmetry perturbations, as well as partial goal observation, where the policy must discover the best symmetric gait without exploring the symmetric side, while acknowledging the existence of perturbations.
Footnote 1: [https://github.com/m-abr/Adaptive-Symmetry-Learning](https://github.com/m-abr/Adaptive-Symmetry-Learning)
## II Preliminaries
Typical reinforcement learning problems can be described as a Markov Decision Process (MDP) - a tuple \(\langle\mathcal{S},\mathcal{A},\Psi,p,r\rangle\), with a set of states \(\mathcal{S}\), a set of actions \(\mathcal{A}\), a set of possible state-action pairs \(\Psi\subseteq\mathcal{S}\times\mathcal{A}\), a transition function \(p(s,a,s^{\prime}):\Psi\times\mathcal{S}\rightarrow[0,1]\), and a reward function \(r(s,a):\Psi\rightarrow\mathrm{I\!R}\).
### _MDP transformations_
Model reduction allows the exploitation of redundant or symmetric features. To this end, Ravindran and Barto [7] proposed a mathematical formalism to describe MDP homomorphisms -- a transformation that groups equivalent states and actions. An MDP homomorphism \(h\) from \(M=\langle\mathcal{S},\mathcal{A},\Psi,p,r\rangle\) to \(\hat{M}=\langle\hat{\mathcal{S}},\hat{\mathcal{A}},\hat{\Psi},\hat{p},\hat{r}\rangle\) can be defined as a surjection \(h:\Psi\rightarrow\hat{\Psi}\), which is itself defined by a tuple of surjections \(\langle f,\{g_{s}|s\in\mathcal{S}\}\rangle\). In other words, equivalent state-action pairs
in \(M\) are mapped by \(h\) to the same abstract state-action pair in \(\hat{M}\). For \((s,a)\in\Psi\), the surjective function \(h((s,a))=(f(s),g_{s}(a))\), where \(f:\mathcal{S}\rightarrow\hat{\mathcal{S}}\) and \(g_{s}:\mathcal{A}_{s}\rightarrow\hat{\mathcal{A}}_{f(s)}\) for \(s\in\mathcal{S}\), satisfies:
\[\hat{p}(f(s),g_{s}(a),f(s^{\prime}))=\sum_{s^{\prime\prime}\in[s^{\prime}]_{B}}p(s,a,s^{\prime\prime}),\quad\forall s,s^{\prime}\in\mathcal{S},a\in\mathcal{A}_{s}, \tag{1}\] \[\text{and}\quad\hat{r}(f(s),g_{s}(a))=r(s,a),\quad\forall s\in\mathcal{S},a\in\mathcal{A}_{s}, \tag{2}\]
where \(B\) is a partition of \(\mathcal{S}\) into equivalence classes, and \([s^{\prime}]_{B}\) denotes the block of partition \(B\) to which state \(s^{\prime}\) belongs.
MDP symmetries constitute a specialization of the described framework, where \(f\) and \(g_{s},s\in\mathcal{S}\) are bijective functions and, consequently, the homomorphism \(h=\langle f,\{g_{s}|s\in\mathcal{S}\}\rangle\) from \(M\) to \(\hat{M}\) is an isomorphism. Additionally, since symmetries can be characterized as MDP isomorphisms from and to the same MDP, they are automorphisms, which simplifies the homomorphism conditions (1) and (2):
\[p(f(s),g_{s}(a),f(s^{\prime})) =p(s,a,s^{\prime}),\quad\forall s,s^{\prime}\in\mathcal{S},a\in \mathcal{A}_{s}, \tag{3}\] \[\mathrm{and}\quad r(f(s),g_{s}(a)) =r(s,a),\quad\forall s\in\mathcal{S},a\in\mathcal{A}_{s}. \tag{4}\]
When referring to \(g_{s}(a)\), the state \(s\) corresponding to the state-action pair \((s,a)\in\Psi\) is implicit, and will thus be omitted in the remainder of this document; the action transformation is simply denoted as \(g(a)\). As MDP automorphisms, symmetries can be defined in a single MDP, \(M\). In the aforementioned ball throwing robot example, assume that starting with the ball in the left hand (state \(s\)), the robot will perform an optimal left throw (action \(a\)). In a mirrored initial state \(f(s)\), a mirrored action \(g(a)\) is guaranteed to be optimal if all states in \(\Psi\) are symmetrizable. Equation (3) holds in this scenario because the probability of ending in an optimal state by performing action \(a\) in \(s\) is the same as by performing \(g(a)\) in \(f(s)\). As for (4), it also holds, since the reward in both cases should be the same; otherwise it would contain an asymmetric bias.
### _Policy notation_
When optimizing a policy \(\pi\) parameterized by \(\theta\), the notion of symmetric action has two possible interpretations: \(g(a)\), which is the result of mapping \(a\) through function \(g\); and \(\pi_{\theta}(f(s_{t}))\), which represents the action chosen by the policy for the symmetric state. Frequently, the former is used as a target value for the latter. Stochastic policies that follow a normal distribution \(\mathcal{N}(\mu,\sigma)\) are characterized by a mean \(\mu\), usually parameterized in RL by a neural network, and a standard deviation \(\sigma\), optimized independently for exploration. The parameterized mean value \(\mu_{\theta}\) is particularly useful for symmetry losses, as it represents the current policy without exploration noise, or, more formally, \(\mu_{\theta}(\cdot)=\pi_{\theta}(\cdot\mid\sigma=0)\). Hereinafter, \(a_{t}\) denotes an action sampled from a stochastic policy \(\pi_{\theta}\) at time \(t\), while \(\overline{a}_{t}\) represents the mean of the distribution when \(a_{t}\) was sampled. Extending this notation to the already introduced symmetry mapping yields:
\[\overline{a}_{t} =\mu_{\theta_{\text{old}}}(s_{t}), \tag{5}\] \[\overline{a}^{\prime}_{t} =\mu_{\theta_{\text{old}}}(f(s_{t})), \tag{6}\]
where \(\theta_{\text{old}}\) means that \(\overline{a}_{t}\) and \(\overline{a}^{\prime}_{t}\) were obtained from a snapshot of \(\theta\) and thus are not a function of the current \(\theta\). In other words, variables obtained from \(\theta_{\text{old}}\) are constants, not trainable during optimization.
### _Symmetric policy_
Zinkevich and Balch [8] define that for a policy to be functionally homogeneous, if \(\pi(s)=a\), then \(\pi(f(s))=g(a)\), for all \((s,a)\in\Psi\). When dealing with reflection symmetries, objects are invariant under involutory transformations, i.e., obtaining the symmetric object and the original again just requires applying the same transformation twice. Mathematically, a function \(f\) that maps reflection symmetries is an involution (\(f=f^{-1}\)). In this context, a policy \(\pi\) is symmetric if \(\pi(s)=g(\pi(f(s))),\forall s\in\mathcal{S}\).
However, some symmetric transformations, including scaling, rotation and translation cannot always be inverted through the same function. Rotation around an axis, for instance, is only involutory when the transformation angle \(r\) in radians is a multiple of \(\pi\) (\(r=k\pi,k\in\mathbb{Z}\)). For example, if \(r=\pi/4\), the previous symmetric policy definition fails because after the state is rotated by \(\pi/4\) radians, the action is also rotated by the same angle, not yielding the original action.
Generalizing the symmetric policy definition to noninvolutory transformations only requires changing the side of one operation, such that \(g(\pi(s))=\pi(f(s)),\forall s\in\mathcal{S}\). Care has to be taken when devising symmetry loss functions, to abide by this principle and avoid the negative effect that noninvolutory transformations would otherwise cause.
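The toy NumPy check below illustrates the difference, assuming a hypothetical rotation-equivariant policy \(\pi(s)=2s\) in two dimensions: the generalized condition holds, while the involutory form fails because \(g\) ends up being applied twice. The specific policy and rotation angle are assumptions made only for illustration.

```python
import numpy as np

def rot(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

R = rot(np.pi / 4)               # non-involutory: R @ R is not the identity
f = lambda state: R @ state      # state symmetry map
g = lambda action: R @ action    # action symmetry map
pi = lambda state: 2.0 * state   # toy rotation-equivariant policy

s = np.array([1.0, 0.5])
print(np.allclose(g(pi(s)), pi(f(s))))   # True:  g(pi(s)) == pi(f(s))
print(np.allclose(pi(s), g(pi(f(s)))))   # False: pi(s) != g(pi(f(s)))
```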
### _Neutral states_
A state \(s\) is said to be neutral if it is invariant under \(f\), i.e., \(s=f(s)\). Under a symmetric policy, neutral states are inescapable, in the sense that all future states are guaranteed to also be neutral, unless the environment introduces its own bias. This problem occurs in numerous scenarios, from board games to robotic tasks. Consider the problem of a humanoid robot trying to initiate walking by taking its first step, but at the beginning of the episode, the environment is in a neutral state, \(s_{0}=f(s_{0})\). The symmetric policy cannot raise the left foot to take the first step in \(s_{0}\) because that would mean the right foot should be raised in \(f(s_{0})\), but this is impossible because there are two actions for the same state and \(\pi(s_{0})\neq g(\pi(s_{0}))\). So, one possible option would be to jump with both legs, but after that, it would still be stuck in a neutral state unless the environment introduced some noise.
Perfectly symmetric policies are therefore inadequate for neutral states. Moreover, neighboring states are also affected due to the generalization ability of neural networks and the fact that they represent continuous functions. A solution based on a symmetry exclusion region around neutral states is presented in Section IX-B.
## III Related work
This section builds upon the model minimization overview from Section I-B and explores specific techniques, along with their advantages and disadvantages. In terms of temporal symmetry, achieving time inversion requires a conservative system without energy loss, such as a frictionless pendulum controlled through RL, as proposed by Agostini and Celaya [9]. However, this assumption is impractical for real-world systems, making this approach less desirable. As for spatial symmetry, the contributions fall into the previously established categories:
#### III-1 Relabeling states and actions
Zeng and Graham [10] and Ildefonso et al. [11] create symmetry-reduced experiences in actor-critic and value-based domains, respectively, by shrinking the state space, running the policy, and finally restoring the actions using data from the original state. Surovik et al. [12] combine relabeling states and actions with reflection transformations to swap frame-dependent values, such as lateral rotation, to effectively reduce the state volume of a tensegrity robot.
However, this same idea can be applied for robot locomotion with simpler models. If, instead of left and right, we think of "stance" and "non-stance" leg for a biped robot, we can relabel them every half cycle to obtain a symmetric controller [13, 14]. Peng et al. [15] apply this relabeling method at fixed intervals of 0.5 s, forcing a fixed period on the gait cycle. This idea is simple to implement but the policy is constrained by some symmetry switch, whether it is based on time or behavioral pattern.
#### III-2 Data augmentation
In symmetry oriented solutions, data augmentation can be used with model-based [16] or model-free RL algorithms, although the scope of this work is limited to the latter alternative. Examples of successful applications of this technique include a real dual-armed humanoid robot that moves objects [17], the walking gait of several humanoid models [18] and a quadruped with more than one plane of symmetry [19], among others [9, 20]. If changing the loss function is possible, it is always preferable to data augmentation in terms of computational efficiency and memory footprint, despite the additional initial effort.
#### III-3 Symmetric Network
Concerning humanoid models, Abdolosseini et al. [18] introduced a symmetric network architecture that forces perfect symmetry. This method guarantees that if states and actions are symmetrically normalized, the behavior has no asymmetric bias. However, as the authors acknowledge, neutral states are inescapable. Pol et al. [21] generalize this approach to additional problems by introducing MDP homomorphic networks, which can be automatically constructed by stacking equivariant layers through a numerical algorithm. The equivariance constraints (under a group of reflections or rotations) are applied to the policy and value networks. Encoding invariances in neural networks [22, 23, 24, 25, 26] and the group of network solutions in general lacks the ability to control the symmetry enforcement at runtime and the ability to symmetrize asymmetric models.
#### III-4 Symmetry Loss function
Mahajan and Tulabandhula [27, 28] proposed an automated symmetry detection process and a method of incorporating that knowledge in the Q-learning algorithm [29]. A soft constraint was presented as an additional loss for the Q-function approximator, \(\hat{\mathbb{E}}_{t}[(Q_{\theta}(f(s_{t}),g(a_{t}))-Q_{\theta}(s_{t},a_{t}))^{2}]\), where \(\theta\) is the parameter vector of \(Q\), and the expectation \(\hat{\mathbb{E}}_{t}\) indicates the empirical average over a batch of samples. Yu et al. [30] transpose this idea to policy gradients by fusing a new curriculum learning method with a homologous loss function -- the Mirror Symmetry Loss (MSL), \(\sum_{i}\|\pi_{\theta}(s_{i})-g(\pi_{\theta}(f(s_{i})))\|^{2}\). The authors present very good results when both approaches are used simultaneously.
In previous work, we have presented the Proximal Symmetry Loss (PSL) [31], a loss function aimed at increasing the sample efficiency of the Proximal Policy Optimization (PPO) algorithm [39] by leveraging static model symmetries. The loss function was built to allow asymmetric exploration in models without symmetry perturbations. It leverages the trust region concept to harmonize its behavior with PPO's exploration, which is especially noticeable in smaller batch sizes.
In the next section we will delve into the details of MSL and PSL, while suggesting some modifications to expand the functionality of both loss functions. Then, we introduce the Adaptive Symmetry Learning, with the ability to handle imperfect symmetry by adapting itself during the learning process.
## IV Symmetry Loss Function
A symmetry loss function computes a value that we seek to minimize in order to obtain a symmetric behavior. Formally, if the robot performs action \(a\in\mathcal{A}_{s}\) in state \(s\in\mathcal{S}\), it should also perform the symmetric action \(g(a)\) in the symmetric state \(f(s)\). Since symmetry is not always the main objective while learning a new behavior, a good symmetry objective function should allow asymmetric exploration in favor of a better policy.
To understand the effect of the proposed loss function, we must analyze the most common implementation of PPO, where the policy and value networks do not share parameters. The PPO's objective \(L^{PPO}\) can then be expressed as:
\[L^{PPO}(\theta,\omega)=\hat{\mathbb{E}}_{t}\left[L^{C}_{t}(\theta)-L^{VF}_{t}(\omega)+cH(\theta)\right], \tag{7}\] \[\text{with}\quad L^{C}_{t}(\theta)=\min\left(r_{t}(\theta)\hat{A}_{t}\;,\;(1+\text{sgn}(\hat{A}_{t})\epsilon)\hat{A}_{t}\right), \tag{8}\] \[\text{and}\quad r_{t}(\theta)=\frac{\pi_{\theta}(a_{t}\mid s_{t})}{\pi_{\theta_{old}}(a_{t}\mid s_{t})}, \tag{9}\]
where the stochastic policy \(\pi_{\theta}\) is parameterized by \(\theta\) and the value function by \(\omega\). \(\pi_{\theta_{old}}\) is a copy of the policy before each update, \(\hat{A}_{t}\) is the estimator of the advantage function, \(L^{VF}_{t}\) is a squared error loss to update the value function, \(\epsilon\) is a clipping parameter, \(c\) is a coefficient and \(H\) is the policy's distribution entropy. The expectation \(\hat{\mathbb{E}}_{t}\) indicates the empirical average over a finite batch of samples. The main objective of this algorithm is to keep the policy update within a trust region, preventing greedy updates that can be detrimental to learning. This behavior is formalized in the surrogate objective \(L^{C}_{t}\), as depicted in Fig. 1 for a single time step.
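For concreteness, a minimal PyTorch sketch of the clipped surrogate in Eqs. (8)-(9) is shown below; the sign-based clipping is equivalent to the more common form that clips \(r_{t}\) to \([1-\epsilon,1+\epsilon]\), and \(\epsilon=0.2\) is only a typical default, not a value prescribed here.

```python
import torch

def ppo_surrogate(logp_new: torch.Tensor, logp_old: torch.Tensor,
                  advantages: torch.Tensor, eps: float = 0.2) -> torch.Tensor:
    """Batch average of L^C_t = min(r_t * A_t, (1 + sgn(A_t) * eps) * A_t)."""
    ratio = torch.exp(logp_new - logp_old)                       # r_t, Eq. (9)
    clipped = (1.0 + torch.sign(advantages) * eps) * advantages  # clipped branch
    return torch.minimum(ratio * advantages, clipped).mean()     # Eq. (8)
```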
### _Extended loss function_
PPO's objective \(L^{PPO}\) can be extended with an arbitrary symmetry loss \(L^{S}\), such that
\[L^{PPO+S}(\theta,\omega)=L^{PPO}(\theta,\omega)-L^{S}(\theta, \omega), \tag{10}\] \[\text{with}\quad L^{S}(\theta,\omega)=\hat{\mathbb{E}}_{t}\left[\,w _{\pi}\cdot L_{t}^{\pi}(\theta)+w_{V}\cdot L_{t}^{V}(\omega)\,\right], \tag{11}\]
where \(L_{t}^{\pi}\) and \(L_{t}^{V}\) are symmetry loss functions that influence the update gradient of the policy (actor) and value function (critic), respectively.
In \(L_{t}^{C}\), the ratio \(r_{t}\) starts at 1 and tends to stay close to that value during optimization. The advantage estimate \(\hat{A}_{t}\) is z-score normalized in typical PPO implementations [32, 33], resulting in a mean of 0. Consequently, \(L_{t}^{C}\)'s value is usually stable, with a low order of magnitude, which is important to retain when devising extensions, to preserve harmonious interactions among all losses.
### _Generalized Symmetry Loss_
The symmetry loss function can be expanded to an arbitrary number of symmetries with minimal changes. Let \(j\in\{1,2,3,...,N\}\) be a symmetry index for a total of \(N\) symmetries. Functions \(f_{j}(s)\) and \(g_{j}(a)\) apply symmetry transformation \(j\) to state \(s\), and \(s\)-dependent action \(a\), respectively. Accordingly, (11) can be rewritten as
\[L^{S}(\theta,\omega)=\hat{\mathbb{E}}_{t}\big[(w_{1}^{\pi}\cdot L_{1,t}^{\pi}(\theta)+w_{1}^{V}\cdot L_{1,t}^{V}(\omega))+(w_{2}^{\pi}\cdot L_{2,t}^{\pi}(\theta)+w_{2}^{V}\cdot L_{2,t}^{V}(\omega))+\ldots+(w_{N}^{\pi}\cdot L_{N,t}^{\pi}(\theta)+w_{N}^{V}\cdot L_{N,t}^{V}(\omega))\big]=\sum_{j=1}^{N}\hat{\mathbb{E}}_{t}\big[w_{j}^{\pi}\cdot L_{j,t}^{\pi}(\theta)+w_{j}^{V}\cdot L_{j,t}^{V}(\omega)\big]. \tag{12}\]
## V Mirror Symmetry Loss
The Mirror Symmetry Loss (MSL) function proposed by Yu et al. [30], \(w_{\pi}\cdot\sum_{i}\|\mu_{\theta}(s_{i})-g(\mu_{\theta}(f(s_{i})))\|^{2}\), where \(w_{\pi}\) is a weight factor, computes the square error between the mean action taken by the stochastic policy before and after the symmetric transformation. On the one hand, this means that after crossing a certain asymmetry threshold, the loss dominates the policy gradient. On the other hand, if that threshold is crossed to the symmetric side, the symmetry loss loses influence in the policy update. Therefore, \(w_{\pi}\) does not dictate the weight of the symmetry loss in a consistent way when computing the gradient, but rather the position of the symmetry threshold. Moreover, since the square error is based on actions instead of probabilities, the weight of the symmetry loss is also dependent on the action space.
Another problem of this formulation is the assumption of involutory transformations, a pitfall introduced in Section II-C. As aforementioned, this issue can be solved by changing where the action transformation is applied. Additionally, the method can be expanded with a value loss, in order to leverage the critic in actor-critic methods. Note that these modifications are a generalization of the original method and do not restrict its previous abilities in any way. The improved mirror symmetry loss function becomes
\[L^{MSL}(\theta,\omega)=\hat{\mathbb{E}}_{t}\left[\,w_{\pi}\cdot L_{t}^{\pi}(\theta)+w_{V}\cdot L_{t}^{V}(\omega)\,\right], \tag{13}\] \[\text{with}\quad L^{\pi}(\theta)=\hat{\mathbb{E}}_{t}\big\|\,g(\mu_{\theta}(s_{t}))-\mu_{\theta}(f(s_{t}))\,\big\|^{2}, \tag{14}\] \[L^{V}(\omega)=\hat{\mathbb{E}}_{t}\left[\,(V_{\omega}(f(s_{t}))-V_{t}^{\text{targ}})^{2}\,\right], \tag{15}\]
where \(L^{\pi}\) and \(L^{V}\) are the policy and value losses for symmetry, and \(w_{\pi}\) and \(w_{V}\) are the respective weight factors. The value loss reduces the difference between the value function (for the symmetric state) and the same value target used by PPO.
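A minimal PyTorch sketch of the improved losses (14)-(15) is given below, assuming the caller supplies batched mean actions \(\mu_{\theta}(s_{t})\) and \(\mu_{\theta}(f(s_{t}))\), a differentiable action map \(g\), and PPO's value targets; the reduction (sum over action dimensions, mean over the batch) is an assumption rather than a prescribed choice.

```python
import torch

def msl_policy_loss(mu_s: torch.Tensor, mu_fs: torch.Tensor, g) -> torch.Tensor:
    """Eq. (14): squared norm of g(mu(s_t)) - mu(f(s_t)), averaged over the batch."""
    return ((g(mu_s) - mu_fs) ** 2).sum(dim=-1).mean()

def msl_value_loss(v_fs: torch.Tensor, v_target: torch.Tensor) -> torch.Tensor:
    """Eq. (15): regress V(f(s_t)) towards the value target used by PPO."""
    return ((v_fs - v_target) ** 2).mean()
```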
The generalization in (12) can be applied to MSL by deriving \(L_{j,t}^{\pi}\) and \(L_{j,t}^{V}\) from (14) and (15), respectively,
\[L_{j,t}^{\pi}(\theta)=\big\|\,g_{j}(\mu_{\theta}(s_{t}))-\mu_{\theta}(f_{j}(s_{t}))\,\big\|^{2}, \tag{16}\] \[L_{j,t}^{V}(\omega)=(V_{\omega}(f_{j}(s_{t}))-V_{t}^{\text{targ}})^{2}. \tag{17}\]
## VI Proximal Symmetry Loss
The Proximal Symmetry Loss (PSL) builds on the trust region concept of PPO to reduce the model's asymmetry iteratively [31]. In contrast with MSL, PSL can already handle non-involutory operations. However, it was also generalized in this work to include a value loss component, such that
\[L^{PSL}(\theta,\omega)=\hat{\mathbb{E}}_{t}\left[\,w_{\pi}\cdot L_{t}^{\pi}(\theta)+w_{V}\cdot L_{t}^{V}(\omega)\,\right], \tag{18}\] \[\text{with}\quad L^{\pi}(\theta)=-\hat{\mathbb{E}}_{t}\left[\,\min(x_{t}(\theta),1+\epsilon)\,\right], \tag{19}\] \[L^{V}(\omega)=\hat{\mathbb{E}}_{t}\left[\,(V_{\omega}(f(s_{t}))-V_{t}^{\text{targ}})^{2}\,\right], \tag{20}\]
where \(\epsilon\) is a clipping parameter shared with \(L^{PPO}\), and \(x_{t}\) is a symmetry probability ratio.
In contrast to (8), \(L^{\pi}\) does not rely on an advantage estimator to determine the update direction, as its goal is to consistently increase or preserve symmetry. However, the RL algorithm can still opt to decrease symmetry if it benefits the policy, prioritizing \(L_{t}^{C}\) over \(L_{t}^{S}\) in (10). Furthermore, although
Fig. 1: Plots for PPO’s surrogate objective function \(L^{C}\) as a function of ratio \(r\), for a single time step, for a positive advantage estimate (on the left) or a negative advantage estimate (on the right).
the symmetry probability ratio shares similarities with (9), there are some distinctions:
\[x_{t}(\theta)=\frac{\min(\pi_{\theta}(g(\overline{a}_{t})\mid f(s_{t})),\pi_{ \theta_{old}}(a_{t}\mid s_{t}))}{\pi_{\theta_{old}}(g(\overline{a}_{t})\mid f( s_{t}))}, \tag{21}\]
where \(\overline{a}_{t}=\mu_{\theta_{old}}(s_{t})\) as defined in (5). Here, \(\pi_{\theta}(\cdot\mid s)\) and \(\pi_{\theta_{old}}(\cdot\mid s)\) denote the probability distributions during and before the update, respectively, in accordance with the notation provided in Section II-B. The only term that is not constant during a policy update is \(\pi_{\theta}(g(\overline{a}_{t})\mid f(s_{t}))\), and only \(\theta\) is being optimized. Note that \(\theta_{old}\) is constant because it represents the policy's parameters before the update.
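As an illustration of (19) and (21), the sketch below computes the symmetry probability ratio and the clipped PSL policy term for a diagonal Gaussian policy. The distribution arguments and helper names are assumptions for this sketch, not the reference implementation.

```python
import torch

def psl_policy_loss(dist_new_sym, dist_old_sym, dist_old, actions,
                    sym_actions, eps):
    """Proximal Symmetry Loss policy term, a sketch of Eqs. (19) and (21).

    dist_new_sym : current policy distribution  pi_theta(. | f(s_t))
    dist_old_sym : pre-update distribution      pi_theta_old(. | f(s_t))
    dist_old     : pre-update distribution      pi_theta_old(. | s_t)
    actions      : sampled actions a_t
    sym_actions  : g(a_bar_t), the transformed pre-update means g(mu_theta_old(s_t))
    """
    # densities of the symmetric action under the new and old policies
    p_new = dist_new_sym.log_prob(sym_actions).sum(-1).exp()
    p_old = dist_old_sym.log_prob(sym_actions).sum(-1).exp()
    # density of the explored action under the old policy
    p_expl = dist_old.log_prob(actions).sum(-1).exp()
    # symmetry probability ratio, Eq. (21)
    x = torch.minimum(p_new, p_expl) / p_old
    # clipped objective, Eq. (19): promote symmetry only inside the trust region
    return -torch.clamp(x, max=1.0 + eps).mean()
```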
### _Unidirectional update_
The policy is modified so that the action \(\pi_{\theta}(f(s_{t}))\) carried out in the symmetric state \(f(s_{t})\) becomes symmetric to the action taken in the explored state \(s_{t}\). Note that ratio \(x_{t}\) in (21) promotes an adjustment to \(\pi_{\theta}(f(s_{t}))\) without changing \(\pi_{\theta}(s_{t})\). In contrast, the mirror symmetry loss attempts to reduce the distance between \(\pi_{\theta}(f(s_{t}))\) and \(\pi_{\theta}(s_{t})\) by promoting changes to both. This approach is generally adequate, except when there are exploration imbalances in symmetric states. One instance is when a humanoid robot is learning how to navigate a maze, but it initially experiences more left turns than right turns.
While exploring left turns, it makes sense to adapt the right turning ability to match the symmetric technique, but not vice versa. Adopting the right turning knowledge to change the left turning skill would be detrimental in this case. Yet, when the robot actually explores right turns, it will try to adapt the symmetric action in the wrong direction, but the effect will be diluted by the low probability of experiencing right turns. In summary, adapting the symmetric action is better than the explored action, because the explored action is more likely to be optimal than the symmetric action. In an extreme case, the symmetric state may be unreachable, making the symmetric action unreliable.
### _Generalized Proximal Symmetry Loss_
Similarly to MSL, PSL can also be expanded to an arbitrary number of symmetries by employing (12) and deriving \(L^{\pi}_{j,t}\) and \(L^{V}_{j,t}\), yielding:
\[L^{\pi}_{j,t}(\theta) =-\min(x_{j,t}(\theta),1+\epsilon), \tag{22}\] \[L^{V}_{j,t}(\omega) =(V_{\omega}(f_{j}(s_{t}))-V^{\text{true}}_{t})^{2}, \tag{23}\] \[\text{with}\quad x_{j,t}(\theta) =\frac{\min\left(\pi_{\theta}(g_{j}(\overline{a}_{t})\mid f_{j}(s_{t})),\pi_{\theta_{old}}(a_{t}\mid s_{t})\right)}{\pi_{\theta_{old}}(g_{j}(\overline{a}_{t})\mid f_{j}(s_{t}))}. \tag{24}\]
## VII Adaptive Symmetry Learning: Overview
The previous symmetry loss functions follow the provided symmetry transformations strictly and update the policy accordingly. The main purpose of these loss functions is to find a symmetric outcome, independently of perturbations in the reward function, the agent (control process), the robot or the environment. As an example, consider a humanoid robot with a rusty left arm. The limb still has the same range of motion as the right one, but the amount of torque required to produce the same outcome is different. Assuming that for a given task, no motor needs to reach its maximum torque, there is a symmetry transformation that maps the torque difference between both arms to achieve the equivalent outcome in the symmetric state. For this example, the proposed algorithm finds a function that rectifies biases and achieves a symmetric outcome.
Nevertheless, a symmetric outcome is not always desirable, either due to biases introduced by the environment in favor of asymmetric behaviors, or due to explicit problem constraints expressed through the reward function. To generalize the algorithm to these cases, its purpose can be redefined as adapting symmetric transformation(s) in order to maximize the return. In general terms, this can be achieved by finding functions that best describe the relation between sets of two action dimensions across all states and symmetric transformations.
An ASL overview is shown in Fig. 2. Initially, in addition to defining hyperparameters for the actor-critic method (PPO) and the symmetry algorithm, the user has to provide the same information required by previous symmetry loss functions -- the best known estimate for every symmetry transformation. Each transformation must include a vector of functions to transform states and actions. Symmetry transformations can be omitted if the user deems them to be unrelated or detrimental to the problem being optimized, but this is not required.
The algorithm starts by running the current policy and exploring the action space to gather a batch of samples (states, actions, rewards). Symmetric observations and actions are computed and stored, and neutral states are excluded based on a user-defined threshold (see Section IX-B for details on the exclusion method). Then, the provided symmetry transformations undergo a small adjustment in the direction of the policy and, finally, the policy is updated. The final loss used to compute the gradient that updates the policy is a sum of PPO's loss and the Adaptive Symmetry Loss. So, in this last step, the policy undergoes a small adjustment in the direction of the symmetry
Fig. 2: Adaptive Symmetry Learning algorithm overview
transformation.
Consequently, there is a circular relation between reinforcement learning and symmetry fitting. However, the former learns the behavior (the best action for each state), while the latter learns constant state-independent perturbations from several sources, as enumerated in the beginning of this section. An alternative perspective is to consider the symmetry fitting as a moving average of the symmetric transformations learned by the policy (where the most recent average sample is restricted to the last batch of explored states). A more detailed explanation is given in the following sections.
## VIII Adaptive Symmetry Learning: symmetry fitting
An overview of the symmetry fitting process is shown in Fig. 3. Before learning, the algorithm extracts and groups types of mappings between action dimensions (from the provided symmetry transformations). Using humans as an example, with respect to the sagittal plane, each two distinct body parts that mirror each other form symmetric _pairs_, while central body parts like the head form reflexive relations (_singles_). Circular symmetry relations of 3 or more body parts (_cycles_) do not exist in humans. Nonetheless, models with _cycles_ have additional restrictions as will be discussed in Section VIII-A4.
The process is executed after the exploration phase, when local symmetry transformations are fitted to the policy. The term _local_ indicates that only the last batch of states from the exploration phase is used to generate the dataset. Local transformations are used as target for the global transformation estimators, according to an update weight. This weight is based on cycle restrictions, if they exist, otherwise a fixed weight is used, as defined by the user. Finally, the targets used to update each function in the transformation vectors are assessed in terms of stability to compute a function weight employed by the symmetry loss to penalize unstable functions. These topics will be reviewed in detail, in the following paragraphs, through an example robot model.
Inspired by a triangular lamina example by McWeeny [34], consider a 2D robot shaped as an equilateral triangle with 3 controllable limbs that are able to rotate around axes located at each vertex of the robot (see Fig. 4, top left). For the symmetry of a figure to be absolutely known, all its non-equivalent symmetry properties must be determined [35]. Assuming the robot cannot be flipped, it has 6 non-equivalent symmetry operations characterized by 3 symmetry planes (**a**, **b**, **c**) and 3-fold rotational symmetry (**d**, **e**), i.e., it is invariant under rotations of 120, 240 or 360 deg, although the last one is an identity operation. Therefore, in total, only 5 symmetry operations are to be considered (see Fig. 4, top middle).
Rotating 240 deg is equivalent to performing two 120 deg rotations, so, in theory, only the 120 deg rotation would need to be specified. Yet, currently, ASL does not allow the specification of combined symmetry operations, which means that all 5 specifications are required. Despite this limitation, the algorithm will automatically simplify redundant operations, as explained in the end of the next section.
### _Setup_
The state space of the robot is represented by 4 observed variables \([X_{0},X_{1},X_{2},Z]\), where \(X_{i}\) is the angular position of limb \(i\) relative to its neutral position (let the positive rotation direction be counterclockwise), and \(Z\) is the battery level. The purpose of \(Z\) is to introduce a variable that is not affected by symmetry transformations. In the top right corner of Fig. 4 is represented an arbitrary state \(s\), and in the bottom are represented the states that result from applying the 5 aforementioned symmetric transformations (e.g. \(f_{\textbf{a}}(s)\) is the reflection of \(s\) in relation to symmetry plane **a**).
Below \(f_{\textbf{a}}(s)\) to \(f_{\textbf{e}}(s)\) are two rows of values that characterize the respective transformation. The top row specifies how to rearrange the variables of \(s\) (observation indices) and the
Fig. 4: 2D robot shaped as an equilateral triangle with 3 limbs, 3 symmetry planes (**a**, **b**, **c**) and 3-fold rotational symmetry (**d**, **e**). \(f_{\textbf{a}}(s)\), \(f_{\textbf{b}}(s)\), \(f_{\textbf{c}}(s)\), \(f_{\textbf{d}}(s)\) and \(f_{\textbf{e}}(s)\) are symmetry transformations of \(s\) relative to **a**, **b**, **c**, **d** and **e**, respectively. The observations indices and multipliers characterize the transformations above.
Fig. 3: Overview of the symmetry fitting process
bottom row indicates if an inversion is necessary (observation multipliers). Transformation \(f_{\mathbf{a}}\), for instance, is a vector of functions such that
\[f_{\mathbf{a}}(s)=[f_{\mathbf{a}}[0](s[0]),\ f_{\mathbf{a}}[1](s[2]),\ f_{ \mathbf{a}}[2](s[1]),\ f_{\mathbf{a}}[3](s[3])]. \tag{25}\]
Given the observation multipliers in Fig. 4, it is possible to simplify the equation to
\[f_{\mathbf{a}}(s)=[-s[0],-s[2],-s[1],s[3]]. \tag{26}\]
The new value for \(X_{0}\) is given by \(X_{0}\cdot-1\) (same limb but inverted rotation direction), but limb 1 gets the inverted value of limb 2 and vice versa. Since the battery level is not affected by this transformation, \(Z\gets Z\cdot 1\). As observed, the rotation direction was inverted for every limb, and the same is true for \(f_{\mathbf{b}}(s)\) and \(f_{\mathbf{c}}(s)\), but not for \(f_{\mathbf{d}}(s)\) and \(f_{\mathbf{e}}(s)\), which require no inversion.
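A minimal sketch of how such an index/multiplier specification could be applied to a state vector, using the values of \(f_{\mathbf{a}}\) from Fig. 4 (the helper name and example numbers are hypothetical):

```python
import numpy as np

# Observation indices and multipliers for the 2D robot's plane a (Fig. 4):
# the new state is built by permuting entries and flipping signs.
obs_indices_a = np.array([0, 2, 1, 3])         # which source entry feeds each slot
obs_multipliers_a = np.array([-1, -1, -1, 1])  # sign flip per slot (battery unchanged)

def f_a(state: np.ndarray) -> np.ndarray:
    """Symmetry transformation of a state w.r.t. plane a, Eq. (26)."""
    return obs_multipliers_a * state[obs_indices_a]

# example: limbs at +10, -20, +30 deg, battery at 0.8
s = np.array([10.0, -20.0, 30.0, 0.8])
print(f_a(s))  # -> [-10.  -30.   20.    0.8]
```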
The robot has 3 action variables \([Y_{0},Y_{1},Y_{2}]\), where \(Y_{i}\) is the torque applied to limb \(i\). In this case, given action \(a\), the symmetry transformations \(g_{\mathbf{a}}(a)\), \(g_{\mathbf{b}}(a)\), \(g_{\mathbf{c}}(a)\), \(g_{\mathbf{d}}(a)\) and \(g_{\mathbf{e}}(a)\) are analogous to the already introduced transformations, excluding the battery level (see the respective action indices and multipliers in Fig. 5).
As mentioned in the beginning of Section VII, in some circumstances, it is possible to achieve a symmetric outcome with perturbations in the control process, the robot or the environment. Therefore, it is possible to leverage the same symmetric principles if the motors of the 2D robot behave differently due to several factors (e.g. different power specifications, unbalanced control drivers, or external factors such as rust). As a basic example, consider that limb 1 is rusty and requires double the normal torque to perform the same movement, and limb 2 requires triple the normal torque. Until either motor reaches its maximum torque, the symmetric transformations can be described as in Fig. 6.
In some situations the symmetric transformations that lead to the highest return may not be known a priori. In Fig. 6, the transformation operation can be defined as a linear equation of the form \(y=mx\), where \(x\) is an arbitrary action and \(m\) is the action multiplier vector. On the one hand, different problems may require more complex functions to describe the required operation. On the other hand, the policy (typically parameterized by a neural network) is already able to represent complex non-linear relations. So, if a linear function can approximate the symmetric relation to a satisfactory degree, it should be preferred, as complex functions have increased learning time, potentially counteracting the sample efficiency benefit of using symmetries.
During its initialization, the symmetry fitting process extracts pairs of symmetric action dimensions from the user-defined action indices. In the current scenario, there are 3 pairs {(0,1),(0,2),(1,2)}, which means that the 15-dimensional operation represented in Fig. 5 and Fig. 6 can be reduced to a 3-dimensional problem. The relations between each pair of action dimensions can be written in the form \(a^{\prime}[y]=m_{x\to y}\cdot a[x]\). Fig. 7 shows how the action multipliers in Fig. 6 could be reduced to functions of the three proportional relations between pairs \(m_{0\to 1}\), \(m_{0\to 2}\) and \(m_{1\to 2}\).
#### VIII-A1 Generate dataset for pairs
After running the policy and collecting a batch of explored states, the symmetry fitting procedure can be executed. The first step is to generate a dataset to fit the user-defined function. This step is repeated for every pair of action dimensions, but for the sake of conciseness, only pair (0,1) will be analyzed. The objective is to fit a function \(\mathrm{F}:\mathrm{I}\!\mathrm{R}\rightarrow\mathrm{I}\!\mathrm{R}\), such that \(a^{\prime}_{t}[1]=\mathrm{F}(a_{t}[0])\) for \(t\in\{0,1,...,B\}\), where \(B\) is the batch size. The direction of such relation is \(0\to 1\) because it transforms the action associated with limb 0 to that of limb 1.
The dataset does not depend on actions sampled during exploration. Instead, in line with previously introduced symmetry loss functions, it uses the distribution mean to reduce noise. It is limited to states that were explored because the process of fitting symmetric transformations relies on what the policy has learned. Including rare or unreachable states would be counterproductive since the policy could produce unreliable results.
The dataset is generated as seen in Table I, with one column for input values and another one for output values. There are four variations of the same relation in symmetry transformations \(g_{\mathbf{c}}(a)\), \(g_{\mathbf{d}}(a)\) and \(g_{\mathbf{e}}(a)\). Relation (I) in \(g_{\mathbf{d}}(a)\) (see Fig. 7) follows the desired direction \(0\to 1\) (because under
Fig. 6: Hypothetical action indices and multipliers in a scenario where limb 1 is rusty and requires double the normal torque and limb 2 requires triple the normal torque. This transformation holds while both affected motors are below saturation.
Fig. 7: Rewriting the action multipliers from Fig. 6 as a function of three pair relations of the form \(a^{\prime}[y]=m_{x\to y}\cdot a[x]\). Highlighted in red are action multipliers that depend on \(m_{0\to 1}\).
Fig. 5: Action indices and multipliers that characterize the 2D robot’s symmetry transformations \(g_{\mathbf{a}}(a)\), \(g_{\mathbf{b}}(a)\), \(g_{\mathbf{e}}(a)\), \(g_{\mathbf{d}}(a)\) and \(g_{\mathbf{e}}(a)\)
\(g_{\mathbf{d}}(a)\), \(m_{0\to 1}\) is the value we multiply by \(a[0]\) to obtain \(a^{\prime}[1]\)). So, the inputs are directly obtained from the distribution mean for every state \(s_{t}\) in the latest batch (\(\overline{a}_{t}[0]\)), and the outputs are obtained from the distribution mean for all \(f_{\mathbf{d}}(s_{t})\) (\(\overline{a}_{\mathbf{d},t}^{\prime}[1]\)).
For relation (II), symmetry plane \(\mathbf{c}\) requires limb 1 to mirror limb 0 (inverting the rotation direction), yielding \(a_{t}^{\prime}[1]=\mathrm{G}(a_{t}[0])\). Function \(\mathrm{G}:\mathrm{IR}\rightarrow\mathrm{IR}\) expresses the local relation, which is a reflection of \(\mathrm{F}\) about the vertical axis, such that \(\mathrm{G}(a_{t}[0])=\mathrm{F}(-a_{t}[0])\). To fix this discrepancy, the sign of inputs for the second local relation in Table I is reversed.
Relation (III) is similar to relation (I) except that the direction is \(1\to 0\) and thus, the inputs and outputs are swapped. Finally, relation (IV) requires an inversion of inputs and outputs and a sign reversal for the inputs due to the local reflection.
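Assuming the proportional (bias-free) linear form of the relation, the following sketch illustrates how the four local variations of Table I could be assembled into a single dataset for pair (0,1). The array names are hypothetical, and the attribution of relation (III) to \(g_{\mathbf{e}}\) is an assumption consistent with Fig. 7.

```python
import numpy as np

def build_pair_dataset(mu, mu_c, mu_d, mu_e):
    """Dataset for fitting F in a'[1] = F(a[0]) for pair (0, 1), per Table I.

    mu    : (B, 3) policy means for the explored states s_t
    mu_c  : (B, 3) policy means for the reflected states f_c(s_t)
    mu_d  : (B, 3) policy means for the rotated states f_d(s_t)
    mu_e  : (B, 3) policy means for the rotated states f_e(s_t)
    """
    xs, ys = [], []
    # (I)  g_d holds a 0 -> 1 relation: use it directly
    xs.append(mu[:, 0]);     ys.append(mu_d[:, 1])
    # (II) g_c mirrors limb 0 onto limb 1: reverse the sign of the inputs
    xs.append(-mu[:, 0]);    ys.append(mu_c[:, 1])
    # (III) a 1 -> 0 relation (assumed to come from g_e): swap inputs and outputs
    xs.append(mu_e[:, 0]);   ys.append(mu[:, 1])
    # (IV) g_c also holds a mirrored 1 -> 0 relation: swap and reverse input signs
    xs.append(-mu_c[:, 0]);  ys.append(mu[:, 1])
    return np.concatenate(xs), np.concatenate(ys)
```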
#### VIII-A2 Generate dataset for singles
Symmetry transformations \(g_{\mathbf{a}}(a)\), \(g_{\mathbf{b}}(a)\) and \(g_{\mathbf{c}}(a)\) also contain reflexive relations (singles), i.e., relations from and to the same action dimension. In Fig. 7, singles have an action multiplier of \(-1\), which means that in the symmetric state, the symmetric action consists in rotating the same limb but in the opposite direction. One limitation of reflexive relations is that they must be characterized by involutory functions to avoid the complexity of state dependency. This notion is better understood with a practical example.
Fig. 8 denotes a practical use case for symmetry fitting, where limb 0 of the previously introduced 2D robot is subjected to a constant force pointing to the weight. Note that the weight is just a representation of the perturbation, which can have multiple sources, including the environment or robot defects.
On the left side of Fig. 8 is represented state \(s\). Assume that limb 0 has a relatively short range of motion and the perturbation manifests as a constant torque of +3 being applied to limb 0 (positive torque means clockwise direction). Let \(a[0]=-8\) be the optimal action in \(s\) for limb 0 and \(a[0]=2\) the optimal action in the symmetric state \(f_{\mathbf{a}}(s)\). Function \(g_{\mathbf{a}}[0](a)=-a[0]-6\) is a linear approximation of the symmetric transformation for the corresponding action dimension. This function is symmetric across the line \(y=x\) and, consequently, involutory. This means that for any symmetry plane and any state \(g_{\mathbf{a}}(g_{\mathbf{a}}(a))[0]=a[0]\).
A linear function of the form \(y=mx+b\) is involutory if \(m=-1\) or \(m=1\wedge b=0\). Fig. 9 presents another practical use case for symmetry fitting, where the symmetric action transformation depends on the position of limb 0 (\(s[0]\)). The diagonal grid in Fig. 9 is a high friction area relative to the robot's body, where limb 0 must apply twice as much torque to overcome the extra friction. Therefore, when the limb is below symmetry plane \(\mathbf{a}\), or more formally, the angular position of limb 0 is positive (\(s[0]>0\)), the symmetric transformation is given by \(g_{\mathbf{a}}[0](a)=-2a[0]\). The rotation direction is inverted and the torque is multiplied by 2 to counteract the friction and obtain a symmetric outcome. However, when \(s[0]<0\), the inverse function is needed \(g_{\mathbf{a}}[0](a)=-0.5a[0]\).
Since \(y=-2x\) is not involutory, a 3D piecewise function is required, as defined in the bottom of Fig. 9. Recall that \(g\) is a function of state-action pairs \((s,a)\), even though the implicit state \(s\) is omitted, as mentioned in Section II-A. Considering a problem where the ground truth is unknown (e.g. a high friction area of unknown location or shape), fitting a symmetry transformation would require the algorithm to learn the number of sub-functions, the parameters of each sub-function and the associated subdomains. The possible ramifications of this problem are outside the scope of this paper.
Fig. 8: Practical use case for symmetry fitting, where the symmetric action transformation can be approximated by a linear involutory function, that does not depend on the environment state
Fig. 9: Practical use case for symmetry fitting, where the symmetric action transformation is dependent on the environment state because it is not characterized by an involutory function
Therefore, to avoid the complexity of state dependency, symmetry transformations for singles are limited to involutions. The dataset for singles is a simplified version of Table I since reflexive relations have a unique direction (e.g. \(0\to 0\)). Moreover, linear transformations are a special case that reduces the number of local relations to one. As aforementioned, involutory linear functions of the form \(y=mx+b\) are constrained by \(m=1\wedge b=0\) or \(m=-1\). The former constraint has no learnable parameter, so it is not included in the dataset. The latter defines a negative relation and can be included in the dataset to learn parameter \(b\). Since there are no positive learnable relations, there is no need to perform reflections about the vertical axis (reversing the sign of inputs), thus reducing the effective number of local relations to one.
#### VIII-A3 Optimization
A possible solution for this problem is to use the method of least squares to fit an arbitrary function F parameterized by \(\nu\) to the dataset
\[\operatorname*{arg\,min}_{\nu}\sum_{i=1}^{U}\big{(}y_{i}-\mathrm{F}_{\nu}(x_{i })\big{)}^{2}, \tag{27}\]
where \(U\) is the number of input-output pairs extracted from the policy. If F is a linear function, the method reduces to ordinary least squares, which has a closed-form analytical solution
\[m,b =\operatorname*{arg\,min}_{m,b}\sum_{i=1}^{U}\big{(}y_{i}-mx_{i}- b\big{)}^{2}, \tag{28}\] \[m =\frac{U(\sum_{i}x_{i}y_{i})-(\sum_{i}x_{i})(\sum_{i}y_{i})}{U( \sum_{i}x_{i}^{2})-(\sum_{i}x_{i})^{2}},\] (29) \[b =\frac{(\sum_{i}y_{i})-m(\sum_{i}x_{i})}{U}. \tag{30}\]
For singles, only parameter \(b\) can be learned in linear functions, simplifying the solution
\[b =\operatorname*{arg\,min}_{b}\sum_{i=1}^{U}\big{(}y_{i}+x_{i}-b \big{)}^{2}\] \[=\frac{(\sum_{i}x_{i})+(\sum_{i}y_{i})}{U}. \tag{31}\]
For pairs, having a bias component \(b\) can be useful in certain scenarios, as will be discussed in Section XIII. However, for the sake of completeness, fixing \(b=0\) reduces (28) to
\[m =\operatorname*{arg\,min}_{m}\sum_{i=1}^{U}\big{(}y_{i}-mx_{i} \big{)}^{2}\] \[=\frac{(\sum_{i}x_{i}y_{i})}{(\sum_{i}x_{i}^{2})}. \tag{32}\]
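A compact sketch of the closed-form fits (29)-(32), with hypothetical helper names:

```python
import numpy as np

def fit_pair(x, y, learn_bias=True):
    """Ordinary least squares fit of a'[y] = m * a[x] + b, Eqs. (29)-(30) and (32)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    u = len(x)
    if learn_bias:
        m = (u * np.sum(x * y) - np.sum(x) * np.sum(y)) / (u * np.sum(x**2) - np.sum(x)**2)
        b = (np.sum(y) - m * np.sum(x)) / u
    else:  # bias fixed at zero, Eq. (32)
        m, b = np.sum(x * y) / np.sum(x**2), 0.0
    return m, b

def fit_single(x, y):
    """Closed-form fit of the involutory relation y = -x + b, Eq. (31)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return (np.sum(x) + np.sum(y)) / len(x)
```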
#### VIII-A4 Cycles
The described optimization problem can have additional restrictions if there are circular relations of 3 or more pairs (hereinafter referred to as cycles). In the 2D robot example, the pairs of symmetric action dimensions {(0,1),(0,2),(1,2)} form a single cycle (0 to 1 to 2), although it can also be represented as (2 to 1 to 0) since all relation pairs are bidirectional. One important property of cycles is that the composition of all functions in a cycle is an identity transformation \(\mathrm{id}:\mathrm{I\!R}\rightarrow\mathrm{I\!R}\), such that \(\mathrm{id}(x)=x\), e.g., \(\mathrm{F}_{0\to 1}\circ\mathrm{F}_{1\to 2}\circ\mathrm{F}_{2 \to 0}=\mathrm{id}\). Plugging this equality constraint into (27), and generalizing to an example with \(P\) pairs and \(C\) cycles yields
\[\operatorname*{arg\,min}_{\nu}\sum_{j=1}^{P}\sum_{i=1}^{U_{j}}\big{(}y_{i}-\mathrm{F}_{\nu_{j}}(x_{i})\big{)}^{2}, \tag{33}\] subject to \[\mathrm{F}_{\nu_{c,1}}\circ\mathrm{F}_{\nu_{c,2}}\circ\dots\circ\mathrm{F}_{\nu_{c,n_{c}}}=\mathrm{id},\qquad c=1,\dots,C,\] where \(\nu_{c,1},\dots,\nu_{c,n_{c}}\) denote the parameters of the \(n_{c}\) functions along cycle \(c\).
The pair biases \(b_{0\to 1}\), \(b_{0\to 2}\) and \(b_{1\to 2}\) are assumed to be initially zero, but these values can also be learned, influencing 12 action biases. Analogously, learning reflexive relations (\(b_{0\to 0}\), \(b_{1\to 1}\) and \(b_{2\to 2}\)) directly influences the remaining 3 action biases.
#### VIII-A6 Function weight
For the 2D robot example, there are 5 symmetric action transformations, each representing a vector of 3 functions (1 per limb). At an early optimization stage, the policy is mostly random and, consequently, the symmetry fitting targets tend to have high standard deviation \(\sigma\). As the convergence starts, \(\sigma\) is progressively reduced, albeit at different rates for distinct functions. Furthermore, a constant high \(\sigma\) (in successive batches of explored states) may indicate that a given function is not a good estimator for the symmetric transformation. For these reasons, when targets have high \(\sigma\), the respective function estimator has less importance in the symmetry loss.
For a given estimator function parameterized by vector **x**, let \(\mu_{x_{i}}\) and \(\sigma_{x_{i}}\) be the mean and standard deviation of the target value for parameter \(x_{i}\), respectively, in the last \(k_{\text{I}}\) iterations, where \(k_{\text{I}}\) is a hyperparameter. The metric used to assess the recent target spread is the absolute value of the coefficient of variation, based on the definition of Everitt and Skrondal [36], \(c_{v}=\sigma/|\mu|\). This metric is useful because it expresses the spread relative to the magnitude of the mean. However, to avoid instability near \(\mu=0\), a small constant (0.1) is added to the denominator of the spread metric in the function weight \(w_{F}\) equation
\[w_{F}=\text{H}_{\text{F}}(\frac{\sigma}{|\mu|+0.1}), \tag{37}\]
where \(\text{H}_{\text{F}}:\text{I}\text{R}_{0}^{+}\rightarrow[0,1]\) is a user-defined function that transforms the spread metric into a weight between 0 and 1. The function weight \(w_{F}\) will be used later by the symmetry loss in (57).
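A possible implementation of (37), where the choice of \(\text{H}_{\text{F}}\) shown here (an exponential decay) is only an illustrative assumption:

```python
import numpy as np

def function_weight(recent_targets, h_f=lambda c: np.exp(-c)):
    """Weight of one transformation-function estimator, Eq. (37).

    recent_targets : targets of one learnable parameter over the last k_I iterations
    h_f            : user-defined map from the spread metric to [0, 1]
                     (the exponential decay used here is only an illustrative choice)
    """
    t = np.asarray(recent_targets, float)
    spread = t.std() / (np.abs(t.mean()) + 0.1)  # regularized coefficient of variation
    return float(h_f(spread))
```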
## IX Adaptive Symmetry Learning: Loss Function
The Adaptive Symmetry Learning loss function computes a novel loss that tackles problems such as neutral states or disadvantageous updates. Additionally, symmetry transformation estimators are used according to their reliability, as defined above by \(w_{F}\). The algorithm is characterized by the following equation:
\[L^{ASL}(\theta,\omega) =\hat{\mathbb{E}}_{t}\left[w_{\pi}\cdot L_{t}^{\pi}(\theta)+w_{V }\cdot L_{t}^{V}(\omega)\right], \tag{38}\] \[\text{with}\quad L^{\pi}(\theta) =-\hat{\mathbb{E}}_{t}\left[r_{t}(\theta)\cdot\psi_{t}\cdot\phi_ {t}\right],\] (39) \[L^{V}(\omega) =\hat{\mathbb{E}}_{t}\left[(V_{\omega}(f(s_{t}))-V_{t}^{\text{ true}})^{2}\cdot\psi_{t}\cdot\phi_{t}\right], \tag{40}\]
where \(L^{\pi}\) and \(L^{V}\) are the policy and value losses for symmetry, and \(w^{\pi}\) and \(w^{V}\) are the respective weight factors. The value loss reduces the difference between the value function (for the symmetric state), and the same value target used by PPO. The policy loss promotes a policy symmetrization with a restricted update. This behavior is partly achieved through a dead zone gate \(\psi\) and a value gate \(\phi\), both based on user-defined thresholds. The former rejects neutral states or those that are near neutrality (see Section IX-B), and the latter avoids changing the symmetric action if it is more valuable than the current action (see Section IX-C).
However, the main progression restriction is imposed by the ratio \(r_{t}\), which works similarly to PPO's ratio (9), except that the restriction is applied on the target action \(\tau_{t}\) instead of the ratio itself, and the policy's standard deviation \(\sigma\) is not actively modified:
\[r_{t}(\theta) =\frac{\pi_{\theta}(\tau_{t},f(s_{t})\mid\mu_{\theta},\sigma)}{\pi_{\theta_{old}}(\tau_{t},f(s_{t})\mid\mu_{\theta_{old}},\sigma)}, \tag{41}\] \[\tau_{t} =\text{clip}(g(\mu_{\theta_{\text{out}}}(s_{t})),\overline{a}_{t}^{\prime}-\Delta\mu,\overline{a}_{t}^{\prime}+\Delta\mu), \tag{42}\]
where \(\hat{A}_{t}\) is PPO's advantage function estimator; \(a_{t}\) is the action sampled at time \(t\); \(\overline{a}_{t}\) (5) and \(\overline{a}_{t}^{\prime}\) (6) are the mean of the distribution when \(a_{t}\) was sampled for states \(s_{t}\) and \(f(s_{t})\), respectively; \(g(\mu_{\theta_{\text{out}}}(s_{t}))\) is the mean of the distribution after the last mini-batch update; and \(\Delta\mu\) is a user-defined restriction threshold explained in the following subsection.
The mean \(g(\mu_{\theta_{\text{out}}}(s_{t}))\) represents the most recent action for state \(s_{t}\). It is similar to \(g(\mu_{\theta_{old}}(s_{t}))\) in the sense that it is a non-trainable constant, but it is updated during the optimization stage, at every mini-batch update of every train epoch. In (42), \(g(\mu_{\theta_{\text{out}}}(s_{t}))\) could be replaced by the explored action \(a_{t}\) in the cases where \(a_{t}\) has a positive advantage estimate (\(\hat{A}_{t}\geq 0\)), since PPO would try to increase \(a_{t}\)'s relative probability. However, doing that would subvert PPO's trust region by allowing the symmetry update to exploit the sampled action in a greedy way. Moreover, the objective of the symmetry update is not to explore, but rather to motivate symmetry in a nondestructive way.
As an example, consider a bidirectional variable-speed conveyor belt that is trying to center an item, and the state indicates whether the item is on the left or the right side. Assume that initially the neural network outputs a speed of -5 for both cases, leading every item to eventually fall to the left. After some exploration, the policy finds that for an item on the right, a speed of \(a_{t}\sim\pi_{\theta}(s_{t})=-2\) is advantageous (as it increases the time the item spends near the center). Let \(\Delta\mu=0.5\). The target \(\tau_{t}\) becomes \(\text{clip}(5,-5-0.5,-5+0.5)=-4.5\). Ideally, after every mini-batch update, \(\mu_{\theta_{\text{out}}}(s_{t})\) gets closer to -2, leading to a target of \(\text{clip}(2,-5-0.5,-5+0.5)=-4.5\). In this scenario the target does not change because the action for the symmetric state is too far from \(g(\mu_{\theta_{\text{out}}}(s_{t}))\). Large symmetry updates could harm the policy by changing the neural network too much.
If the target is exceeded (e.g. \(\mu_{\theta}(f(s_{t}))=-4.4\)), the gradient of \(r_{t}(\theta)\) will try to move the policy back to the target. In an analogous situation, PPO would only stop motivating further progress towards its sampled target. This harsh symmetry restriction has the purpose of breaking momentum build-ups, as explained in the next subsection. Now assume that after hours of training, the NN outputs -10 and 10 for an item on the right and left sides, respectively. Yet, it finds that 11 is advantageous for the left side item. The target \(\tau_{t}\) becomes \(\text{clip}(-10,-10-0.5,-10+0.5)=-10\), which produces no difference. However, as \(\mu_{\theta}(s_{t})\) increases to 10.1 along several
mini-batch updates, the target decreases to \(\tau_{t}=-10.1\). So, the symmetric action closely follows the sampled action until it reaches -10.5. If the symmetric limitation is small enough, PPO's loss function will dominate the update direction, allowing for asymmetric updates, if required.
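The conveyor-belt numbers above can be reproduced with a small sketch of the restricted target (42), assuming, for this example, that the action symmetry \(g\) is simple negation (consistent with the values quoted in the text):

```python
import numpy as np

def symmetry_target(mu_recent_s, a_bar_sym_old, g_act, delta_mu):
    """Restricted symmetry target, Eq. (42): clip g(mu(s_t)) around the old
    symmetric mean a_bar'_t = mu_theta_old(f(s_t))."""
    return np.clip(g_act(mu_recent_s), a_bar_sym_old - delta_mu, a_bar_sym_old + delta_mu)

g = lambda a: -a          # conveyor belt: the symmetric action is the opposite speed
delta_mu = 0.5

# early training: both sides output -5; the advantageous action -2 was just found
print(symmetry_target(-5.0, -5.0, g, delta_mu))   # clip(5, -5.5, -4.5)    -> -4.5
print(symmetry_target(-2.0, -5.0, g, delta_mu))   # clip(2, -5.5, -4.5)    -> -4.5
# late training: outputs are -10 / 10 and the left-side mean creeps up to 10.1
print(symmetry_target(10.1, -10.0, g, delta_mu))  # clip(-10.1, -10.5, -9.5) -> -10.1
```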
The next subsection delves into the motivation and design of the restricted update, and one of its most critical parameters -- the distribution's mean shift \(\Delta\mu\). It also provides context for the standard deviation preservation in (41), and the momentum build up issues. The following subsections derive the dead zone and value gates, \(\psi\) and \(\phi\) from (39); present a condensed implementation of the loss function and summarize all ASL's hyperparameters.
### _Restricted update_
Both PPO and TRPO [37] apply the trust region concept as depicted in Fig. 1. During policy updates, the target relative probability of a certain action is restricted by a soft limit imposed by the clipping parameter \(\epsilon\). As an example, consider an explored action with a positive estimated advantage (\(\hat{A}_{t}>0\)), and \(\epsilon=0.3\). The loss function gradient will motivate a relative probability growth of 30%, although the new relative probability can surpass that soft limit due to the influence of other actions in the update batch, or the optimizers used to update network weights.
As the policy follows a normal distribution, there are 2 parameters that can influence the relative probability for a given action: the mean \(\mu_{\theta}\) and the standard deviation \(\sigma\). The mean is parameterized by a neural network and altering it means shifting the preferred action for a given state, while the standard deviation expands or narrows the exploration around said action. For symmetry, there are three key differences in relation to the original concept of trust region:
1. For a given time step \(t\) there is always exactly one desired action, so the objective is to always increase the relative probability of its symmetric counterpart (except to enforce the trust region as explained in point 3);
2. Since the update direction is known a priori, there is no need for exploration, so it could be risky for the symmetry loss to modify the policy's standard deviation \(\sigma\), as it could negatively affect the main algorithm. The Proximal Symmetry Loss's term \(\pi_{\theta_{old}}(a_{t}\,|\,s_{t})\) in (21) attempts to mitigate this issue by limiting the relative probability of the symmetric action. However, ideally, the symmetry loss should not actively modify \(\sigma\);
3. While using common momentum-based optimizers such as Adaptive Moment Estimation (Adam) [38], the trust region applied to constant targets is inherently hard to control, as the constant update direction can build up momentum and easily override asymmetric exploration. There are at least three ways of overcoming this issue: not using momentum-based optimizers, which comes at a heavy cost; reduce the symmetry weight so that the noise created by the total loss is enough to prevent cyclic patterns (as applied by the PSL); or create a fixed target at the trust region border, which actively reduces the relative probability of the symmetric action if it exceeds the region, instead of simply not motivating further progress, as does PPO. This last solution is used by ASL in (41) and (42).
It is important to control the influence of symmetry in exploration and harmonize the final loss function. To understand the best approach, we need to analyze the average PPO update for a single time step, considering the above mentioned restrictions. Fig. 10 shows the relation between exploration and target policy update for a single time step. Action \(a=\mu_{\theta_{i}}+k_{\text{e}}\sigma\) has a positive estimated advantage and follows a univariate normal distribution with fixed standard deviation. Exploration parameter \(k_{\text{e}}\) defines the distance of the sampled action to the initial mean of the distribution \(\mu_{\theta_{i}}\) in relation to \(\sigma\). The gradient of PPO's loss function motivates the initial relative probability \(p_{i}\) of action \(a_{t}\) to increase until \(p_{i}\cdot\epsilon\). Assuming an ideal update, the neural network shifts the distribution's mean to \(\mu_{\theta_{f}}\). The relation between \(k_{\text{e}}\) and the distribution mean shift \(\Delta\mu\) will allow us to replicate that behavior for the symmetry update.
However, this example deals with univariate distributions, and typical reinforcement learning environments have high-dimensional action spaces. In PPO, the clipping parameter \(\epsilon\) limits the relative probability of the action as a whole. Considering the notion of joint probability for independent events, each dimension has an average soft limit
\[\xi=\sqrt[n]{1+\epsilon}-1, \tag{43}\]
where \(n\) is the number of dimensions. Although the PPO loss function does not require this dimension-level specification, the explored action uses an independent standard deviation value for each dimension, so it makes sense to also establish individual targets for each dimension.
The following derivation considers the relation between \(k_{\text{e}}\) and \(\Delta\mu\) for a single dimension. Let the probability density function (PDF) for a univariate normal distribution be \(f(x|\mu,\sigma)=\beta e^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^{2}},\) with \(\beta=1/(\sigma\sqrt{2\pi})\). The distribution mean shift \(\Delta\mu\) can be obtained as the distance from \(a_{t}\) to the largest \(a\) for which the PDF of \(\mathcal{N}(\mu_{\theta_{i}},\sigma)\) is \(p_{f}\):
Fig. 10: Relation between: (1) distance of an advantageous explored action (\(a_{t}\)) to the distribution mean before the update (\(\mu_{\theta_{i}}\)) and (2) the progress of the distribution. Assume an ideal update with the trust region and a policy that follows a univariate normal distribution with fixed standard deviation.
\[p_{i} =f(\mu_{\theta_{i}}+k_{\mathrm{e}}\sigma\,|\,\mu_{\theta_{i}},\sigma)=\beta e^{-\frac{1}{2}k_{\mathrm{e}}^{2}}, \tag{44}\] \[p_{f} =\min((1+\xi)p_{i},\beta), \tag{45}\] \[\Delta\mu =\mu_{\theta_{i}}+k_{\mathrm{e}}\sigma-f^{-1}(p_{f}|\mu_{\theta_{i}},\sigma) =k_{\mathrm{e}}\sigma-\sigma\sqrt{-2\cdot\ln\frac{\min((1+\xi)\beta e^{-\frac{1}{2}k_{\mathrm{e}}^{2}},\beta)}{\beta}} =\sigma\left(k_{\mathrm{e}}-\sqrt{-2\cdot\ln\min((1+\xi)e^{-\frac{1}{2}k_{\mathrm{e}}^{2}},1)}\right), \tag{46}\]
where \(f^{-1}(x|\mu,\sigma)=\mu+\sigma\sqrt{-2\ln(x/\beta)}\) is the largest solution of the inverse PDF. Fig. 11 shows the plot of \(\Delta\mu\) as a function of \(k_{\mathrm{e}}\), for a typical clipping value \(\epsilon=0.2\)[39] and three action space dimensionalities \(n\in\{4,16,64\}\). For each function represented in Fig. 11, the maximum at \(k_{\mathrm{e}}=\sqrt{-2\ln(1/(1+\xi))}\) separates the linear part on the left from the non-linear part on the right (which corresponds to Fig. 10). The linear case happens when \(\mu_{\theta_{f}}=a_{t}\).
Empirically, restricting the symmetry update such that \(\Delta\mu\geq\sigma\sqrt{-2\ln(1/(1+\xi))}\) serves no purpose, other than to accelerate the symmetry convergence at the beginning of training. Therefore, for the sake of simplicity, the update restriction can be reduced to
\[\Delta\mu=k_{\mathrm{s}}\sigma\sqrt{-2\ln\frac{1}{1+\xi}}. \tag{47}\]
In both cases, the restriction is given in relation to \(\sigma\) and \(\xi\). However, instead of defining the distance of the artificially explored symmetric action through \(k_{\mathrm{e}}\), and then compute the maximum distribution shift, we are defining that shift directly through \(k_{\mathrm{s}}\). Consequently, the stationary point from Fig. 11 is removed and \(\Delta\mu(k_{\mathrm{s}}|\sigma,\xi)\) becomes a linear function, as shown in Fig. 12.
The unit value for \(k_{\mathrm{e}}\) and \(k_{\mathrm{s}}\) also has a different meaning. Choosing \(k_{\mathrm{e}}=1\) means adopting PPO's update behavior for an action sampled at 1 standard deviation from the distribution's mean (\(a=\mu_{\theta_{i}}\pm\sigma\)), assuming a constant \(\sigma\). Alternatively, defining \(k_{\mathrm{s}}=1\) means adopting PPO's update behavior for the action that would yield the largest update, also assuming a constant \(\sigma\). Moreover, defining \(k_{\mathrm{s}}>1\) has an intuitive meaning, since the update magnitude increases monotonically.
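A small sketch of (43) and (47), with hypothetical helper names:

```python
import numpy as np

def per_dim_soft_limit(eps: float, n: int) -> float:
    """Average per-dimension soft limit xi, Eq. (43)."""
    return (1.0 + eps) ** (1.0 / n) - 1.0

def mean_shift(k_s: float, sigma: np.ndarray, eps: float, n: int) -> np.ndarray:
    """Maximum shift of the distribution mean per dimension, Eq. (47)."""
    xi = per_dim_soft_limit(eps, n)
    return k_s * sigma * np.sqrt(-2.0 * np.log(1.0 / (1.0 + xi)))

sigma = np.full(4, 0.4)  # per-dimension standard deviations (illustrative values)
print(mean_shift(k_s=1.0, sigma=sigma, eps=0.2, n=4))
```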
### _Exclusion of neutral states_
As introduced in Section II-D, neutral states and neighboring areas in the state space pose a significant challenge for learning efficient behaviors through symmetric policies. ASL provides the option to reject regions that surround neutral states through a dead zone gate \(\psi_{t}\):
\[\psi_{t}=\begin{cases}1,&\text{if }\frac{1}{n}\sum_{i=1}^{n}\left( \delta_{t,i}\right)>k_{\mathrm{d}}\\ 0,&\text{otherwise}\end{cases}, \tag{48}\] \[\text{with}\quad\delta_{t,i}=\frac{|s_{t,i}-f(s_{t,i})|}{\text{ MAD}_{i}},\] (49) \[\text{and}\quad\text{MAD}_{i}=\frac{\sum_{t=1}^{k_{\mathrm{t}}}| s_{t,i}-\overline{s}_{i}|}{k_{\mathrm{t}}}, \tag{50}\]
where \(\text{MAD}_{i}\) is the mean absolute deviation of state \(s\) for state dimension \(i\), for the last \(k_{\mathrm{t}}\) time steps (typically \(k_{\mathrm{t}}=10\cdot B\), where \(B\) is the batch size). Parameter \(\delta_{t,i}\) represents the relative distance between state \(s\) and its symmetric counterpart \(f(s)\), for a specific time step \(t\) and state dimension \(i\).
The dead zone gate \(\psi_{t}\) is 0 if \(s_{t}\) is relatively close to a neutral state, and 1 otherwise. When \(\psi_{t}=0\) in (39) and (40), the gradients of \(L^{ASL}\) w.r.t. \(\theta\) and \(\omega\) ignore time step \(t\). In practice, the policy is free to learn asymmetric behaviors near neutral states, thus circumventing the problem stated in Section II-D.
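A vectorized sketch of (48)-(50); array shapes and helper names are assumptions:

```python
import numpy as np

def mean_abs_deviation(recent_states):
    """MAD_i of each state dimension over the last k_t time steps, Eq. (50)."""
    return np.mean(np.abs(recent_states - recent_states.mean(axis=0)), axis=0)

def dead_zone_gate(states, sym_states, mad, k_d):
    """Dead zone gate psi_t, Eqs. (48)-(49): reject states close to neutrality.

    states, sym_states : (B, n) batches of s_t and f(s_t)
    mad                : (n,) mean absolute deviations from Eq. (50)
    k_d                : user-defined rejection threshold
    """
    delta = np.abs(states - sym_states) / mad        # relative distance per dimension
    return (delta.mean(axis=1) > k_d).astype(float)  # 1 keeps the sample, 0 rejects it
```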
### _Value gate_
Analogously to PSL's unidirectional update (see Section VI-A), ratio \(r_{t}\) in (41) promotes an adjustment to \(\pi_{\theta}(f(s_{t}))\) without changing \(\pi_{\theta}(s_{t})\). As already concluded, adapting the symmetric action is better than the explored action because the explored action is more likely to be optimal than the symmetric action.
This approach can be improved by comparing the estimated value of the explored and symmetric actions, and then blocking updates when the symmetric action is considerably more valuable. To achieve that purpose, a value gate \(\phi_{t}\) yields 1 to allow an update, or 0 to block it, based on a user defined parameter \(k_{\mathrm{v}}\in(1,+\infty)\), such that
Fig. 11: Plot of the distribution’s mean shift \(\Delta\mu\) as a function of \(k_{\mathrm{e}}\) with \(\xi=\sqrt[n]{1+0.2}-1\) and \(n\in\{4,16,64\}\)
Fig. 12: Plot of the distribution’s mean shift \(\Delta\mu\) as a function of \(k_{\mathrm{s}}\) with \(\xi=\sqrt[n]{1+0.2}-1\) and \(n\in\{4,16,64\}\)
\[\phi_{t} =\begin{cases}1,&\text{if }v_{t}>V_{\omega}(f(s_{t}))\\ 0,&\text{otherwise}\end{cases}, \tag{51}\] \[\text{with}\quad v_{t} =\begin{cases}k_{\text{v}}\cdot V_{\omega}(s_{t}),&\text{if }V_{\omega}(s_{t})\geq 0\\ \frac{V_{\omega}(s_{t})}{k_{\text{v}}},&\text{otherwise}\end{cases}. \tag{52}\]
Although setting \(k_{\text{v}}=1\) would obviate the need for (52), simplifying the condition in (51) to \(V_{\omega}(s_{t})>V_{\omega}(f(s_{t}))\), this proved counterproductive in early tests, since many time steps were being blocked due to inaccuracies in the value function estimator. Therefore, to reduce the gate sensitivity, it is recommended to use values for \(k_{\text{v}}\) well above 1 (e.g. 1.5). Since \(k_{\text{v}}\) is a proportional factor, it is necessary to distinguish between positive and negative values of \(V_{\omega}(s_{t})\), as illustrated in Fig. 13.
In both cases, the update is blocked (\(\phi_{t}=0\)) when the value of the symmetric state is larger than \(v_{t}\), the only difference being the way \(v_{t}\) is calculated. Using a proportional factor to compute \(v_{t}\), instead of adding a constant, is important to deal with outlier states with unusually small or large values.
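A direct sketch of the value gate (51)-(52); names are hypothetical:

```python
import numpy as np

def value_gate(v_s, v_sym, k_v):
    """Value gate phi_t, Eqs. (51)-(52): block the symmetry update when the
    symmetric state is estimated to be considerably more valuable.

    v_s   : V_omega(s_t), values of the explored states
    v_sym : V_omega(f(s_t)), values of the symmetric states
    k_v   : user-defined sensitivity factor, well above 1 (e.g. 1.5)
    """
    v_t = np.where(v_s >= 0, k_v * v_s, v_s / k_v)   # scale away from zero by k_v
    return (v_t > v_sym).astype(float)
```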
### _Implementation_
This subsection presents a condensed implementation of ASL's loss function, compiling the final form of all previously explained modules. Rewriting (38) as an empirical average over a finite batch of \(B\) samples for \(N\) symmetries yields:
\[L^{ASL}(\theta,\omega) =\sum_{j=1}^{N}\frac{1}{B}\sum_{t=1}^{B}\left[w_{j}^{\pi}\cdot L_ {j,t}^{\pi}(\theta)+w_{j}^{V}\cdot L_{j,t}^{V}(\omega)\right], \tag{53}\] \[\text{with}\quad L_{j,t}^{\pi}(\theta) =-r_{j,t}(\theta)\cdot\psi_{j,t}\cdot\phi_{j,t},\] (54) \[L_{j,t}^{V}(\omega) =(V_{\omega}(f_{j}(s_{t}))-V_{t}^{\text{true}})^{2}\cdot\psi_{j,t }\cdot\phi_{j,t}. \tag{55}\]
The ratio \(r_{j,t}\) between two probability densities can be simplified, considering that both deal with the same standard deviation variable \(\sigma\), and by applying the quotient rule for exponents with the same base:
\[r_{j,t}(\theta) =\frac{\pi_{\theta}(\tau_{j,t},f_{j}(s_{t})\mid\mu_{\theta},\sigma)}{\pi_{\theta_{old}}(\tau_{j,t},f_{j}(s_{t})\mid\mu_{\theta_{old}},\sigma)}\] \[=\frac{\frac{1}{\sqrt{(2\pi)^{n}}\prod_{i}\sigma_{i}}\cdot\exp\left(-\frac{1}{2}\sum_{i=1}^{n}\frac{(\tau_{j,t,i}-\mu_{\theta_{i}}(f_{j}(s_{t})))^{2}}{\sigma_{i}^{2}}\right)}{\frac{1}{\sqrt{(2\pi)^{n}}\prod_{i}\sigma_{i}}\cdot\exp\left(-\frac{1}{2}\sum_{i=1}^{n}\frac{(\tau_{j,t,i}-\overline{a}^{\prime}_{j,t,i})^{2}}{\sigma_{i}^{2}}\right)}\] \[=\exp\left(\sum_{i=1}^{n}\frac{(\tau_{j,t,i}-\overline{a}_{j,t,i}^{\prime})^{2}-(\tau_{j,t,i}-\mu_{\theta_{i}}(f_{j}(s_{t})))^{2}}{2\cdot\sigma_{i}^{2}}\right), \tag{56}\]
where \(\overline{a}_{j,t}^{\prime}=\mu_{\theta_{old}}(f_{j}(s_{t}))\) and \(n\) is the number of dimensions of each action. Plugging in the function weight \(w_{F}\) defined in (37) for symmetry fitting:
\[r_{j,t}(\theta)=\exp\left(\sum_{i=1}^{n}\frac{(\tau_{j,t,i}-\overline{a}_{j, t,i}^{\prime})^{2}-(\tau_{j,t,i}-\mu_{\theta_{i}}(f_{j}(s_{t})))^{2}}{2\cdot \sigma_{i}^{2}/w_{j,i}^{F}}\right) \tag{57}\]
Target \(\tau_{t}\), distribution mean shift \(\Delta\mu\) and dead zone gate \(\psi_{t}\) can be rewritten for a specific symmetry index \(j\) with minimum changes:
\[\tau_{j,t} =\text{clip}(g_{j}(\mu_{\theta_{\text{out}}}(s_{t})),\overline{a}_{j,t}^{\prime}-\Delta\mu_{j},\overline{a}_{j,t}^{\prime}+\Delta\mu_{j}), \tag{58}\] \[\Delta\mu_{j} =k_{j}^{\text{s}}\sigma\sqrt{-2\ln\frac{1}{1+\xi}}, \tag{59}\] \[\psi_{j,t} =\begin{cases}1,&\text{if }\frac{1}{n}\sum_{i=1}^{n}\left(\delta_{j,t,i}\right)>k_{j}^{\text{d}}\\ 0,&\text{otherwise}\end{cases}, \tag{60}\] \[\text{with}\quad\delta_{j,t,i}=\frac{|s_{t,i}-f_{j}(s_{t,i})|}{\text{MAD}_{i}}, \tag{61}\] \[\text{and}\quad\text{MAD}_{i}=\frac{\sum_{t=1}^{k_{\text{t}}}|s_{t,i}-\overline{s}_{i}|}{k_{\text{t}}}, \tag{62}\]
where \(\xi\) is defined in (43). Value gate \(\phi_{t}\) can also be easily adapted to a specific symmetry index \(j\) but the auxiliary variable \(v_{t}\) can be improved for parallel computation by replacing the branch with an equivalent equation:
\[\phi_{j,t} =\begin{cases}1,&\text{if }v_{t}>V_{\omega}(f_{j}(s_{t}))\\ 0,&\text{otherwise}\end{cases}, \tag{63}\] \[\text{with}\quad v_{t} =\alpha\cdot V_{\omega}(s_{t})+(k_{\text{v}}-\alpha)|V_{\omega}(s_ {t})|, \tag{64}\]
where \(\alpha\) is an auxiliary constant equal to \(({k_{\text{v}}}^{2}+1)/(2k_{\text{v}})\).
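The equivalence between the piecewise form (52) and the branch-free form (63)-(64) can be verified numerically with a short sketch:

```python
def v_piecewise(v, k_v):
    """Reference piecewise form of v_t, Eq. (52)."""
    return k_v * v if v >= 0 else v / k_v

def v_branchless(v, k_v):
    """Branch-free form of v_t, Eqs. (63)-(64)."""
    alpha = (k_v ** 2 + 1) / (2 * k_v)
    return alpha * v + (k_v - alpha) * abs(v)

k_v = 1.5
for v in (-3.0, -0.2, 0.0, 0.4, 7.0):
    assert abs(v_piecewise(v, k_v) - v_branchless(v, k_v)) < 1e-12
```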
### _Hyperparameters_
The hyperparameters described in previous subsections are summarized in Table II. Most parameters are vectors with one scalar per symmetry operation. The exceptions are system-wide parameters that include \(\mathbf{g}(a)\) form, \(\text{H}_{\text{U}}\), \(\text{H}_{\text{F}}\) and \(k_{\text{t}}\).
Fig. 13: Value gate \(\phi_{t}\) behavior when the explored state value \(V_{\omega}(s_{t})\) is negative (left) and positive (right). The difference lies in how the auxiliary variable \(v_{t}\) is obtained.
## X Evaluation Environment
The experiments for this paper were executed in PyBullet -- a physics simulator based on the Bullet Physics SDK [40]. An ant robot based on the AntBulletEnv-v0 environment was chosen as the base to develop several learning scenarios. This robot model was selected due to the type and number of symmetry transformations under which it is invariant.
The ant robot, shown in Fig. 14, is a four-legged robot with 8 joints and actuators. Joint 0 controls the rotation of one leg around the robot's \(z\)-axis, and joint 1 controls the knee rotation for the same leg. The remaining joints control the other legs in an equivalent way. The robot has 4 planes of symmetry: xz-plane, yz-plane, y=x and y=-x. Additionally, it has 4-fold rotational symmetry around the z-axis. More specifically, the robot is invariant under rotations of 90, 180, 270 or 360 deg around the z-axis, although the last rotation is an identity operation.
The state space for all ant-based scenarios is listed in Table III. The first observation indicates the relative height of the robot in comparison to the height at the beginning of an episode, which is fixed. The second observation is a unit vector parallel to the ground that indicates the goal direction, relative to the robot. After the relative linear velocity of the robot, roll and pitch, comes the vector \([p_{0},s_{0},p_{1},s_{1},...,p_{7},s_{7}]\), where \(p_{x}\) and \(s_{x}\) are the position and speed of joint \(x\). Finally, a binary value indicates whether each foot is touching the ground.
The symmetry transformations are determined as explained in Section VIII-A. Consider the xz-plane depicted in Fig. 14. The respective action transformation is composed of 8 _action indices_ and 8 _action multipliers_, as listed in the top two rows of Fig. 15. Note that a positive action in all dimensions will cause knee flexion at joints 1 and 7, and knee extension at joints 3 and 5. Although for the xz-plane this results in direct proportion, for the yz-plane the relation is inverted.
The last two rows in Fig. 15 correspond to the _observation indices_ and _observation multipliers_. The first column corresponds to the robot height as presented in Table III. The height is invariant to any of the symmetry planes or rotations, so its index corresponds to itself, and the multiplier is 1. For the unit goal vector \((y,x)\), \(y\) must be inverted and \(x\) is invariant. Therefore, the indices are kept unchanged (1 and 2), and the multipliers are -1 and 1, respectively. This logic can be applied to all the elements of Table III to fill the remaining columns of _observation indices_ and _observation multipliers_. See Appendix A for the remaining symmetry transformations.
## XI Evaluation Scenarios
In all scenarios, the main objective is to go in the direction of the goal. The original reward function from AntBulletEnv-v0 was not changed, and it encompasses: stay-alive bonus, progress towards the objective, electricity cost, joints-at-limit cost to discourage stuck joints, and foot collision cost. An episode ends if the ant's torso makes contact with the ground or after 1000 steps.
### _Main objective_
Contrary to training a single goal direction like the original AntBulletEnv-v0, the proposed scenarios define 8 possible directions with a fixed angular distance of 45 deg as shown in
Fig. 14: AntBulletEnv-v0’s robot has 8 joints/actuators. Even joints rotate around the robot’s \(z\)-axis, and odd joints (ant’s knees) rotate around a vector orthogonal to the z-axis. The arrows show the positive rotation direction for each joint. The robot has 4 planes of symmetry, xz-plane, yz-plane, y=x and y=-x, and 4-fold rotational symmetry around the z-axis.
Fig. 15: xz-plane symmetry transformation for AntBulletEnv-v0
Fig. 16. For a batch of episodes, the goal generation is sequential (0 to 7) and not random, in order to reduce the variability while training and also to reduce biases when periodically evaluating the policy. In our environment implementation (see Section XII-A), to minimize code modifications, we changed the initial robot orientation instead of the goal location.
During preliminary experiments the robot had a tendency to move towards the goal with one front leg, one back leg, and 2 side legs. In practice, if the goals were 1, 3, 5 or 7, the robot would move without rotating its body, while for other goals, it would first rotate until one of the legs was pointing to the goal. To prevent this bias, in addition to the terminal conditions applied by the original AntBulletEnv-v0 environment, the episode is terminated if the robot's orientation deviates more than 25 deg from its initial value. Fig. 17 presents sequential frames of example runs where the robot is pursuing goal 6 (bottom) and 7 (top).
### _Controlled scenarios_
The first set of proposed scenarios is listed in Table IV. Scenario **A1.1** assumes that no perturbation is introduced by the robot, environment, reward function or control mechanism. The action modifier vector from the second column is multiplied by the policy's output (a vector of ones means no modification). The third column indicates the range to which the action is clipped after being modified. For Scenario **A1.1**, all action dimensions are clipped to [-1,1] before being sent to the robot. The last column indicates the goals used during training, where the index corresponds to the goal numbering denoted in Fig. 16. During evaluation, all scenarios presented in Table IV are tested for goals 0 to 7.
Scenario **A1.2** is similar in all aspects, except the training goals. While training, the robot is only exposed to goals 0 and 1 but, during evaluation, it is still tested on all goals (0-7). In theory, according to the ant robot's symmetry planes and rotations, symmetry-aware algorithms should be able to transpose the knowledge from goal 0 to (2, 4, 6) and from goal 1 to (3, 5, 7).
The remaining scenarios from Table IV simulate an artificial perturbation in the action controller. In **A2**, each action dimension is multiplied by a value in the vector [0.65,0.75,...,1.35]. **A2.1** then clips the values between -1 and 1. In this case all motors are equivalent, there is only a control imbalance. This means that a symmetric outcome is still possible if the perturbation is compensated by the policy.
However, **A2.2** clips the action values between [-0.65,-0.75,...,-1.35] and [0.65,0.75,...,1.35]. Now, some actuators have different maximum torque values (e.g. the maximum torque for the knee at joint 1 in relation to the knee at joint 7 is 55%). This problem is not trivial and requires asymmetric learning, independently of perturbation corrections. In this example, the ability to perform symmetric actions in symmetric states is not guaranteed. Scenarios **A3** are a harder version of **A2** in terms of action modification.
### _Realistic scenarios_
A second set of scenarios listed in Table V was devised to test realistic robot and environment perturbations. As mentioned in the beginning of Section XI, AntBulletEnv-v0's reward function has a stay-alive bonus. Its purpose is to reward the agent for lifting its torso above the ground at all times. Yet, in harsh conditions, the agent may learn to exploit the stay-alive bonus by not moving, since any wrong movement can end the episode earlier. To avoid changing the reward function, a new condition was added to the environment for the second set of scenarios -- the episode ends if no progress was made towards the goal in the last 30 steps.
in a similar way, except that they modify the first and third feet.
An explanation is required for some of the design choices. First, the different mass configurations are an attempt to create a large amount of perturbation patterns across all symmetry planes and rotations. Second, the modified mass values (40, 20) are a trade-off between disrupting existing models (i.e. models trained without the modification have no control over the robot after the modifications) and allowing training (since joint actuators have limited torque). Third, the train and evaluation goals are separated into groups to allow the additional assessment of 2 types of locomotion (associated with the odd and even goals).
Scenarios **A5.1** and **A5.2** are a complement to the perturbations tested in **A4.1** and **A4.4**, respectively, but the agent is only able to train on half of the goals. The objective is for the algorithm to learn the perturbations with the training goals and generalize that knowledge to the remaining goals. In this experiment, allowing only two training goals, like in **A1.2**, would make the symmetry fitting unfeasible, since some symmetric states would never be experienced -- hence the 4 training goals.
Finally, scenarios **A6** follow a similar setup to **A4** but, instead of modifying the feet mass, the surface is tilted 5 deg. For **A6.1** to **A6.3**, the tilt is in the direction of goal 0 (i.e. gravity pulls the robot towards goal 0), and for **A6.4** to **A6.6**, the tilt is in the direction of goal 1. It is important to note that if the agent were able to know the tilt direction through observations, there would be no need for symmetry fitting. If, for instance, the roll and pitch observations (see Table III) were relative to the ground and not the tilted surface, the agent would know the tilt direction. Consider the following example.
Fig. 18 on the left, shows a scenario where the platform is tilted in direction of goal 6. In each episode, the ant would chase a different goal direction, but the green and blue feet would always be stepping on higher ground, when directly compared to the other two feet (note that the robot cannot rotate more than 25 deg or the episode is terminated). So, for instance, considering the xz-plane, the symmetric state (on the right) is unreachable, and the algorithm will not leverage symmetry for this symmetry plane. This is the case for all symmetry rotations and symmetry planes, except for the yz-plane because it is aligned with the tilt direction. In conclusion, this is not a scenario where a symmetry loss function would be very useful.
With this in mind, it is possible to make use of symmetric states and symmetry loss functions when the perturbation is not part of the state space (i.e. it is not observable). In this way, the optimization algorithm will be able to generalize, to a certain extent, the learned behavior to the symmetric states, even if in practice the initial symmetry transformations do not accurately represent reality.
To make sure the roll and pitch observations were not affected, instead of tilting the surface, we changed the gravity vector to achieve the equivalent effect.
## XII Evaluation Methodology
The performance in each scenario was evaluated for four algorithms: PPO (without any extension), PPO+ASL (PPO extended with Adaptive Symmetry Learning), PPO+PSL (PPO extended with Proximal Symmetry Loss), and PPO+MSL (PPO extended with Mirror Symmetry Loss). MSL and PSL employ the improved versions introduced in Section V and VI, respectively.
All experiments were conducted on a single machine with two Intel Xeon 6258R CPUs (28 cores each). In total, 908 distinct hand-tuned configurations were tested. Each configuration was used to train 12 models. Typically, between 48 and 96 models were learned in parallel, requiring between 12h and 24h. A new set of hand-tuned configurations was designed after each learning round.
Automatic tuning approaches were not used for two main reasons. First, the ASL algorithm was tweaked during the experiments to fix errors and test new methods and implementation techniques. This process requires constant human intervention, since auto-tuning methods do not detect problems in the algorithm. Second, due to time and computational constraints, hand-tuning was the only viable option in terms of efficiency.
Fig. 17: Sequential frames of the ant robot moving to the right, in pursuit of goal 6 (bottom) and 7 (top)
Fig. 18: Experienced tilted state (left) and its symmetric state in relation to the xz-plane (right)
### _Codebase_
The implementation of each extension was made in Python on top of a Stable Baselines 3 repository fork [41]. This codebase was chosen due to its high-quality implementation, considering how specific code optimizations can have a deep impact on PPO's performance [42]. The final code allows the user to select any of the aforementioned extensions per symmetry transformation, allowing for versatile combinations of symmetry extensions and parameters. In the scope of this work, however, symmetry extensions were evaluated independently to allow for a fair comparison. All the algorithm implementations tested in this work, as well as the code modifications to the ant environment, are available at our GitHub repository2.
Footnote 2: [https://github.com/m-abr/Adaptive-Symmetry-Learning](https://github.com/m-abr/Adaptive-Symmetry-Learning)
### _Hyperparameters_
The choice of hyperparameters for PPO is crucial to enable successful applications, and should be tuned case by case [43]. However, for consistency among scenarios and algorithms, a fixed set of hyperparameters was used, as listed in Table VI.
The chosen values were adapted from the RL Baselines3 Zoo training framework [44]. The number of environments was reduced from 16 to 1 to improve performance while training parallel models (by eliminating the thread synchronization overhead). Considering this modification and the increased complexity imposed by the proposed scenarios, in comparison with the original AntBulletEnv-v0 environment, some other parameter values were changed: batch steps, mini-batch size and the total time steps (which is defined per scenario).
Regarding the evaluated symmetry algorithms, their hyperparameters were tuned per scenario. Nonetheless, the values presented in Table VII were common to all cases. Symmetry transformations \(\mathbf{f}(s)\) and \(\mathbf{g}(a)\) represent the initial knowledge about the ant robot, assuming a perturbation-free system. The remaining hyperparameter vectors in Table VII were assigned a scalar as a slight abuse of notation to convey that the same scalar was assigned to all symmetry transformations, with the exception of \(\mathbf{k}_{\text{d}}[\text{rot}]\), which concerns only the subset of rotations.
The value symmetry loss function is the same in all evaluated symmetry algorithms. Since \(\mathbf{w}_{\text{V}}^{PPO}=\mathbf{w}_{\text{V}}^{ASL}=\mathbf{w}_{\text{V}}^{PSL}=\mathbf{w }_{\text{V}}^{MSL}\) typically produces good results, and \(\mathbf{w}_{\text{V}}^{PPO}=0.5\) (see Table VI), parameter \(\mathbf{w}_{\text{V}}\) was also set to 0.5 for all symmetry algorithms.
In the scenarios tested in this paper, neutral states affect reflections but not rotations. Every time a rotation transformation is applied, the unit goal vector from the state space (see Table III) is also rotated, so the symmetric state can never be the same as the initial state, which is a necessary condition for neutral states. For that reason \(k_{\text{d}}=0\) for all rotations. For reflections, \(\mathbf{k}_{\text{d}}\) was set independently for each scenario. The remaining parameters from Table VII were found to be consistent across all experiments.
### _Metrics_
All algorithms were evaluated and compared in terms of performance. Additional metrics were used to analyze symmetric properties. The reinforcement learning process alternates between training for 15 iterations (\(15\times 4096\) batch steps), and evaluating the policy for 16 episodes (using the deterministic neural network output), as depicted in Fig. 19. Some variables are logged every 5 iterations, depending on the chosen symmetry extension, and are an average over the last 4096 time steps. For all metrics, each data point is then averaged again for 12 independent instances. In total, four variables were generated:
1. Average episode return -- the undiscounted return provides a raw measure of performance that takes into consideration all the implicit objectives defined in the reward function;
2. Average value distance -- absolute difference between the value estimates of the explored state and the corresponding symmetric state. This metric assesses how accurately the current symmetry transformations describe the policy for explored states;
3. Average neutral state rejection ratio (NSRR) -- ratio of states that were rejected by the Adaptive Symmetry Learning algorithm (see Section IX-B). The NSRR measures to what extent the neutral state rejection module was actually applied to each symmetry plane: xz-plane, yz-plane, y=x plane, and y=-x plane.
4. Average symmetry transformation targets -- target values for the current action transformation approximation to a user-defined parameterized function. For linear
functions, the ant model has 12 action multipliers (e.g. \(m_{0\to 2}\)) and 16 actions biases (12 based on symmetric pairs, such as \(b_{0\to 2}\), and 4 based on reflexive relations such as \(b_{0\to 0}\) or \(b_{0}\)). This metric evaluates the repeatability of results while comparing independent optimizations, and the accuracy of the learned action transformations if the ground truth is known. A detailed explanation for linear approximations is given in the next paragraph.
The problem of learning action transformations (see their initial values in Fig. 15 and 20) can be simplified due to the existence of redundant symmetry relations. The ant model has 8 actions and 7 non-identity symmetry transformations, totaling 56 action transformations. Consequently, there are 112 variables if the operations can be defined as linear equations of the form \(a^{\prime}[y]=m_{x\to y}\cdot a[x]+b_{x\to y}\), or 56 variables if \(b\) is not considered. Analogously to the process described in Fig. 7, the ant's action transformations can be written as a function of 12 pairs of symmetric action dimensions {(0,2),(0,4),(0,6),(1,3),(1,5),(1,7),(2,4),(2,6),(3,5),(3,7),(4,6),(5,7)} and 4 singles {0,2,4,6}, yielding 12 action multipliers and 16 biases (28 variables for \(y=mx+b\) operations).
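As a sketch of this parameterization, the snippet below applies one linear action transformation of the form \(a^{\prime}[y]=m\cdot a[x]+b\) to the ant's 8-dimensional action vector; the permutation used in the example and the function name are illustrative only, while the pairs and singles are the ones listed above.

```python
import numpy as np

# Symmetric action-dimension pairs and reflexive singles for the ant model,
# as enumerated above; sharing one multiplier per pair (and one bias per pair
# or single) is what reduces the 112 free variables to 28.
PAIRS = [(0, 2), (0, 4), (0, 6), (1, 3), (1, 5), (1, 7),
         (2, 4), (2, 6), (3, 5), (3, 7), (4, 6), (5, 7)]
SINGLES = [0, 2, 4, 6]


def linear_action_transform(a, source_of, m, b):
    """Apply a'[y] = m[y] * a[source_of[y]] + b[y] for one symmetry
    transformation of the 8-dimensional action vector (sketch)."""
    a = np.asarray(a, dtype=np.float64)
    out = np.empty_like(a)
    for y in range(len(a)):
        out[y] = m[y] * a[source_of[y]] + b[y]
    return out


# Perturbation-free initialization: every multiplier is 1 and every bias is 0,
# so the transformation reduces to a pure permutation of the action dimensions.
a_sym = linear_action_transform(
    np.linspace(-1.0, 1.0, 8),
    source_of=[2, 3, 0, 1, 6, 7, 4, 5],   # illustrative permutation only
    m=np.ones(8), b=np.zeros(8))
```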
## XIII Evaluation Results
The hyperparameters used for each scenario are listed in Appendix B and the evaluated metrics are plotted in Appendix C. The results for each scenario are discussed below. For the purpose of this analysis, PPO without any extension is considered the baseline.
Since they lack a mandatory progression condition, the controlled scenarios (**A1.1** to **A3.2**) exhibit a similar behavior in the beginning of training (see Fig. 21). The local peak at approximately \(3\times 10^{5}\) time steps corresponds to the algorithm learning how to exploit the stay-alive bonus by not moving to avoid irrecoverable mistakes, as explained in Section XI-C. Then, the slight drop coincides with a more exploratory phase, where the agent starts taking risks.
The average symmetry transformation target error in Fig. 24 was only plotted for scenarios **A2** and **A3**. For **A2.1** and **A3.1** the ground truth is known. Let \(\mathbf{x}\) be the action multiplier vector shown in the second column of Table IV. Consider for instance the symmetric relation between two action dimensions \(a_{0}\) and \(a_{2}\). To achieve a symmetric outcome, the equality \(a_{0}x_{0}=a_{2}x_{2}\) must hold. Therefore, the transformation parameterized by \(m_{0\to 2}\) can be obtained such that
\[\begin{cases}a_{0}x_{0}\!=\!a_{2}x_{2}\\ a_{2}\!=\!a_{0}m_{0\to 2}\end{cases}\ \Rightarrow\ \ a_{0}x_{0}\!=\!a_{0}m_{0 \to 2}x_{2}\ \Rightarrow\ \ m_{0\to 2}\!=\!\frac{x_{0}}{x_{2}}. \tag{65}\]
So, in Fig. 24, the ground truth for \([m_{0\to 2}\),\(m_{0\to 4}\),...,\(m_{5\to 7}]\) is given by \([0.76,0.62,0.52,0.79,0.65,0.56,0.81,0.68,0.83,\)\(0.70,0.84,0.85]\) for **A2.1**, and \([0.43,0.27,0.20,0.56,0.38,\)\(0.29,0.64,0.47,0.69,0.53,0.76]\) for **A3.1**. Since the ground truth for **A2.2** and **A3.2** is unknown, for comparison, we reuse the values used for **A2.1** and **A3.1**, respectively.
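For reference, the **A2.1** ground truth quoted above follows directly from Eq. (65) and the **A2** multiplier vector; the short check below uses the pair list given earlier (variable names are illustrative).

```python
import numpy as np

# A2 action multipliers (Table IV) and the 12 symmetric pairs listed earlier;
# Eq. (65) gives m_{x->y} = x_vec[x] / x_vec[y].
x_vec = np.array([0.65, 0.75, 0.85, 0.95, 1.05, 1.15, 1.25, 1.35])
pairs = [(0, 2), (0, 4), (0, 6), (1, 3), (1, 5), (1, 7),
         (2, 4), (2, 6), (3, 5), (3, 7), (4, 6), (5, 7)]

ground_truth = np.array([x_vec[x] / x_vec[y] for x, y in pairs])
print(np.round(ground_truth, 2))
# approx. [0.76, 0.62, 0.52, 0.79, 0.65, 0.56, 0.81, 0.68, 0.83, 0.70, 0.84, 0.85]
```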
The remainder of this section discusses the specific results for each scenario. For the sake of simplicity, when mentioning ASL, it should be read as PPO+ASL, and the same is true for the other symmetry extensions.
#### A1.1
All symmetric algorithms performed similarly in this scenario, contrasting with the baseline, as depicted in Fig. 21 for 4 million time steps. ASL performed best without symmetry fitting, which indicates that linear modifications to the symmetry transformations yield no advantage in this context. In terms of average value distance (see Fig. 22), all the symmetry extensions follow a common pattern of allowing asymmetric exploration until around \(2.2\times 10^{6}\) time steps, before the model starts to converge to the provided symmetric transformations.
The average NSRR for ASL shown in Fig. 23 reveals the contrast between both gaits depicted in Fig. 17 in terms of neutral state incidence. When walking with one front leg, a symmetric policy is a good policy if and only if the initial position is perfectly aligned with the objective, and there is no perturbation in the policy or environment. These conditions are impractical, even more so due to the initial random component introduced by AntBulletEnv-v0. Nonetheless, walking with one front leg results in an increasing number of neutral states as the policy converges to local optima, resulting in an increasing NSRR for planes y=x and y=-x. In contrast, walking with two front legs rarely generates neutral states, hence the low NSRR for planes \(xz\) and \(yz\).
In **A1.2**, symmetry fitting proved counterproductive and was consequently disabled. Therefore, the only modules that contributed to this performance were the ratio with fixed standard deviation in (56), and neutral state rejection (Section IX-B).
The value distance plots show that the policies under all symmetry methods converged earlier to the symmetric transformations, which led to a steeper learning curve in all three cases. The NSRR of planes yz and y=-x was zero during training because the agents only trained on goals 0 and 1. Yet, the NSRR for plane y=x indicates that convergence to the symmetric transformations was much faster in comparison with **A1.1**, and for the xz-plane it was constantly higher. These results show that symmetry in gaits is correlated with performance in this scenario, and that generalizing to symmetric states can be more efficient when training only in a subset of non-symmetrically-redundant states. Note that in **A1.1**, the policy is also being generalized to symmetric states. The only difference is that states similar to those generalized also occur during training in **A1.1**.
From this point forward, symmetry fitting was enabled on all experiments. In **A2.1**, the introduced modifiers are proportional to the action, and in **A2.2** there is an additional non-linear component. In both cases, as supported by the value distance plots, the ASL policies are following the learned symmetry transformation closer, in comparison with the other methods. In **A2.1**, the learning curve is identical to **A1.1**, indicating that symmetry fitting was able to compensate the action modifiers without performance loss, while PSL and MSL experienced some decay by having the extra effort of learning an asymmetric policy. In **A2.2**, as expected, symmetry fitting could only handle linear perturbations, resulting in less advantage over the other approaches. This difference is also evident from the NSRR data, when comparing both scenarios with **A1.1**.
By observing Fig. 24, it is possible to conclude that the learned symmetry transformations are very similar in both **A2.1** and **A2.2**, and the respective error relative to the ground truth of **A2.1** converges to the vicinity of zero. Note that the initial symmetry transformation had an average error of 0.282 and, after training, that value dropped to 0.031 on **A2.1** and 0.049 on **A2.2**.
Due to the difficulty of this and the following scenarios, the training was prolonged to 5 million time steps. **A3.1** and **A3.2** are harder versions of the previous scenarios, with larger action modifiers. The learning curves reveal a disproportionate effect of this additional difficulty on the extended methods, with PSL and MSL being considerably more affected.
The value distance shows that ASL learned valuable symmetry transformations, though not as reliably as in **A2**, resulting in a more asymmetric policy. The average target error confirms this efficiency decline, even though the initial symmetry transformation had an average error of 0.503 and, after training, that value dropped to 0.128 on **A3.1** and 0.106 on **A3.2**. As expected, due to some transformation targets still having a considerable error, the gait was more affected for some goals, resulting in an NSRR imbalance between planes y=x and y=-x.
In **A4.1** to **A4.3**, additional weight is placed on the first and second feet, while in **A4.4** to **A4.6**, the extra weight is placed on the first and third feet. This realistic scenario takes advantage of symmetry fitting to adjust the linear imbalance in symmetry transformations, while the non-linear adjustments are handled by PPO. For this reason, in general, ASL learns faster and reaches higher episode returns in the end. The best results were attained for transformations modeled as linear operations of the form \(y=mx+b\).
MSL and PSL had similar performances, with the former learning faster in 3 scenarios, while the latter achieved a higher final performance in the last 2 scenarios. As for ASL, the transformation targets' standard deviation across 12 instances, averaged over all multipliers and biases, was [0.05,0.06,0.06,0.04,0.13,0.08] for each scenario, indicating that symmetry fitting was consistent in independent optimizations.
A noteworthy result was that simultaneously training on all goals (**A4.1** and **A4.4**) did not appear to harm the final performance. This implies that in some cases, the performance on odd goals actually improved, in comparison with **A4.2** and **A4.5**. The scenario with the lowest gain for ASL in terms of episode return was **A4.6**. This might be explained by the value distance plots, since it is the only scenario where ASL does not take the lead, hinting that symmetry fitting did not sufficiently improve results in this configuration. Finally, the NSRR plots show an imbalance towards y=x except in **A4.3** and **A4.6**, since these do not contain any odd goals (where one front leg is required).
**A5.1** and **A5.2** are equivalent to **A4.1** and **A4.4**, respectively, except that the policies could only train on the first 4 goals, having to generalize that knowledge to the remaining ones. In theory, the first 4 goals have enough information to learn the linear part of these modifications. However, when training with 4 goals, the policy explores only 4 different gaits, yielding a more biased linear transformation estimate, despite having a lower average standard deviation, at [0.03,0.03] per scenario. Regardless of the overall loss of performance in comparison with **A4**, ASL generates considerably better policies than MSL or PSL.
In **A6** there are no feet mass modifications, but the floor surface is tilted 5 degrees in a specific direction. The results were underwhelming but also thought-provoking. For instance, the gait used to go down a small slope is different from the one used to go up, but we expected to find similarities that could be used to linearly model part of that relationship. In fact, the symmetry transformation targets are almost as stable as in **A4**, with an average standard deviation of
[0.06,0.08,0.07,0.06,0.08,0.06]. Moreover, these transformations were not discarded by the policy, as demonstrated by the lower value distance in comparison with other methods. However, in general, the results were mostly on par with MSL and PSL, both in terms of learning speed and final accuracy.
Two potential conclusions can be derived from this outcome. First, the nonlinear part of the symmetric relation can monopolize the optimization, leaving most of the adaptation work to PPO. Second, the benefit of adapting the symmetry transformations in **A6** might be canceled out by the additional variance introduced by modifying such transformations during optimization.
## XIV Conclusion
There are multiple approaches to model minimization that are concerned with reducing redundant symmetries. Loss functions are the most versatile way of achieving this objective, due to their ability to represent complex concepts but also because they can directly control the optimization direction by producing a gradient. For actor-critic methods such as PPO, their loss functions can be extended to introduce symmetry concepts that will influence the optimization of the policy and value function.
This work presents two existing symmetry loss functions -- Mirror Symmetry Loss (MSL) and Proximal Symmetry Loss (PSL) -- extending them with value losses and, for MSL, the capacity to handle non-involutory transformations, such as most rotations. A novel Adaptive Symmetry Learning (ASL) method is presented, where the main focus is to handle incomplete or inexact symmetry descriptions by learning adaptations. There are multiple sources of potential imperfections, including perturbations in the robot, environment, reward function, or agent (control process).
ASL is composed of a symmetry fitting component and a modular loss function with useful gates to exclude neutral states and disadvantageous updates. While the policy is learning the best action for each state, ASL's loss function is enforcing a common symmetric linear relation across all states and, at the same time, adapting that same relation to match what the policy is learning. Adapting non-linear relations is also an option, although it can be counterproductive due to the additional learning time. In the end, it is a trade-off between the convergence efficiency of symmetry transformations and the policy itself.
The case study presented in this work explored a four-legged ant model adapted from PyBullet's AntBulletEnv-v0. This is an interesting model to learn multidirectional locomotion tasks as it admits 7 non-identity symmetry transformations -- 4 reflections and 3 rotations. In all the proposed scenarios, vanilla PPO was not enough to learn solid policies. By contrast, PPO+ASL was able to equal or outperform the alternative symmetry-enhanced methods in most situations.
The main strengths that differentiate ASL are its performance in recovering from large linearly modelable perturbations, and generalizing knowledge to hidden symmetric states. The former characteristic was observed in a controlled setting (scenario **A3**) and in a realistic context (**A4**). The ability to successfully cope with hidden symmetric states was verified in configurations with symmetry fitting (**A5**) and without it (**A1.2**).
ASL was comparable to MSL and PSL in perfectly symmetric and fully observable conditions (**A1.1**), and slightly imperfect models (**A2**). Finally, in an inclined surface (**A6**), ASL exhibited a mixed performance, producing consistent adaptations but not translating that into better episode returns. In conclusion, the advantage of using ASL in relation to other symmetry loss functions depends on the context. However, in general, its efficiency is at least as good as the other methods, being in some circumstances considerably better.
Future research directions encompass automating hyperparameter optimization through meta-learning to mitigate setup complexity, enabling faster results with limited computational resources. Additionally, exploring symmetry perturbations that deteriorate over time as a result of wear and tear on robot components constitutes another promising avenue of investigation. While ASL can accommodate dynamic symmetry relationships without getting stuck in local minima, alternative approaches to PPO should be explored for handling changing environments.
|
2310.17519 | FLARE: Fast Learning of Animatable and Relightable Mesh Avatars | Our goal is to efficiently learn personalized animatable 3D head avatars from
videos that are geometrically accurate, realistic, relightable, and compatible
with current rendering systems. While 3D meshes enable efficient processing and
are highly portable, they lack realism in terms of shape and appearance. Neural
representations, on the other hand, are realistic but lack compatibility and
are slow to train and render. Our key insight is that it is possible to
efficiently learn high-fidelity 3D mesh representations via differentiable
rendering by exploiting highly-optimized methods from traditional computer
graphics and approximating some of the components with neural networks. To that
end, we introduce FLARE, a technique that enables the creation of animatable
and relightable mesh avatars from a single monocular video. First, we learn a
canonical geometry using a mesh representation, enabling efficient
differentiable rasterization and straightforward animation via learned
blendshapes and linear blend skinning weights. Second, we follow
physically-based rendering and factor observed colors into intrinsic albedo,
roughness, and a neural representation of the illumination, allowing the
learned avatars to be relit in novel scenes. Since our input videos are
captured on a single device with a narrow field of view, modeling the
surrounding environment light is non-trivial. Based on the split-sum
approximation for modeling specular reflections, we address this by
approximating the pre-filtered environment map with a multi-layer perceptron
(MLP) modulated by the surface roughness, eliminating the need to explicitly
model the light. We demonstrate that our mesh-based avatar formulation,
combined with learned deformation, material, and lighting MLPs, produces
avatars with high-quality geometry and appearance, while also being efficient
to train and render compared to existing approaches. | Shrisha Bharadwaj, Yufeng Zheng, Otmar Hilliges, Michael J. Black, Victoria Fernandez-Abrevaya | 2023-10-26T16:13:00Z | http://arxiv.org/abs/2310.17519v2 | # FLARE: Fast Learning of Animatable and Relightable Mesh Avatars
###### Abstract
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems. While 3D meshes enable efficient processing and are highly portable, they lack realism in terms of shape and appearance. Neural representations, on the other hand, are realistic but lack compatibility and are slow to train and render. Our key insight is that it is possible to efficiently learn high-fidelity 3D mesh representations via differentiable rendering by exploiting highly-optimized methods from traditional computer graphics and approximating some of the components with neural networks. To that end, we introduce FLARE, a technique that enables the creation of animatable and relightable mesh avatars from a single monocular video. First, we learn a canonical geometry using a mesh representation, enabling efficient differentiable rasterization and straightforward animation via learned blendshapes and linear blend skinning weights. Second, we follow physically-based rendering and factor observed colors into intrinsic albedo, roughness, and a neural representation of the illumination, allowing the learned avatars to be relit in novel scenes. Since our input videos are captured on a single device with a narrow field of view, modeling the surrounding environment light is non-trivial. Based on the split-sum approximation for modeling specular reflections, we address this by approximating the pre-filtered environment map with a multi-layer perceptron (MLP) modulated by the surface roughness, eliminating the need to explicitly model the light. We demonstrate that our mesh-based avatar formulation, combined with learned deformation, material, and lighting MLPs, produces avatars with high-quality geometry and appearance, while also being efficient to train and render compared to existing approaches.
## 1. Introduction
Methods based on neural implicit surfaces have been used to obtain impressive 3D reconstructions, while NeRF-based [11] approaches [14, 15] have shown an outstanding ability to synthesize novel views of the subject. Further, these methods are trained such that the learned avatars can be controlled with novel poses and expressions, making them appropriate for entertainment and telecommunication.
There are several challenges that remain for existing head avatars to be widely applicable in industry. First, the majority of methods are slow to train and/or to render, taking hours [14] or days [15, 16] of processing to obtain a single, scene-dependent avatar. This limits the scope of applications and hinders the creation of immersive experiences. Fast approaches have recently been proposed [13, 14, 15], but they suffer from low-quality geometry and often do not generalize well to novel views. Second, to achieve high-quality reconstructions, current methods use shape representations that are not compatible with standard graphics pipelines. Ideally, a mesh representation should enable easy asset extraction and integration. However, recent neural methods that are built on triangulated meshes [14, 15] do not achieve the same geometric quality as methods based on more flexible representations. Finally, most neural approaches generate avatars that can only be rendered in the same environment in which they were captured since they do not disentangle the light from intrinsic material properties. What is still missing is an efficient method to extract head avatars that have high-fidelity geometry and can be animated and relit.
In this work we present a new method, FLARE (Fast Learning of Animatable and RElightable mesh avatars), for building 3D facial avatars from monocular videos that addresses all of these challenges, as shown in Table 1 and Figure 1. We use a mesh representation to allow easy integration as well as fast computation during training and at inference time. We represent the canonical head geometry as a triangular mesh with optimizable vertex locations and learn blendshapes as well as skinning-weight fields to deform the canonical mesh given FLAME [13] expression and pose parameters. To disentangle the intrinsic material properties and extrinsic light conditions, we leverage physically-based rendering [13, 14] where materials and lighting are represented by multi-layer perceptrons (MLPs). Specifically, we use the Disney material model [17] and represent albedo, roughness, and specular intensity as hash-encoded spatial MLPs [11]. To render color efficiently we adopt the split-sum approximation proposed in [10]. However, explicitly computing the environment light is challenging with monocular head videos given their narrow field of view. To address this, we approximate the pre-filtered environment map in the split-sum approximation with a neural network, together with an Integrated Directional Encoding (IDE) [20] that accounts for different roughness levels. Our networks are trained using a two-stage approach, where the first stage is focused on geometry and then the second stage refines the color by leveraging the hash-grid encoding [11]. This allows FLARE to control the pace at which geometry and color are learned relative to each other, achieving
| Method | Converges within 15 mins | High-fidelity geometry | Compatible with graphics pipelines | Relightable |
| --- | --- | --- | --- | --- |
| IMavatar | ✗ | ✓ | ✗ | ✗ |
| NHA | ✗ | ✗ | ✓ | ✗ |
| PointAvatar | ✗ | ✓ | ✗ | ✓ |
| INSTA | ✓ | ✗ | ✗ | ✗ |
| FLARE | ✓ | ✓ | ✓ | ✓ |

Table 1. Compared to other methods, FLARE converges rapidly, reconstructs high-fidelity geometry, is compatible with graphics pipelines since it employs a mesh representation, and produces head avatars that can be relighted.
Figure 2. **Method pipeline.** Top: Given an input video, we optimize for vertex displacements to obtain a canonical geometry. A _deformation_ network \(\mathcal{D}\) (green) then predicts FLAME [13] expression blendshapes \(\mathcal{E}\), pose correctives \(\mathcal{P}\) and blend skinning weights \(\mathcal{W}\) given canonical vertices, which are used to deform the mesh into the corresponding expression and pose. The deformed mesh is rasterized following a deferred shading pipeline to obtain per-pixel canonical coordinates \(\mathbf{x_{c}}\) and deformed normals \(\mathbf{n}\). Bottom: \(\mathbf{x_{c}}\) is used to query the _material_ network \(\mathcal{M}\) (purple) to obtain the albedo \(\rho\), roughness \(r\), and specular intensity \(k\). Next, the _lighting_ network \(\mathcal{L}\) (pink) obtains an estimate of the diffuse shading and specular reflection from the normal and reflection vectors, while taking roughness into account. We use physically-based rendering to compute the final color, which is compared with the ground-truth frame during training.
detailed results in both areas. While maintaining high accuracy, our method is carefully designed to improve training and rendering efficiency: (1) The canonical material estimation MLPs are fueled by hash-grid encoding [14], which effectively represents high-resolution mappings with shallow MLPs, boosting query speed significantly; (2) The neural split-sum approximation reduces the evaluation of expensive integrals into one look-up in the pre-integrated texture [15, 14], as well as one forward pass of an MLP; (3) Our morphable mesh representation enables efficient differentiable rasterization with existing tools [11], in contrast to implicit representations that require hundreds of queries per pixel. Thanks to the above components, our method reconstructs detailed relightable avatars in around 15 minutes. Our experiments show that our proposed approach achieves high-fidelity geometry as well as realistic renderings, which are on par with, or superior to, existing approaches while being much faster to train as demonstrated in Figure 3. Code is available for research purposes at [https://flare.is.tue.mpg.de](https://flare.is.tue.mpg.de).
## 2. Related Work
### 3D head avatars from videos
Creating animatable 3D head avatars from videos is a popular research topic because it replaces the need for complex capture equipment [1, 2, 3, 4] with more easily accessible commodity sensors. Classic approaches [11, 12] employ statistical models [13] to recover the 3D shape and appearance, but only focus on the facial area and produce relatively coarse reconstructions. NerFACE [15] was the first to use dynamic neural radiance fields (NeRF) [16] to represent head avatars. IMavatar [12] recovers accurate geometry using implicit surfaces by jointly learning canonical head geometry and expression deformations. However, methods based on implicit representations can be inefficient to train and render. PointAvatar [12] uses a similar deformation model but employs a point cloud representation, enabling faster rasterization and better image quality. Recently, several methods [10, 13] employ InstantNGP [14] to speed up radiance field queries and can reconstruct avatars within 5 to 20 minutes. To the best of our knowledge, none of these fast avatar reconstruction methods produce high-quality surface normals. Neural Head Avatar (NHA) [11] reconstructs mesh-based avatars with complete head and hair geometry. However, the reconstructed geometry is relatively coarse, with many details represented in the texture space. None of these recent neural methods factorize light and albedo, with the exception of PointAvatar, which performs a rudimentary factorization using a diffuse shading model. In contrast, our method reconstructs mesh-based avatars with high-quality geometry within 15 minutes, and factorizes lighting into albedo, roughness and extrinsic illumination. This enables our avatars to be readily rendered in new scenes.
### Relightable 3D reconstruction from multi-view images
The ability to learn relightable 3D assets from 2D observations has extensive applications in AR and VR content creation. Several previous methods [1, 15, 16, 17, 18] leverage neural implicit representations such as NeRF [16] or neural SDF [19, 20], which benefit from unconstrained topology but are inefficient to render. Recently, [14] convert neural SDFs to meshes with differentiable marching tetrahedra [20] and employ physically-based rendering to reconstruct high-quality relightable 3D assets in less than an hour. Neural-PIL [16] leverages a similar idea to ours and approximates parts of the split-sum formula [15] with neural networks. However, the method requires pre-training on a large dataset, which hinders generalization, and is only tested with multi-view images that have full coverage of the scene.
To obtain 3D head avatars that are both animatable and relightable, recent methods [10, 17] leverage the deformable geometry of pretrained 3DMMs [15, 16], and predict albedo and lighting from a single image. SIRA [18] improves the coarse 3DMM geometry by learning a deformable SDF but requires a large number of 3D scans for training. In contrast, our method reconstructs relightable 3D head avatars from a single monocular video, achieving high-quality geometry without requiring expensive 3D scans for prior training.
## 3. Method
Given a fixed-viewpoint video of a person with frames \(\{I_{1},\ldots,I_{N}\}\), foreground masks \(\{M_{1},\ldots,M_{N}\}\), and pre-computed FLAME [15] parameters for shape \(\beta\), expressions \(\{\psi_{1},\ldots,\psi_{N}\}\), and poses \(\{\theta_{1},\ldots,\theta_{N}\}\), we jointly train (1) the deforming head geometry, parameterized by a canonical mesh with optimizable vertex locations and expression deformation fields (Sec. 3.1); (2) the intrinsic surface reflectance properties, including albedo, roughness, and specular intensity, represented by an MLP in canonical space (Sec. 3.2), and (3) a lighting MLP that approximates the pre-filtered environment map of the scene. To train these we follow a physically-based rendering approach and rasterize the mesh into images, which are compared with the ground-truth frames (Sec. 4). Figure 2 gives an overview of the method.
Figure 3. Training time vs image quality (a) and geometric quality (b) for SOTA methods. Our method is nearly as quick as INSTA while performing on-par or better than competitors. Lower is better for (a) and higher is better for (b).
### Geometry
To achieve high train- and test-time efficiency, and for compatibility with standard graphics pipelines, we use triangle meshes as the geometric representation. As shown in Figure 2, we learn a single canonical mesh that best explains all views, along with deformation fields that transform the canonical mesh given FLAME pose and expression parameters. We describe each of these below.
#### 3.1.1. Canonical mesh
Given a pre-defined FLAME template mesh \(\mathcal{V}=(V,\mathcal{F})\), with the set of vertices \(V=\{\mathbf{v}_{1},\dots,\mathbf{v}_{M}|\mathbf{v}_{i}\in\mathbb{R}^{3}\}\), and the set of triangular faces \(\mathcal{F}\), we obtain a personalized shape by optimizing \(\{\Delta\mathbf{v}_{i}|i=1\dots M,\Delta\mathbf{v}_{i}\in\mathbb{R}^{3}\}\), such that the final canonical mesh vertices are \(\{\mathbf{v}_{i}+\Delta\mathbf{v}_{i}|i=1\dots M\}\). To facilitate learning, we employ a coarse-to-fine approach (Worchel et al., 2022; Zheng et al., 2023), where we upsample the mesh to \(\sim 11k\) vertices during training using the algorithm in (Botsch and Kobbelt, 2004).
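A minimal PyTorch sketch of this parameterization, with the FLAME template assumed to be loaded elsewhere and all names illustrative:

```python
import torch
import torch.nn as nn


class CanonicalMesh(nn.Module):
    """Template vertices plus optimizable per-vertex displacements (sketch)."""

    def __init__(self, template_vertices, faces):
        super().__init__()
        self.register_buffer("template", template_vertices)        # (M, 3)
        self.faces = faces                                          # (F, 3), long
        self.delta_v = nn.Parameter(torch.zeros_like(template_vertices))

    def vertices(self):
        # Canonical vertices {v_i + delta_v_i}, optimized jointly with the MLPs.
        return self.template + self.delta_v
```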
#### 3.1.2. Deformation field
We deform the canonical geometry using the FLAME parameters computed during the pre-processing step. Specifically, given a canonical vertex \(\mathbf{v}\in\mathbb{R}^{3}\), we deform it as follows:
\[\begin{split} FLAME(\mathbf{v},\mathcal{P},\mathcal{E},\mathcal{W },\theta,\psi)=\\ &\hskip 28.452756ptLBS(\mathbf{v}+B_{P}(\theta;\mathcal{P})+B_{E}( \psi;\mathcal{E}),J(\beta),\theta,\mathcal{W}),\end{split} \tag{1}\]
where \(J(\beta)\) is the joint regressor, LBS is the standard linear blend-skinning function with blend-skinning weights \(\mathcal{W}\), \(\theta\) and \(\psi\) are the FLAME pose and expression parameters, and \(B_{P}(\cdot)\) and \(B_{E}(\cdot)\) compute the pose and expression offsets using the blendshape components \(\mathcal{P}\) and \(\mathcal{E}\). Similar to IMavatar (Zheng et al., 2022), we train a deformation network \(\mathcal{D}\) parameterized by an MLP that, given a canonical vertex location \(\mathbf{v}\), returns the expression blendshapes \(\mathcal{E}\in\mathbb{R}^{n_{e}\times 3}\), the pose correctives \(\mathcal{P}\in\mathbb{R}^{n_{j}\times 9\times 3}\), and the linear blend skinning weights \(\mathcal{W}\in\mathbb{R}^{n_{j}}\) of the vertex (with \(n_{e}\) and \(n_{j}\) the number of expression parameters and bone transformations, respectively),
\[\mathcal{D}(\mathbf{v}):\mathbb{R}^{3}\rightarrow\mathcal{E},\mathcal{P}, \mathcal{W}. \tag{2}\]
Note that, while IMavatar requires a costly root-finding process to search for canonical correspondences given deformed ray samples, our mesh formulation avoids this by directly deforming the canonical mesh and rasterizing it.
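A sketch of such a deformation network in PyTorch is given below; the output dimensionalities (including the pose-corrective shape), the default values of \(n_{e}\) and \(n_{j}\), and the activation choices are assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn


class DeformationNet(nn.Module):
    """MLP mapping a canonical vertex to expression blendshapes E, pose
    correctives P and skinning weights W (sketch; sizes are assumptions)."""

    def __init__(self, n_e=50, n_j=5, hidden=128):
        super().__init__()
        self.n_e, self.n_j = n_e, n_j
        out_dim = n_e * 3 + n_j * 9 * 3 + n_j
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, out_dim))

    def forward(self, v):                     # v: (M, 3) canonical vertices
        out = self.mlp(v)
        E = out[:, :self.n_e * 3].reshape(-1, self.n_e, 3)
        P = out[:, self.n_e * 3:-self.n_j].reshape(-1, self.n_j * 9, 3)
        W = torch.softmax(out[:, -self.n_j:], dim=-1)   # blend-skinning weights
        return E, P, W
```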
### Reflectance
To make our avatars relightable, we factorize the observed colors into learned albedo, roughness, and specular intensity, as well as a neural representation of the environment illumination. We adopt the Disney shading model (Burley, 2012) and modify it to better suit our input data. We elaborate on the appearance model in the following.
#### 3.2.1. Physically-based rendering
According to the classic rendering equation (Kajiya, 1986), the radiance \(L_{o}(\mathbf{x},\omega_{o})\in\mathbb{R}^{3}\) leaving a surface point \(\mathbf{x}\in\mathbb{R}^{3}\) with normal vector \(\mathbf{n}\) in the direction \(\omega_{o}\) is modeled as
\[L_{o}(\mathbf{x},\omega_{o})=\int_{\Omega}f_{r}(\mathbf{x},\omega_{i},\omega_{ o})L_{i}(\omega_{i})(\mathbf{n}\cdot\omega_{i})d\omega_{i}, \tag{3}\]
where the integral is over the hemisphere \(\Omega=\{\omega_{i}|(\omega_{i}\cdot\mathbf{n})>0\}\), \(f_{r}(\mathbf{x},\omega_{i},\omega_{o})\) is the bi-directional reflectance distribution function (BRDF), and \(L_{i}\) is the incoming light intensity from direction \(\omega_{i}\).
Following the dichromatic reflection model (Shafer, 1985), the BRDF is decomposed into a diffuse term \(f_{d}\) and a specular term \(f_{s}\), and the total reflectance is calculated as \(f_{r}(\mathbf{x},\omega_{i},\omega_{o})=f_{d}(\mathbf{x},\omega_{i},\omega_{o})+k(\mathbf{x})f_{s}(\mathbf{x},\omega_{i},\omega_{o})\), where \(k(\mathbf{x})\) is a spatially-varying specular intensity factor that weighs the contribution of the specular BRDF, similar to (Riviere et al., 2020). The rendering equation then becomes:
\[\begin{split} L_{o}(\mathbf{x},\omega_{o})=&\underbrace {\int_{\Omega}f_{d}(\mathbf{x},\omega_{i},\omega_{o})L_{i}(\omega_{i})( \mathbf{n}\cdot\omega_{i})d\omega_{i}}_{L_{o}^{diff}}+\\ k(\mathbf{x})&\underbrace{\int_{\Omega}f_{s}( \mathbf{x},\omega_{i},\omega_{o})L_{i}(\omega_{i})(\mathbf{n}\cdot\omega_{i}) d\omega_{i}}_{L_{o}^{spec}}.\end{split} \tag{4}\]
We use a simple Lambertian model for the diffuse term:
\[L_{o}^{diff}=\frac{\rho(\mathbf{x})}{\pi}\int_{\Omega}L_{i}(\omega_{i})( \mathbf{n}\cdot\omega_{i})d\omega_{i}, \tag{5}\]
where \(\rho\) is the spatially-varying RGB albedo. For the specular term we use the Cook-Torrance microfacet model (Cook and Torrance, 1982):
\[L_{o}^{spec}=\] \[\int_{\Omega}\frac{D(\mathbf{n},\omega_{o},\omega_{i},r)G(\omega_{ o},\omega_{i},r)F(\omega_{o},\omega_{i},F_{0})}{4(\omega_{o}\cdot\mathbf{n})( \omega_{i}\cdot\mathbf{n})}L_{i}(\omega_{i})(\mathbf{n}\cdot\omega_{i})d \omega_{i}. \tag{6}\]
Here, the surface roughness \(r\) modulates the microfacet normal distribution function \(D\), and the geometry attenuation function \(G\) that accounts for self-shadowing. \(F\) denotes the Fresnel equation that describes the proportion of light reflected at different surface angles. We follow the Disney material model for the specific choice of \(D\), \(G\) and \(F\) functions, see (Burley, 2012; Karis, 2013).
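For concreteness, the standard forms of these microfacet terms, as popularized by (Karis, 2013), can be written as in the sketch below; this is a generic implementation using the \(\alpha=r^{2}\) convention, not necessarily the exact variants used in our pipeline.

```python
import torch


def d_ggx(n_dot_h, roughness):
    """GGX / Trowbridge-Reitz normal distribution function."""
    a2 = (roughness ** 2) ** 2                       # alpha = roughness^2 (assumed convention)
    denom = n_dot_h ** 2 * (a2 - 1.0) + 1.0
    return a2 / (torch.pi * denom ** 2 + 1e-8)


def g_smith(n_dot_v, n_dot_l, roughness):
    """Smith geometry term with the Schlick-GGX approximation."""
    k = (roughness + 1.0) ** 2 / 8.0
    g1 = lambda x: x / (x * (1.0 - k) + k + 1e-8)
    return g1(n_dot_v) * g1(n_dot_l)


def f_schlick(v_dot_h, f0=0.04):
    """Schlick approximation of the Fresnel term."""
    return f0 + (1.0 - f0) * (1.0 - v_dot_h).clamp(min=0.0) ** 5
```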
#### 3.2.2. Estimating intrinsic materials
We optimize the albedo and roughness of our head model, as well as specular intensity values. These properties are canonical properties of the surface and remain constant during facial deformation. Therefore, we employ an MLP, \(\mathcal{M}\), that receives canonical surface points \(\mathbf{x}_{\mathbf{c}}\) as input and predicts albedo \(\rho\), roughness \(r\) and specular intensity \(k\):
\[\mathcal{M}(\mathbf{x}_{\mathbf{c}}):\mathbb{R}^{3}\rightarrow\rho,r,k. \tag{7}\]
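A minimal sketch of such a material network; the hidden size, activations and output ranges are assumptions, and the positional or hash-grid encoding of \(\mathbf{x_{c}}\) (Section 4.2.1) is omitted for brevity.

```python
import torch
import torch.nn as nn


class MaterialNet(nn.Module):
    """Canonical-space MLP predicting albedo, roughness and specular
    intensity (sketch; input encoding omitted)."""

    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5))                # 3 albedo + 1 roughness + 1 specular

    def forward(self, x_c):
        out = torch.sigmoid(self.mlp(x_c))       # keep all outputs in [0, 1]
        return out[..., :3], out[..., 3:4], out[..., 4:5]
```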
#### 3.2.3. Split-sum approximation
The split-sum approximation (Karis, 2013) was proposed to efficiently evaluate the specular reflectance by splitting it into two integrals that can be pre-computed:
\[L_{o}^{spec}\approx\] \[\int_{\Omega}L_{i}(\omega_{i})D(\mathbf{n},\omega_{i},\omega_{o},r)( \omega_{i}\cdot\mathbf{n})d\omega_{i}\int_{\Omega}f(\omega_{i},\omega_{o})( \omega_{i}\cdot\mathbf{n})d\omega_{i}. \tag{8}\]
The first term corresponds to a _pre-filtered environment map_, where the environment light \(L_{i}\) is convolved with the normal distribution function \(D\). This term is pre-computed for a set of roughness values and stored as a series of 2D look-up textures (LUT), where each mipmap level is selected based on roughness, and each texture is indexed by the reflection vector \(\omega_{r}=2(\omega_{o}\cdot\mathbf{n})\mathbf{n}-\omega_{o}\). The second integral, known as the _BRDF integration map_, contains the rest of the terms, and it is equivalent to integrating Equation 6 with a white environment map (\(L_{i}(\omega_{i})=1\)) (Karis, 2013). This term depends on
the roughness \(r\) and the cosine angle (\(\omega_{o}\cdot\mathbf{n}\)), and it is also stored as an LUT, which will be referred to as \(FG-LUT\).
#### 3.2.4. Neural split-sum approximation
The split-sum approximation can help to efficiently learn a rich model of illumination, and has been used to disentangle the light and materials from multi-view images (Munkberg et al., 2022). However, our setting considers as input a fixed viewpoint video, which is a more challenging scenario for light disentanglement. We found through experiments that optimizing environment maps directly often leads to sub-optimal results (See Figure 10). To address this, we approximate the pre-filtered environment map in Eq. 8 with a neural network:
\[\mathcal{L}(\omega_{r},r)\approx\int_{\Omega}L_{i}(\omega_{i})D(\mathbf{n},\omega_{i},\omega_{o},r)(\omega_{i}\cdot\mathbf{n})d\omega_{i}. \tag{9}\]
To design this neural network, we observe that the roughness parameter \(r\) influences the output via the normal distribution function \(D\), i.e., a larger roughness corresponds to a wider distribution and leads to blurrier filtered light maps. In 2D LUTs, the pre-filtered environment maps for different roughness values are represented as mipmaps. A key challenge for the neural split-sum approximation is to model this behavior for different roughness levels. To address this, we propose to adapt the Integrated Directional Encoding (IDE) (Verbin et al., 2022) to represent different mip levels of neural fields. The IDE encodes the input reflection vector \(\omega_{r}\) through the expected value of a set of spherical harmonics under a von Mises-Fisher (vMF) distribution centered at \(\omega_{r}\), where the concentration parameter \(\kappa\) is defined as the inverse roughness \(1/r\):
\[IDE(\omega_{r},r)=\mathbb{E}_{\omega\sim\mathrm{vMF}(\omega_{r},1/r)}\{Y_{l}^{m}|(l,m)\in\mathcal{M}_{L}\}, \tag{10}\]
with \(\mathcal{M}_{L}=\{(l,m):l=1\dots 2^{L},m=0\dots l\}\), \(Y_{l}^{m}\) the spherical harmonics basis functions, and \(L=4\). In practice, this encoding limits the representational power of the neural network at larger roughness values, essentially mimicking the behavior of mipmap levels in a continuous manner. Note that the incident illumination \(L_{i}\), now represented as part of the pre-filtered light MLP \(\mathcal{L}\), also determines the diffuse shading. We observe that setting the roughness to its maximum value \(r=1\) in the GGX distribution for \(D\) (employed by the Disney material model) reduces \(D\) to \(1/\pi\), so the pre-filtered environment map term becomes the diffuse shading of Equation 5. Hence, we can use \(\mathcal{L}\) to represent both the diffuse shading and the specular pre-filtered environment map:
\[L_{o}^{diff} =\frac{\rho(\mathbf{x})}{\pi}\cdot\mathcal{L}(IDE(\mathbf{n},1))\] \[L_{o}^{spec} =\mathcal{L}(IDE(\omega_{r},r))\cdot FG-LUT(r,\omega_{o}\cdot \mathbf{n}). \tag{11}\]
We thus replace the explicit integration of a scene environment map with a single query over the pre-filtered light MLP, while still grounding the formulation on a physics-based model via the \(FG-LUT\) term. At test time, we relight the avatar by simply replacing \(\mathcal{L}\) with a pre-filtered environment map.
#### 3.2.5. Color prediction
The final outgoing radiance is calculated as
\[L_{o}(\mathbf{x},\omega_{o})=\frac{\rho(\mathbf{x})}{\pi}\cdot \mathcal{L}(IDE(\mathbf{n},1))+\] \[k(\mathbf{x})\mathcal{L}(IDE(\omega_{r},r))\cdot FG-LUT(r( \mathbf{x}),\omega_{o}\cdot\mathbf{n}). \tag{13}\]
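The following sketch combines the two terms of Equation 13 per pixel; `light_mlp` stands in for \(\mathcal{L}(IDE(\cdot,\cdot))\) and `fg_lut` for the pre-integrated BRDF look-up, and both, like the function name itself, are placeholders.

```python
import torch


def shade(albedo, roughness, spec_intensity, normals, view_dirs, light_mlp, fg_lut):
    """Diffuse plus specular shading following Eq. (13) (sketch)."""
    # Reflection vector: w_r = 2 (w_o . n) n - w_o
    n_dot_v = (normals * view_dirs).sum(-1, keepdim=True).clamp(min=1e-4)
    refl = 2.0 * n_dot_v * normals - view_dirs

    # Diffuse: albedo / pi times the pre-filtered light queried at roughness 1.
    diffuse = albedo / torch.pi * light_mlp(normals, torch.ones_like(roughness))

    # Specular: pre-filtered light at the reflection direction, scaled by the
    # pre-integrated BRDF term and the learned specular intensity.
    specular = spec_intensity * light_mlp(refl, roughness) * fg_lut(roughness, n_dot_v)
    return diffuse + specular
```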
## 4. Training
### Loss Functions
In this section, we discuss the loss functions employed by FLARE, grouped by image-related losses, geometry-related losses, deformation-related losses, and material regularizers.
#### 4.1.1. Image
Given a ground-truth frame \(I_{j}\) and a predicted image \(\tilde{I}_{j}\), we compute (1) the \(L2\) loss in log space between the masked ground-truth and the predicted image following (Munkberg et al., 2022):
\[\mathcal{L}_{RGB}=||log(I_{j})-log(\tilde{I}_{j})||_{2}^{2}, \tag{14}\]
(2) an L2 loss between ground-truth and predicted binary masks:
\[\mathcal{L}_{mask}=||M_{j}-\tilde{M}_{j}||_{2}^{2}, \tag{15}\]
and (3) a perceptual loss (Johnson et al., 2016) given as:
\[\mathcal{L}_{agg}=||F_{agg}(I_{j})-F_{agg}(\tilde{I}_{j})||_{2}^{2}, \tag{16}\]
where \(F_{agg}\) represents the extracted features from the first four layers of a pre-trained VGG (Simonyan and Zisserman, 2015) network.
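A compact sketch of these three image terms; `vgg_features` is assumed to return the feature maps of the first four VGG layers, and the small `eps` added inside the logarithm is a numerical-stability assumption not present in Equation 14.

```python
import torch
import torch.nn.functional as F


def image_losses(pred_rgb, gt_rgb, pred_mask, gt_mask, vgg_features, eps=1e-4):
    """Log-space RGB loss (Eq. 14), mask loss (Eq. 15) and perceptual loss (Eq. 16)."""
    l_rgb = F.mse_loss(torch.log(pred_rgb + eps), torch.log(gt_rgb + eps))
    l_mask = F.mse_loss(pred_mask, gt_mask)
    l_vgg = sum(F.mse_loss(fp, fg)
                for fp, fg in zip(vgg_features(pred_rgb), vgg_features(gt_rgb)))
    return l_rgb, l_mask, l_vgg
```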
#### 4.1.2. Geometry
During the optimization of mesh vertices, it is necessary to constrain them in order to avoid self-intersections and to obtain a coherent shape. We follow (Luan et al., 2021; Worchel et al., 2022) and use a Laplacian smoothness regularizer where the magnitude of the differential coordinates of each vertex is minimized. For canonical vertices given by \(\{\mathbf{v}_{i}+\Delta\mathbf{v}_{i}|i=1\dots M\}\), the regularizer is defined as:
\[\mathcal{L}_{laplacian}=\frac{1}{M}\sum_{i=1}^{M}||\delta_{i}||_{2}^{2} \tag{17}\]
where \(\delta_{i}=(LV)_{i}\) are the differential coordinates of the \(i\)-th vertex, and \(L\in\mathbb{R}^{M\times M}\) the graph Laplacian of the mesh (Sorkine, 2005). Additionally, we apply a normal consistency term (Luan et al., 2021; Worchel et al., 2022) that enforces cosine similarity between neighboring face normals and is given by:
\[\mathcal{L}_{normal}=\frac{1}{|\mathcal{F}|}\sum_{(i,j)\in\mathcal{F}}(1- \mathbf{n}_{i}\cdot\mathbf{n}_{j})^{2}. \tag{18}\]
\(\mathcal{F}\) is the set of triangle pairs that share an edge, and \(n_{i}\) is the normal of triangle \(i\).
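Both regularizers are straightforward to express on top of precomputed mesh connectivity; the sketch below assumes that a sparse graph Laplacian and a list of adjacent-face index pairs are available.

```python
import torch


def laplacian_loss(vertices, laplacian):
    """Mean squared norm of the differential coordinates delta = L V (Eq. 17)."""
    delta = torch.sparse.mm(laplacian, vertices)        # (M, 3)
    return (delta ** 2).sum(-1).mean()


def normal_consistency_loss(face_normals, adjacent_face_pairs):
    """Cosine-based consistency between normals of faces sharing an edge (Eq. 18)."""
    n_i = face_normals[adjacent_face_pairs[:, 0]]
    n_j = face_normals[adjacent_face_pairs[:, 1]]
    return ((1.0 - (n_i * n_j).sum(-1)) ** 2).mean()
```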
#### 4.1.3. Deformation
We regularize the blendshapes and skinning weights similar to (Zheng et al., 2022) as follows:
\[\mathcal{L}_{flame}=\frac{1}{M}\sum_{i=1}^{M}\big(\lambda_{e}||\mathcal{E}_{i}-\hat{\mathcal{E}}_{i}||_{2}+\lambda_{p}||\mathcal{P}_{i}-\hat{\mathcal{P}}_{i}||_{2}+\lambda_{w}||\mathcal{W}_{i}-\hat{\mathcal{W}}_{i}||_{2}\big), \tag{19}\]
where \(\mathcal{L}_{flame}\) regularizes \(\mathcal{E}\), \(\mathcal{P}\) and \(\mathcal{W}\) with pseudo ground-truth \(\hat{\mathcal{E}}\), \(\hat{\mathcal{P}}\) and \(\hat{\mathcal{W}}\), obtained from the nearest vertex of the FLAME (Li et al., 2017) template. Here, \(\lambda_{e}=50,\lambda_{p}=50,\lambda_{w}=2.5\).
#### 4.1.4. Material Regularization
We apply a white light regularization over the diffuse shading as in (Munkberg et al., 2022):
\[\mathcal{L}_{\text{light}}=\frac{1}{3}\sum_{i=0}^{2}\Big|\overline{e}_{i}-\frac{1}{3}\sum_{j=0}^{2}\overline{e}_{j}\Big|, \tag{20}\]
where \(\overline{e}_{i}\) is the per-channel average intensity. Additionally, we regularize the specular intensity \(k\) by computing the z-score of our predicted specular intensities relative to a Gaussian distribution based on the MERL / ETH Skin Reflectance Database (Weyrich et al., 2006). The dataset provides specular intensity measurements for 156 faces, with a mean value of 0.3753 and a standard deviation of 0.1655. The regularization is defined as:
\[\mathcal{L}_{\text{spec}}(\mathbf{x_{c}})=\frac{k(\mathbf{x_{c}})-0.3753}{0.1655}. \tag{21}\]
We employ a similar strategy to regularise the roughness. However, since the statistics reported in (Weyrich et al., 2006) are computed for the Torrance-Sparrow model, we empirically set the mean to \(\mu_{\text{rough}}=0.5\) and standard deviation to \(\sigma_{\text{rough}}=0.1\) through visual evaluation. We provide an ablation study in Section 5.4.1 to support the choice of this hyper-parameter. The loss is defined as:
\[\mathcal{L}_{r}(\mathbf{x_{c}})=\frac{r(\mathbf{x_{c}})-\mu_{\text{rough}}}{\sigma_{\text{rough}}}. \tag{22}\]
Finally, we enforce a smoothness constraint for both albedo and roughness similar to (Munkberg et al., 2022), with an additional robust term (Barron, 2019) that helps preserve high-frequency details. Specifically, for each canonical point \(\mathbf{x_{c}}\in\mathbb{R}^{3}\) we compute a random displacement vector \(\epsilon\in\mathbb{R}^{3}\) sampled from a Gaussian distribution, and compute the albedo (\(\rho\)) and roughness (\(r\)) for both points. We apply an L1 loss between these two to enforce smoothness within neighboring points as follows:
\[\mathcal{L}_{smooth}^{\rho}(\mathbf{x_{c}})=f_{robust}(||\rho(\mathbf{x_{c}})-\rho(\mathbf{x_{c}}+\epsilon)||_{1}), \tag{23}\]
\[\mathcal{L}_{smooth}^{r}(\mathbf{x_{c}})=f_{robust}(||r(\mathbf{x_{c}})-r(\mathbf{x_{c}}+\epsilon)||_{1}), \tag{24}\]
where \(f_{robust}\) is the adaptive robust loss function of (Barron, 2019).
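A sketch of this regularizer, reusing the material network from above; the displacement scale and the placeholder used for the robust function are assumptions, and the analogous roughness term is omitted.

```python
import torch


def material_smoothness_loss(material_net, x_c, sigma=0.01, robust_fn=torch.abs):
    """Albedo smoothness between a canonical point and a jittered neighbor;
    `robust_fn` stands in for the adaptive robust loss of (Barron, 2019)."""
    eps = torch.randn_like(x_c) * sigma                 # random displacement
    albedo_a, _, _ = material_net(x_c)
    albedo_b, _, _ = material_net(x_c + eps)
    return robust_fn((albedo_a - albedo_b).abs().sum(-1)).mean()
```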
#### 4.1.5. Loss function
\[\mathcal{L}=\lambda_{RGB}\mathcal{L}_{RGB}+\lambda_{agg}\mathcal{L}_{agg}+\lambda_{mask}\mathcal{L}_{mask}+\lambda_{flame}\mathcal{L}_{flame}+\lambda_{laplacian}\mathcal{L}_{laplacian}+\lambda_{normal}\mathcal{L}_{normal}+\lambda_{smooth}\mathcal{L}_{smooth}+\lambda_{r}\mathcal{L}_{r}+\lambda_{spec}\mathcal{L}_{spec}+\lambda_{light}\mathcal{L}_{light}, \tag{25}\]
where \(\{\lambda_{i}\in\mathbb{R}\}\) weigh the importance of the corresponding terms and we empirically determined them as: \(\lambda_{RGB}=1.0\), \(\lambda_{agg}=0.1,\lambda_{mask}=2.0,\lambda_{flame}=5.0,\lambda_{laplacian} =60.0,\lambda_{normal}=0.1,\lambda_{smooth}=0.01,\lambda_{r}=0.01,\lambda_{ spec}=0.01,\lambda_{light}=0.01\).
### Training Details
We train FLARE using differentiable rendering to compare the predicted images with ground-truth frames. Given the current canonical mesh \(\{\mathbf{v_{i}}+\Delta\mathbf{v_{i}}|i=1\dots M\}\), we first estimate expression blendshapes \(\mathcal{E}\), pose correctives \(\mathcal{P}\) and blend skinning weights \(\mathcal{W}\) through a forward pass of the deformation network, \(\mathcal{D}(\mathbf{v_{i}}+\Delta\mathbf{v_{i}})\rightarrow(\mathcal{E},\mathcal{P},\mathcal{W})\). With the expression and pose parameters \(\psi,\theta\), we obtain deformed vertex positions \(\tilde{\mathbf{v}}_{i}\) using the FLAME function in Equation 1, \(\tilde{\mathbf{v}}_{i}=\textit{FLAME}(\mathbf{v_{i}}+\Delta\mathbf{v_{i}},\mathcal{E},\mathcal{P},\mathcal{W},\theta,\psi)\). Following a deferred shading pipeline, the deformed vertices are rasterized to obtain triangle indices and barycentric coordinates per pixel. We then interpolate and obtain the corresponding canonical point locations \(\mathbf{x_{c}}\), deformed point locations \(\mathbf{x_{d}}\) (used to compute \(\omega_{o}\)), and deformed normals \(\mathbf{n_{d}}\) for each pixel. Next, we compute material properties by querying the material network \(\mathcal{M}(\mathbf{x_{c}})\rightarrow(\rho,r,k)\) (Equation 7). Finally, we query the lighting MLP \(\mathcal{L}(\omega_{r},r)\) using the deformed normals \(\mathbf{n_{d}}\) to obtain the pre-filtered environment term of Equation 8 and the diffuse shading of Equation 5. The final color for the pixel is computed using Equation 13. Our framework is implemented in PyTorch and trained using a single A100 Nvidia GPU with 80GB of memory and a batch size of 4 images per iteration.
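The steps above can be summarized as the following outline of one forward pass; every argument (e.g. `flame_deform`, `rasterizer`, the `shade` sketch given earlier) is a placeholder rather than the actual implementation.

```python
def render_frame(canonical_mesh, deform_net, material_net, light_mlp, fg_lut,
                 flame_deform, shade_fn, rasterizer, flame_pose, flame_expr, camera):
    """One training forward pass (sketch; all modules and helpers are injected)."""
    v_c = canonical_mesh.vertices()                      # canonical vertices
    E, P, W = deform_net(v_c)                            # per-vertex blendshapes / weights
    v_d = flame_deform(v_c, E, P, W, flame_pose, flame_expr)        # Eq. (1)

    # Deferred shading: rasterize once, then shade per pixel.
    gbuffer = rasterizer(v_d, canonical_mesh.faces, camera)
    x_c = gbuffer["canonical_position"]                  # interpolated canonical coords
    n_d = gbuffer["normal"]                              # deformed normals
    w_o = gbuffer["view_dir"]                            # direction towards the camera

    albedo, roughness, spec = material_net(x_c)          # Eq. (7)
    rgb = shade_fn(albedo, roughness, spec, n_d, w_o, light_mlp, fg_lut)   # Eq. (13)
    return rgb, gbuffer["mask"]
```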
#### 4.2.1. Two-stage training
To learn high-frequency facial features and to enable fast rendering, we incorporate hash-grid encoding (Muller et al., 2022) for the material MLP, \(\mathcal{M}\). However, we found that this approach overfits to colors quickly, learning texture much faster than the geometry, resulting in smoother shapes of lower quality. To address this, we design a two-stage training approach. During the first stage, we equip the material MLP with positional encoding (Mildenhall et al., 2020) and jointly optimize the geometry \(\Delta\mathbf{v_{i}}\), deformation \(\mathcal{D}\), material \(\mathcal{M}\), and lighting \(\mathcal{L}\). The first stage can achieve detailed geometry but often learns blurry texture. During the second stage of training, we leverage the pre-trained mesh geometry and deformation from the previous stage and re-optimize both material and lighting MLPs, where \(\mathcal{M}\) is now equipped with high-resolution hash-grid encoding (Muller et al., 2022). With the proposed two-stage training, our method can achieve both high-fidelity geometry and realistic texture (See Fig. 11).
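The two-stage schedule can be summarized as follows; the constructor and trainer interfaces are placeholders, and freezing geometry and deformation after the first stage reflects our reading of the text.

```python
def fit_avatar(frames, train_fn, make_material_mlp, geometry, deform_net,
               light_mlp, iters_stage1, iters_stage2):
    """Two-stage optimization sketch: positional encoding first, hash grid second."""
    # Stage 1: jointly optimize geometry offsets, deformation, material (PE), lighting.
    material = make_material_mlp(encoding="positional")
    train_fn(frames, [geometry, deform_net, material, light_mlp], iters_stage1)

    # Stage 2: keep the stage-1 geometry and deformation fixed, swap in a
    # hash-grid-encoded material MLP, and re-optimize material and lighting.
    material = make_material_mlp(encoding="hash_grid")
    train_fn(frames, [material, light_mlp], iters_stage2)
    return geometry, deform_net, material, light_mlp
```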
## 5. Evaluation
In this section, we present qualitative and quantitative results of FLARE. First, we show qualitative examples of the individual components, including geometry, albedo, roughness, diffuse and specular shading, as well as relit images (Sec. 5.2). Next, we compare our results with the state-of-the-art (SOTA) baselines in terms of image quality and albedo, as well as geometric accuracy (Sec. 5.3). Finally, we conduct an ablation study to evaluate our design choices in Sec. 5.4. All the results in this section are generated using frames from the test set. For each test frame, we obtain FLAME parameters (pose and expression) from the pre-processing step and animate the personalized canonical representation (geometry) estimated by each baseline method. These animated renderings are relit with novel environment maps. We include additional results in the supplementary video.
### Dataset
We use 2 subjects released by (Zheng et al., 2022), 2 by (Zheng et al., 2023) (where 1 subject is captured by a webcam), and 1 by (Zielonka et al., 2023). We additionally capture 15 subjects with a smartphone to demonstrate the robustness of FLARE for diverse skin tones and shapes. We follow the protocol of (Zheng et al., 2022, 2023) for the capture and obtain an average of 3000-4000 frames for training and around 1000-3000 frames for testing. These new subjects gave prior informed written consent for their data to be used for academic research purposes. In total, we conduct the evaluations for 20 subjects. To measure geometric accuracy we use a dataset
of synthetic heads (Briceno and Paul, 2019; Grassal et al., 2022), in which each head has 200 frames for training and 200 frames for testing.
### Qualitative Evaluation: Intrinsic Materials and Relighting
The intrinsic material properties (albedo, roughness, and specular intensity) and relit faces are visualized in Figures 4 and 5. The rendered albedo images in Figure 4 illustrate that FLARE is capable of removing evident shadows and specular highlights in the face region; e.g., see the subject in the second row. The influence of the predicted roughness values can be visualized in the relit images: the teeth of the subject in the first row and the hair of the subject in the second row are correctly predicted as shiny surfaces (lower roughness values), which results in realistic reflections when relit with new environment lighting. Finally, the robustness of FLARE is demonstrated across different skin tones, skin textures, hair types, hair styles, facial hair, and even accessories such as a cap. Despite having a single monocular video as input, FLARE computes geometries and materials that enable realistic and plausible relighting.
#### 5.2.1. Comparison with PointAvatar
To the best of our knowledge, PointAvatar (Zheng et al., 2023) is the only other neural avatar method trained from a monocular video that disentangles diffuse shading from albedo. Thus, we qualitatively evaluate the albedo and shading of FLARE in Figure 5 by comparing it with PointAvatar. We relight the renderings of PointAvatar using a Lambertian shading model, where we use the predicted surface normals to obtain diffuse shading. From Figure 5, we observe that the albedo estimated by PointAvatar is biased towards light skin tones and fails to capture the skin color of the subjects. In comparison, the albedo estimated by FLARE resembles the color of the subject, and much of the shading is removed.
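For reference, Lambertian relighting of this kind reduces to modulating the albedo by a cosine-weighted sum of the incoming radiance; a simple sketch over a discretely sampled environment (the sampling and weighting are assumptions of this sketch) is:

```python
import numpy as np

def lambertian_relight(albedo, normals, env_dirs, env_radiance):
    """Diffuse-only relighting: c = (albedo / pi) * sum_i L_i * max(0, n . w_i) * dOmega.

    albedo, normals: (H, W, 3); env_dirs: (N, 3) unit directions; env_radiance: (N, 3).
    A uniform solid-angle weight of 4*pi/N is assumed for the environment samples.
    """
    cosines = np.clip(normals @ env_dirs.T, 0.0, None)                   # (H, W, N)
    irradiance = cosines @ env_radiance * (4.0 * np.pi / len(env_dirs))  # (H, W, 3)
    return albedo / np.pi * irradiance
```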
Relighting FLARE's estimated materials results in natural looking images. This is due, in part, to the estimated specular highlights, which are absent in PointAvatar's formulation. The specular highlights are visible in the 5th row of Figure 5, on the left and right cheeks of the first subject from the left, and in the rightmost subject, who has smooth and shiny hair that reflects the environment's light.
### Comparisons with State-of-the-Art
In this section, we compare the results of FLARE with the following state-of-the-art (SOTA) methods for neural head avatar estimation from videos: (1) IMavatar (Zheng et al., 2022) and (2) PointAvatar (Zheng et al., 2023), which use a deformation module similar to ours, with a signed distance function (SDF) and point cloud representation for geometry, respectively; (3) NHA (Grassal et al., 2022), which employs a mesh representation along with an alternating training strategy between geometry and color; and (4) INSTA (Zielonka et al., 2023), which learns an animatable avatar using a NeRF (Mildenhall et al., 2020) representation and leverages the InstantNGP framework (Muller et al., 2022) for faster optimization. NHA and PointAvatar employ test-time optimization of the expression and pose parameters due to noisy pre-processing estimates. Hence, we report the optimized quantitative evaluations for both NHA and PointAvatar to retain their best performance. However, it must be noted that the reported results of FLARE, IMavatar, and INSTA are _not_ optimized at test time.
#### 5.3.1. Image quality
Figure 4. Qualitative results. The first three rows illustrate our intrinsic material estimates (albedo, roughness, and specular intensity) for three different subjects. The next three rows show the above subjects in the same pose and expression under 4 different environment maps.
First, we compare FLARE with SOTA methods with respect to image quality. We use the same FLAME parameters sampled from the test set on all baselines and measure the accuracy against the ground-truth frames by using mean absolute error (L1 distance), PSNR, structural similarity index measure (SSIM) (Wang et al. 2004), and perceptual loss (LPIPS) (Zhang et al. 2018). The evaluations are computed only on the masked regions for all the methods. Qualitative results can be found in Figure 6, and quantitative results are shown in Table 2.
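For the masked evaluation, the per-frame L1 and PSNR can be computed as in the sketch below; SSIM and LPIPS come from standard packages and are omitted here.

```python
import numpy as np

def masked_l1_psnr(pred, gt, mask):
    """L1 and PSNR restricted to the foreground mask (images assumed in [0, 1]).

    pred, gt: (H, W, 3) float arrays; mask: (H, W) boolean array.
    """
    p, g = pred[mask], gt[mask]
    l1 = np.abs(p - g).mean()
    mse = ((p - g) ** 2).mean()
    psnr = 10.0 * np.log10(1.0 / max(mse, 1e-12))
    return l1, psnr
```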
We make the following observations in comparison to prior art: (1) In terms of image quality, the baselines perform approximately on par with each other, with FLARE obtaining the highest score over half the metrics and second highest over the other half. (2) Despite PointAvatar's ability to capture high-frequency texture details, the point cloud representation, containing approximately 400k points, is susceptible to sparsity at extreme jaw or neck poses. From Figure 6, 4th row and 5th column, we can observe the artifacts that occur at extreme poses, producing a salt-and-pepper-like noise. In comparison, our mesh representation inherently solves the sparsity issue with approximately 11k vertices (we evaluate mesh resolution in Sec. 5.4.4). (3) FLARE is able to capture high-frequency texture details better than IMavatar and this is evidenced qualitatively as well as quantitatively, where IMavatar has the weakest LPIPS score. (4) INSTA can converge quickly and produces visually convincing expressions and high-quality texture with forward-facing poses. However, at extreme neck poses we observe noisy texture, possibly due to the volumetric representation that fails to extrapolate well. (5) NHA also employs a mesh-based representation to learn the geometry. However, the predicted mesh is unable to capture high-fidelity details as well as the baselines and produces an over-smoothed representation. We believe that this is a result of their training strategy in which the geometry is primarily supervised with pseudo-normals from (Abrevaya et al. 2020), unlike the rest of the baselines, which learn geometry exclusively via inverse rendering. Instead, we carefully consider how fast the texture network is trained compared to the geometry network, and we observe that this was helpful in achieving high-fidelity geometry. We evaluate our training strategy further in Sec. 5.4.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & L1 \(\downarrow\) & LPIPS \(\downarrow\) & SSIM \(\uparrow\) & PSNR \(\uparrow\) \\ \hline IMavatar & 0.0290 & 0.2091 & 0.8491 & 23.0975 \\ NHA & 0.0265 & 0.1243 & 0.8390 & 22.7071 \\ PointAvatar & 0.0234 & 0.1400 & 0.8391 & 24.6520 \\ INSTA & 0.0290 & 0.1607 & 0.8379 & 23.6279 \\ \hline Ours & 0.0245 & 0.1225 & 0.8421 & 24.7845 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Quantitative comparisons in terms of image quality on real data. The evaluations are performed only on the face region. Red color indicates the best value, yellow second best, and light yellow is the third best.
Figure 5. Qualitative comparision with PointAvatar. The first two rows show albedo and diffuse shading estimated by FLARE compared with PointAvatar (Zheng et al. 2023). The next row shows the roughness and specular intensity (Spec. Intensity) estimated by FLARE for the same subjects as above. The bottom two rows contain relighting results of FLARE and PointAvatar for the same subjects, animated with test poses and expressions.
#### 5.3.2. Geometric Accuracy
Figure 6. Qualitative comparison between FLARE and state-of-the-art methods. The canonical representation of each baseline method is animated using test poses and expressions. Odd columns: generated images. Even columns: generated normals.
We quantitatively evaluate the geometry quality on synthetic heads using the renderings generated by the authors of NHA with the open source _MakeHuman_ project (Briceno and Paul, 2019). Geometric accuracy is measured using the cosine similarity between the ground truth and predicted normals. Results are shown in Table 3 and Figure 7. IMavatar and FLARE exhibit high-fidelity normals that resemble the input identity and obtain relatively close scores quantitatively. However, IMavatar is approximately 200 times slower to train than FLARE, mainly due to the root-finding step during ray tracing between deformed and canonical points. Moreover, it uses an SDF representation that requires a post-processing step to obtain a mesh, while FLARE can be trained in approximately 15 minutes and directly produces a canonical mesh that can be animated. INSTA, on the other hand, exhibits noisy shapes that can be observed in both Figure 7 and Figure 6, and the normals of NHA do not completely capture the identity. Note that the synthetic heads have smooth geometry and, consequently, most methods do well with only small numerical differences between methods.
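The cosine-similarity metric amounts to averaging the dot product of unit-normalized predicted and ground-truth normals, here over an assumed foreground mask:

```python
import numpy as np

def normal_cosine_similarity(pred_n, gt_n, mask):
    """Mean cosine similarity between predicted and ground-truth normal maps.

    pred_n, gt_n: (H, W, 3) arrays; mask: (H, W) boolean array (an assumption
    of this sketch; the exact evaluation region may differ).
    """
    p = pred_n[mask]
    g = gt_n[mask]
    p = p / np.linalg.norm(p, axis=-1, keepdims=True)
    g = g / np.linalg.norm(g, axis=-1, keepdims=True)
    return float((p * g).sum(-1).mean())
```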
#### 5.3.3. Training Time
Figure 3 plots the training time of each method against image quality (LPIPS) and geometric quality (cosine similarity). The plot is computed over the same data as Tables 2 and 3. We find that FLARE can be trained almost as quickly as INSTA but with better performance in terms of image quality and state-of-the-art results in terms of geometry.
### Ablation Study
#### 5.4.1. Loss Functions
We evaluate the contribution of the terms in the loss function that are not adopted by prior avatar methods but are crucial in our setting.
Specular Intensity Regularization \(\mathcal{L}_{spec}\). The specular intensity \(k\) controls the intensity of the specular highlights. In Figure 8 we qualitatively evaluate the effectiveness of using the regularizer and show relighting results for one subject with and without the specular intensity regularization. We observe the occurrence of unnaturally sharp highlights that have high intensity around the subject's lower lip and cheek regions when specular intensity is left unconstrained. Constraining it with \(\mathcal{L}_{spec}\) makes the non-Lambertian effects more subtle and natural.
\begin{table}
\begin{tabular}{l|r r r r} \hline \hline & Female 1 & Female 2 & Male 1 & Male 2 \\ \hline IMavatar & 0.961 & 0.966 & 0.954 & 0.955 \\ NHA & 0.94 & 0.95 & 0.94 & 0.94 \\ PointAvatar & 0.954 & 0.954 & 0.944 & 0.958 \\ INSTA & 0.665 & 0.751 & 0.757 & 0.713 \\ \hline Ours & 0.950 & 0.955 & 0.948 & 0.953 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Quantitative comparisons in terms of geometric accuracy on a synthetic dataset. Showing cosine similarity compared to ground-truth normals (higher is better). Red color indicates the highest value, yellow second highest and light yellow is third.
Figure 7. Qualitative comparisons on synthetic data. The estimated surface normals and texture on synthetic data are compared with state-of-the-art methods by animating the canonical representation using test poses and expressions. Our method can capture high-fidelity geometry as well as color. GT = Ground Truth.
Roughness Regularization \(\mathcal{L}_{r}\). To regularize the roughness we employ a statistical approach similar to specular intensity. However, we know of no suitable database of statistical values for roughness that can be used to regularize the appearance model. Hence, we employ an empirical mean with a fixed standard deviation of 0.1, and evaluate the results of using different mean values in Table 4. Additionally, we evaluate the results of not using this regularization and show qualitative results in Figure 9. We observe a similar behavior as with specular intensity when roughness is left unconstrained. The final numerical prediction of each subject is not affected by a large margin since the network learns to compensate for wrong predictions with other estimations. However, Figure 9 reveals that the regularizer helps produce visually realistic renderings.
#### 5.4.2. Standard PBR vs. Lighting MLP
We compare our proposed approach with a method that estimates a standard texture-based environment map for training, which will be referred to as "Standard PBR". For this experiment we use the same hyper-parameters, loss functions, training protocol, and geometric representation of our method, and replace the pre-filtered light MLP \(\mathcal{L}\) with a learnable texture of the environment map, where the integral is solved with the approach proposed by [14]. In Figure 10 we visualize an example of geometry and relighting obtained with both methods. We observe that the standard PBR results in noisy texture and geometry predictions that are evident after relighting the subject. This is probably due to the redundant calculations of the regions in the environment map that are never observed in our monocular setting, creating instability in the optimization process. Further, we can observe that the input image in Figure 10 (bottom left) is captured such that the main light source is from the right of the subject. However, the environment maps have the main light source coming from the left. Here, PBR exhibits shadows in the texture that are retained from the original input data; for instance, see the shadowing on the nose. This is also observed in the estimated albedo, and it is not prominent in our results.
#### 5.4.3. Two-stage training
Through the course of our experiments, we noticed that it is necessary to control the speed at which the texture is learned in order to obtain both good geometry and albedo. In particular, using a hash-grid positional encoding [13] results in better image quality, and the method converges very fast. However, this results in noisier geometries since there is not enough gradient signal coming from the color supervision. This behavior can be observed in the first column of Figure 11, where a high-quality rendered image corresponds to a relatively noisy geometry. On the other hand, using a standard positional encoding [12] (second column in Figure 11) converges slower and leads to blurry textures, but learns geometric details from the observed images. Our two-stage training approach achieves the best of both options, as shown in the last column of Figure 11.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Mean & \multirow{2}{*}{L1 \(\downarrow\)} & \multirow{2}{*}{LPIPS \(\downarrow\)} & \multirow{2}{*}{SSIM \(\uparrow\)} & \multirow{2}{*}{PSNR \(\uparrow\)} \\ Roughness & & & & \\ \hline
0.3 & 0.0257 & 0.1014 & 0.8772 & 24.341 \\
0.4 & 0.0250 & 0.1031 & 0.8744 & 24.268 \\
0.5 & **0.0239** & **0.0941** & 0.8834 & 24.847 \\
0.6 & 0.0225 & 0.0954 & **0.8824** & **25.023** \\
0.7 & 0.0247 & 0.0985 & 0.8790 & 24.471 \\ \hline w/o & \multirow{2}{*}{0.0265} & \multirow{2}{*}{0.1028} & \multirow{2}{*}{0.8738} & \multirow{2}{*}{23.984} \\ regularization & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4. Ablation of \(\mathcal{L}_{r}\). Influence of \(\mathcal{L}_{r}\) on image quality when using different mean roughness values.
Figure 8. Ablation of \(\mathcal{L}_{spec}\). Qualitative comparison of relighting results with and without the specular intensity regularizer. Results indicate that it is necessary to constrain the specular intensity statistically to avoid unrealistically sharp highlights.
Figure 10. Ablation Study. Comparison against learning a full environment map (“standard PBR”), as in [14]. Using this representation typically results in noisier geometry and color. From left to right: predicted geometry, predicted albedo, relighting under two different environment maps. The bottom row shows the input test image, and the two environment maps.
Figure 9. Ablation of \(\mathcal{L}_{r}\). Qualitative comparison of relighting results with and without the roughness regularizer. Results indicate that it is necessary to constrain the roughness to ensure non-Lambertian reflections on the skin look plausible.
#### 5.4.4. Mesh Upsampling
The FLAME mesh contains 5023 vertices that model the face and neck region, without hair or shoulders. In our setting, we learn the geometry of the subjects including diverse hair types and hairstyles, facial hair, head accessories, and part of the shoulder. However, optimizing with only 5023 vertices results in a smooth coarse geometry, as illustrated in Figure 12. The output mesh appears smooth as the vertices around the shoulder and hair region are stretched out to form triangles occupying a large area. To capture the high-fidelity geometric details of the subject, we increase the resolution by upsampling the mesh (Botsch and Kobbelt, 2004) to around 11k vertices. This improves the quality in the hair, neck, and shoulder regions. Note that our resolution is lower than the roughly 16K vertices used by NHA, yet our geometric quality is higher.
## 6. Limitations and Future Work
FLARE can be trained in around 15 minutes and produces competitive results compared to methods that generate high-fidelity geometry at the expense of longer training times (on the order of days). However, there are still limitations, as shown in Figure 13. Firstly, the quality of the eyes and mouth interior needs improvement. These are challenging areas due to their complex material properties, and most neural avatar methods currently struggle with modeling these. For the mouth interior, an additional challenge comes from the fact that the teeth are exposed to varying degrees during training and this varies significantly between subjects. When a person does not smile with their teeth or does not articulate sufficiently, then the model does not have enough information to correctly reproduce the tooth color and geometry. Further, the FLAME mesh does not have vertices in the mouth interior and thus, during rasterization, there are no vertices projected onto the image of the mouth, resulting in no gradient being propagated there. Our remeshing step partly addresses this problem and, for some subjects, there are vertices formed around the teeth. However, modeling the teeth remains a challenging task due to the constant motion of the lips and limited supervision. Similarly, the eye area exhibits challenging photometric properties that are not always captured by our method. In addition, our pre-processing step does not track eye blinking, resulting in inevitable errors during optimization that yield a noisy geometry around the eyes. Future work should develop techniques that can enhance the estimation of the mouth and eye area, in both photometric and geometric respects.
Second, capturing harsh neck shadows, self-shadows, and sharp specular highlights is difficult as demonstrated in Figure 13. We can remove shadows cast on the face region as the subject moves their head in various directions. However, the shoulder and neck areas remain mostly static and shadows are baked in. Additionally, although non-Lambertian reflections that look plausible can be captured by our method during relighting due to the estimated materials, we miss reproducing the sharp specular highlights of the ground truth. This is due to several approximations that we make to model the pre-filtering of the environment and to simplify the integral of the rendering equation. Finally, our method does not model more subtle skin properties such as sub-surface scattering, or time-dependent appearance changes. We hypothesize that this could enhance realism and, consequently, the estimated geometry. We believe this is an interesting direction to pursue in the future.
Figure 11. Ablation Study. Qualitative comparison between the hash-grid encoding of (Müller et al., 2022), the positional encoding of (Mildenhall et al., 2020) (“PE”), and our two-stage approach. Top row: estimated normals; bottom row: estimated rendering.
Figure 12. Ablation Study: Mesh resolution. Qualitative comparison between surface normals of two subjects with and without upsampling the mesh. This figure demonstrates that upsampling the FLAME mesh to roughly 11K vertices helps capture high-fidelity geometry.
Figure 13. Limitations. Modeling the mouth interior and eyes is challenging due to their complex material properties, variation in appearance (e.g., subjects 1, 2, and 3 have different-sized teeth), and the fact that we do not model eye blinks (e.g., subject 3). Capturing sharp specular highlights is also challenging due to the approximations made by our lighting model (subjects 3 and 4).
## 7. Ethics
The goal of FLARE is to enable fast, subject-specific, avatar creation that can be used to generate novel expressions and to place the avatars in different scenes. This capability, however, opens the door to potential misuse, where new malicious content of the training subject can be generated without their consent. Although the quality of FLARE still exhibits identifiable artifacts signaling its AI origin, the rapid progression of the field suggests these cues may diminish over time. Addressing this remains an important technical and legal challenge.
## 8. Conclusion
In this work we presented FLARE, a new method for building animatable and relightable head avatars from monocular video in 15 minutes. Our approach directly produces a mesh representation that can be efficiently rendered and animated, along with material parameters that allow the avatars to be placed in scenes under novel illumination. This is achieved by combining traditional computer graphics methods for rendering with neural networks that approximate some of the components. More specifically, we optimize a canonical mesh geometry while approximating the expression deformations, albedo, roughness and specular intensity values using coordinate-based MLPs. Further, we avoid explicitly computing an environment map from a narrow field of view by approximating the pre-filtered environment map in the split-sum formulation with a neural network. Finally, we propose a two-stage approach designed to control the pace at which geometry and texture are learned relative to one another. Our experimental results show that we can obtain mesh avatars of high geometric and image fidelity. Once learned, the avatars can be readily inserted and rendered in arbitrary scenes using standard graphics pipelines, enabling downstream applications in gaming, film production and telepresence.
###### Acknowledgements.
We thank Jacob Munkberg, Jon Hasselgren, and Pramod Rao for fruitful discussions and Wojciech Zielonka for discussions regarding the baseline. We thank Asuka Bertler, Claudia Gallatz, Taylor McConnell, and Markus Hoschle for their support with data collection and thank all the participants for their time. We thank Haoran Yun, Peter Kultis, and Nikos Athanasiou for additional support.
###### Disclosure.
MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a consultant for Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.
|
2302.00274 | Identifying the SN 2022acko progenitor with JWST | We report on analysis using the James Webb Space Telescope (JWST) to identify
a candidate progenitor star of the Type II-plateau supernova SN 2022acko in the
nearby, barred spiral galaxy NGC 1300. To our knowledge, our discovery
represents the first time JWST has been used to localize a progenitor system in
pre-explosion archival Hubble Space Telescope (HST) images. We astrometrically
registered a JWST NIRCam image from 2023 January, in which the SN was
serendipitously captured, to pre-SN HST F160W and F814W images from 2017 and
2004, respectively. An object corresponding precisely to the SN position has
been isolated with reasonable confidence. That object has a spectral energy
distribution (SED) and overall luminosity consistent with a single-star model
having an initial mass possibly somewhat less than the canonical 8 Msun
theoretical threshold for core collapse (although masses as high as 9 Msun for
the star are also possible); however, the star's SED and luminosity are
inconsistent with that of a super-asymptotic giant branch star which might be a
forerunner of an electron-capture SN. The properties of the progenitor alone
imply that SN 2022acko is a relatively normal SN II-P, albeit most likely a
low-luminosity one. The progenitor candidate should be confirmed with follow-up
HST imaging at late times, when the SN has sufficiently faded. This potential
use of JWST opens a new era of identifying SN progenitor candidates at high
spatial resolution. | Schuyler D. Van Dyk, K. Azalee Bostroem, WeiKang Zheng, Thomas G. Brink, Ori D. Fox, Jennifer E. Andrews, Alexei V. Filippenko, Yize Dong, Emily Hoang, Griffin Hosseinzadeh, Daryl Janzen, Jacob E. Jencson, Michael J. Lundquist, Nicolas Meza, Dan Milisavljevic, Jeniveve Pearson, David J. Sand, Manisha Shrestha, Stefano Valenti, D. Andrew Howell | 2023-02-01T06:37:00Z | http://arxiv.org/abs/2302.00274v2 | # Identifying the SN 2022acko progenitor with _Jwst_
###### Abstract
We report analysis using the _James Webb Space Telescope_ (_JWST_) to identify a candidate progenitor star of the Type II-plateau supernova SN 2022acko in the nearby, barred spiral galaxy NGC 1300. To our knowledge, our discovery represents the first time _JWST_ has been used to localize a progenitor system in pre-explosion archival _Hubble Space Telescope_ (_HST_) images. We astrometrically registered a _JWST_ NIRCam image from 2023 January, in which the SN was serendipitously captured, to pre-SN _HST_ F160W and F814W images from 2017 and 2004, respectively. A star corresponding precisely to the SN position has been isolated with reasonable confidence, although a \(\sim 2.9\sigma\) difference exists between the measured position for the star from _HST_ and the transformed SN position from _JWST_. That star has a spectral energy distribution and overall luminosity consistent with a single-star model having an initial mass somewhat less than the canonical 8 M\({}_{\odot}\) theoretical threshold for core collapse, although the star's initial mass is inconsistent with that of a super-asymptotic giant branch star which might be a forerunner of an electron-capture SN. The properties of the progenitor alone imply that SN 2022acko is a relatively normal SN II-P, albeit most likely a low-luminosity one. The progenitor candidate should be confirmed with follow-up _HST_ imaging at late times, when the SN has sufficiently faded. This potential use of _JWST_ opens a new era of identifying SN progenitor candidates at high spatial resolution.
keywords: supernovae: general - supernovae: SN 2022acko - stars: massive - stars: evolution
## 1 Introduction
Supernovae (SNe) are the catastrophic endpoints of the lives of some stars. It is widely believed that the total disruption of a white dwarf (WD), as a consequence of critical mass accretion from a dwarf or giant star (or a merger with another WD), leads to thermonuclear explosion as normal Type Ia SNe, which have proven to be highly valuable cosmological probes owing to their high, standardizable luminosity. Stars with initial masses \(M_{\rm ini}\gtrsim 8\) M\({}_{\odot}\) reach a point in their evolution at which nuclear burning can no longer support the inner core, and the collapse of the core to neutron degeneracy rapidly leads to the explosive removal of the outer stellar envelope -- a core-collapse SN (CCSN; e.g., Woosley & Weaver, 1986). CCSNe account for \(\sim 76\%\) of all SNe locally (Li et al., 2011; Graur et al., 2017). CCSNe are not homogeneous in their properties: both spectroscopically and photometrically they roughly divide further into the H-rich Type II and the H-poor Type Ib and Ic, with Type IIb constituting an intermediate grouping (see, e.g. Filippenko, 1997; Gal-Yam, 2017, for reviews of SN classification). Type II SNe have been classically separated into the Type II-plateau (II-P) and II-linear (II-L), based on their light-curve properties, although such distinctions in the modern view are debatable (Anderson et al., 2014; Valenti et al., 2016). Further subdivisions have emerged among the SNe II, Ib, and Ic with respect
to the presence of relatively narrow spectral emission features (the so-called "n" subtypes), interpreted as being indicative of the presence of dense circumstellar material.
Of utmost importance in understanding SNe in general, and CCSNe specifically, is mapping the various SN subtypes to classes of progenitor star. A number of indirect methods have been employed, but the most direct approach is to identify the actual star that has exploded. The lion's share of such detections has been made in pre-explosion _Hubble Space Telescope_ (_HST_) archival images. As a community, we have successfully demonstrated in more than 20 cases that SNe II-P, in particular, arise from massive stars in the red supergiant (RSG) phase, as expected theoretically (Smartt et al., 2009; Smartt, 2015; Van Dyk, 2017). The RSGs are likely in a limited range of \(M_{\rm{ini}}\), the low end at the core-collapse limit and \(\lesssim 17\) M\({}_{\odot}\) at the high end (although see Davies & Beasor, 2020). SNe IIn appear to arise potentially from a higher-mass stellar group, with remarkable outbursts observed before explosion, ostensibly as luminous blue variable stars (e.g., Gal-Yam & Leonard, 2009; Smith et al., 2022; Brennan et al., 2022; Jencson et al., 2022). For SNe Ib, a hot He-star progenitor was identified for one event (iPTF13bvn; e.g., Eldridge & Maund, 2016), whereas a cool, extended star was identified for another (SN 2019yvr; Kilpatrick et al., 2021). A potentially hot, luminous candidate progenitor has been isolated for one SN Ic (SN 2017ein; Van Dyk et al., 2018; Kilpatrick et al., 2018; Xiang et al., 2019), although this remains to be confirmed.
The most prevalent CCSNe, SNe II-P (\(\sim 48\%\) locally; Smith et al., 2011), are themselves also quite heterogeneous, most notably with a range in both peak and overall luminosities, as well as plateau duration (e.g., Anderson et al., 2014; Valenti et al., 2016; de Jaeger et al., 2019). At the low-luminosity end, the RSG progenitors have all been found to be at the lower end of the \(M_{\rm{ini}}\) range, such as SN 2005cs (9\({}^{+3}_{-2}\) M\({}_{\odot}\), Maund et al., 2005; \(10\pm 3\) M\({}_{\odot}\), Li et al., 2006), SN 2008bk (8-8.5 M\({}_{\odot}\), Van Dyk et al., 2012; 8-10 M\({}_{\odot}\), O'Neill et al., 2021), and SN 2018aoq (\(\sim 10\) M\({}_{\odot}\), O'Neill et al., 2019). For a nominal local Universe initial-mass function, we would expect lower-mass progenitors and, thus, low-luminosity SNe II to predominate in number. What makes this of critical interest is that these low masses are very near the expected boundaries between WD formation and core collapse, as well as the intermediate scenario of electron-capture (EC) explosions of super-AGB stars with O-Ne-Mg cores (e.g., Miyaji et al., 1980; Nomoto, 1984; Doherty et al., 2017). The recent case of SN 2018zd has brought this potential fine line to the fore: Hiramatsu et al. (2021) claimed, based on the SN's properties, that this SN was the first _bona fide_ example of an ECSN, whereas Zhang et al. (2020) argued that SN 2018zd had properties more consistent with those of normal SNe II-P. Both studies characterised the identified progenitor candidate as potentially being a super-AGB (SAGB) star; Van Dyk et al. (2023) have since confirmed the candidate as the progenitor. Nevertheless, probing where these boundaries exist and what factors govern them is a key area of exploration in our understanding of stellar astrophysics and evolution. Garnering further examples of low-luminosity SNe II-P and their progenitors is therefore important and welcome.
Here we present the identification of the progenitor of the SN II-P 2022acko. The precise locations of previous SNe in archival images have been established either using _HST_ or adaptive-optics ground-based observations of the SN itself. In this case, to our knowledge for _the first time_, we employ the novel method of pinpointing SN 2022acko in pre-SN _HST_ data via _JWST_ images containing the SN. SN 2022acko was first discovered on 2022 December 6 (UTC dates are used throughout this paper) at 16.5 mag, \(58.2^{\prime\prime}\) north and \(29.6^{\prime\prime}\) west of the nucleus of NGC 1300 (Lundquist et al., 2022), by the Distance Less Than 40 Mpc (DLT40) survey (Tartaglia et al., 2018). It was classified the following day as a young SN II-P (Li et al., 2022). The SN is also known as DLT22v, ATLAS22bumns, ZTF22abyivuq, PS22mpv, and Gaia23aap. A detailed description of the SN at early times, including analysis of _HST_ ultraviolet spectroscopy, will be presented by K. A. Bostroem et al. (2023, in prep.); they find that it is most consistent with a low-luminosity SN II-P. SN 2022acko is, as far as we know, the first historical SN to be discovered in this nearby, nearly face-on barred spiral galaxy.
## 2 Observations and reductions
### _HST_ data
The site of SN 2022acko was serendipitously covered in various _HST_ observations of the host galaxy by several different programs. We obtained all of these publicly available data from the Mikulski Archive for Space Telescopes (MAST). The first set of data was obtained with the now-decommissioned Wide-Field and Planetary Camera 2 (WFPC2) on 2001 January 6 (21.9 yr before explosion) in F606W by program GO-8597 (PI M. Regan). The second set was obtained with the Advanced Camera for Surveys in the Wide Field Channel (ACS/WFC) on 2004 September 21 (18.2 yr) in F435W, F555W, F658N, and F814W by GO/DD-10342 (PI K. Noll; these were "Hubble Heritage" observations); note that only the "NGC1300-POS1" field contains the relevant half of the galaxy with the SN site. Next, the host galaxy was observed with the Wide-Field Camera 3 IR channel (WFC3/IR) on 2017 October 27 (5.1 yr) in F160W by GO-15133 (PI P. Erwin). Observations were also obtained with the WFC3 UVIS channel (WFC3/UVIS) in F475W and F814W; however, these were in subarray mode, centered on the nucleus, so they missed the SN site. Finally, WFC3/UVIS observations were conducted, in full-array mode, on 2020 January 4 (2.9 yr) in F275W and F336W by GO-15654 (PI J. Lee; PHANGS-HST, Lee et al., 2022). We show portions of the image mosaics in F814W and F160W, containing the SN site, in Figure 1.
### _JWST_ data
The host galaxy was also observed on 2023 January 25, just 52 d after explosion, using _JWST_ with both the NIRCam and MIRI instruments by program GO-2107 (PI J. Lee; PHANGS-JWST, Lee et al., 2022). The bands used for NIRCam in both the short- and long-wavelength channels were F200W (9620 s), F300M (773 s), F335M (773 s), and F360M (859 s), whereas for MIRI, the bands were F770W, F1000W, F1130W, and F2100W. We obtained these publicly available data as Level-3 mosaics from MAST as soon as they appeared. The focus of this paper is on utilising the NIRCam data for the purpose of potentially identifying the SN progenitor, since those bands match up best in wavelength and at comparable spatial resolution with the available _HST_ bands, whereas the MIRI images do not meet these stipulations. We will therefore not be analyzing the MIRI data here at all.
The point-spread-function (PSF) profile of the SN in F200W, unfortunately, appears hopelessly saturated, as a result of the SN's brightness and exposure depth of the observations in that band, making it worthless for our purpose. On the contrary, the PSF was of high quality in the remaining medium bands. The first three medium bands are intended to sample various molecular emission features from components in the interstellar medium (water ice, PAHs/CH\({}_{4}\)), and the fourth band is intended for continuum emission from brown
dwarfs and planets. Nebular emission is apparent in the images in those first three bands; however, we selected F300M for use here, rather than the continuum band (F335M); any ice emission features in the data do not appear to be particularly strong, such that most of the observed flux in the image appears to arise from point sources. Furthermore, F300M is the one closest in wavelength to the reddest _HST_ band (F160W). We show a portion of the image mosaic in F300M, containing the SN, in Figure 1.
### _Spitzer_ data
Archival data containing the SN site from the post-cryogenic _Spitzer Space Telescope_ mission are publicly available from the NASA/IPAC Infrared Science Archive (IRSA), via the _Spitzer_ Heritage Archive. These data are in the 3.6 and 4.5 \(\mu\)m bands on 2009 October 4 (13.2 yr) from program 61009 (PI W. Freedman) and 2009 September 5 (13.3 yr) and October 10 (13.2 yr) from program 61065 (PI K. Sheth).
### Astrometry and photometry
In order to perform the necessary relative astrometry and determine which star, if any, could be considered a progenitor candidate, we first selected 20 relatively isolated stars in common spread spatially across the mosaics in both _HST_ F160W and _JWST_ F300M. We carefully eliminated consideration of any objects in the mosaics that were extended and did not have a clean stellar PSF profile. Using the task _geomap_ within PyRAF with second-order fitting, we were able to align the two mosaics with a formal 1\(\sigma\) root-mean-square (rms) uncertainty in the mean astrometry of 0.09 WFC3/IR pixel (or \(\sim\)12 mas). As mentioned above, the PSF of the SN itself in the F300M mosaic was very stellar-like, and we could therefore accurately measure its position in those _JWST_ data and transform its position with the task _geoxytran_ to the F160W mosaic. Doing so, we were able to isolate the star as indicated in Figure 1 (middle panel). We note, however, that the difference between the pixel position transformed from _JWST_ and the position measured from the F160W mosaic itself is 0.26 WFC3/IR pixel, which is a difference of \(\sim\)2.9\(\sigma\). We performed a similar alignment between the F814W and F300M mosaics, based on 15 stars in common, and obtain an rms uncertainty in the registration of 0.10 UVIS pixel and a positional offset between the measured position and the transformed one of 0.52 pixel, a \(\sim\)5.2\(\sigma\) difference. Similarly, we associated the detected star at F814W indicated in Figure 1 (left panel) with the SN. Furthermore, we consider it highly likely that the F160W and F814W detections are of the same star. The positional offsets, however, are somewhat disconcerting, yet no other source appears to exist at F160W or F814W closer to the transformed positions. We further note that the PSF of the object at both F160W and F814W appears stellar-like, although at the detection level for the star in both bands (see below), measuring an accurate centroid for the PSF was not straightforward. At the assumed distance to the host (see below) we cannot rule out that it is a blend or a compact star cluster (the 1.84-pixel full-width at half maximum, FWHM, of the PSF at F160W corresponds at that distance to \(\sim\)22 pc). Nevertheless, we proceed from this point assuming that this object is the SN progenitor candidate; of course, it remains to be confirmed that this is indeed the progenitor when the candidate is shown to have vanished (e.g., Van Dyk et al., 2023).
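For illustration, the second-order registration and the transfer of the SN position can be reproduced with a plain least-squares polynomial fit, as sketched below; this stands in for the PyRAF _geomap_/_geoxytran_ tasks and is not the pipeline actually used.

```python
import numpy as np

def fit_poly2(xy_src, xy_dst):
    """Least-squares 2nd-order 2D polynomial mapping src pixel coords to dst coords."""
    x, y = xy_src.T
    A = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, xy_dst, rcond=None)   # shape (6, 2)
    return coeffs

def apply_poly2(coeffs, xy):
    """Transform one or more (x, y) positions with the fitted polynomial."""
    x, y = np.atleast_2d(xy).T
    A = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)
    return A @ coeffs

# e.g., coeffs = fit_poly2(stars_f300m, stars_f160w)   # matched star lists (assumed inputs)
#       sn_f160w = apply_poly2(coeffs, sn_f300m)       # transformed SN position
```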
The _HST_ WFPC2, ACS/WFC, and WFC3/UVIS data were mosaicked per band using Astrodrizzle (STScI Development Team, 2012). In the case of WFC3/IR both the pipeline-drizzled "drz" mosaic and the individual "flt" frames obtained from MAST were utilised. Similarly, we obtained the pipeline-produced _JWST_ NIRCam "i2d" mosaic directly from the archive. As a by-product of running Astrodrizzle on the _HST_ data, cosmic-ray (CR) hits on the detectors are masked in the data-quality array for each frame. The frames for each of the various _HST_ bands were then run through PSF-fitting photometry using Dolphot (Dolphin, 2016), with the mosaic in one band serving as the source detection reference. Dolphot input parameters were set at FitSky=3, RAper=8, and InterpPSFlib=1, with further charge-transfer efficiency (CTE) correction set to zero (except for the one WFPC2 dataset). The CR-hit flagging is important, in particular, for the aperture-correction measurements on each frame. The photometric results from Dolphot are returned as Vega magnitudes, and we present these in Table 1.
The star is detected only in F814W and F160W. We note that the F160W detection is at a formal signal-to-noise ratio (S/N) of 15.3, and the F814W detection is at S/N = 12.1. The star was not detected in any of the other bands. Instead, we provide upper brightness limits on detection in these bands. Here we set these limits at the formal S/N = 5 returned by Dolphot (following Van Dyk et al., 2023, but see the limitations and caveats associated with this assumption).
For the _Spitzer_ data we combined all of the artefact-corrected basic calibrated data (BCD) in each band and mosaicked these frames using MOPEX(Makovoz & Khan, 2005). PSF-fitting photometry was also executed on the individual frames using APEX(Makovoz & Marleau, 2005). No source was visibly detected at the relatively uncrowded SN site in the resulting mosaics (not shown). From the photometry we considered the faintest detected source nearest the SN site in each band, in order to place upper limits on detection of the progenitor: these were at flux densities of 13.87 and 10.55 \(\mu\)Jy at 3.6 and 4.5 \(\mu\)m, respectively.
## 3 The SN 2022acko progenitor
\begin{table}
\begin{tabular}{l c} \hline Band & Mag\({}^{a}\) \\ \hline F275W & \(>25.2\) \\ F336W & \(>26.0\) \\ F435W & \(>27.8\) \\ F555W & \(>27.0\) \\ F606W & \(>26.0\) \\ F658N & \(>24.7\) \\ F814W & 25.61(09) \\ F160W & 22.88(07) \\ \hline \end{tabular} \({}^{a}\)Magnitudes are in the Vega system, and uncertainties of hundredths of a mag are in parentheses; upper limits are 5\(\sigma\).
\end{table}
Table 1: Photometry of the SN 2022acko progenitor candidate.
The next step is to interpret the photometry by building a spectral energy distribution (SED) for the progenitor candidate. This is reasonably straightforward to do, in that we only have two photometric detections along with many upper limits. For lack of any other reliable distance indicator, we adopt a distance to the host galaxy of \(18.99\pm 2.85\) Mpc (distance modulus \(\mu=31.39\pm 0.33\) mag) that Anand et al. (2021) estimated from the Numerical Actions Method (NAM; Shaya et al., 2017). As for the extinction to SN 2022acko, from an analysis of the Na i D line profiles in an early-time moderate-resolution spectrum, Bostroem et al. (in prep.) measured essentially the same equivalent width for a component associated internally to the host as for a component associated with the Galactic foreground. The foreground visual extinction is \(A_{V}=0.083\) mag, so we assume here that the total extinction, \(A_{V}\left(\mathrm{tot}\right)\), is equivalent simply to doubling the foreground value: \(A_{V}\left(\mathrm{tot}\right)\approx 0.166\) mag. Bostroem et al. estimated a slightly higher value, 0.205 mag, but this difference is not particularly critical.
Correcting the observed photometry for the distance and extinction, assuming an interstellar reddening law with \(R_{V}=3.1\), results in the SED for the progenitor candidate shown in Figure 2. The uncertainties in the F814W and F160W data arise from the uncertainty in the distance modulus, the photometric uncertainties, and the difference in the assumed extinctions between Bostroem et al. and this study, all added in quadrature; the total 1\(\sigma\) uncertainties are overwhelmingly driven by the uncertainty in the distance.
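A sketch of this correction is given below; the adopted distance modulus and total extinction are those quoted above, while the per-band \(A_{\lambda}/A_{V}\) ratios are rough illustrative values for an \(R_{V}=3.1\) law, not the coefficients actually used.

```python
MU = 31.39        # adopted distance modulus (mag)
A_V_TOT = 0.166   # adopted total visual extinction (mag)

# Approximate A_lambda / A_V ratios for an R_V = 3.1 reddening law; these
# particular numbers are illustrative assumptions only.
EXT_RATIO = {"F814W": 0.60, "F160W": 0.20}

def absolute_dereddened(band, apparent_mag):
    """M_lambda = m_lambda - mu - A_lambda, with A_lambda = (A_lambda/A_V) * A_V(tot)."""
    return apparent_mag - MU - EXT_RATIO[band] * A_V_TOT

# e.g., absolute_dereddened("F814W", 25.61) -> about -5.9 mag
#       absolute_dereddened("F160W", 22.88) -> about -8.5 mag
```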
The final step, then, is to model the observed SED. We consider recent single-star evolutionary models from BPASS (the Binary Population and Spectral Synthesis code, v2.2.2; Stanway & Eldridge 2018). For lack of observational evidence to the contrary, we assume models at solar metallicity, which is nominally consistent with the \(\sim\)7 kpc distance of the SN site from the host nucleus. Little motivation exists to consider binary evolutionary models here, since the available evidence for low-luminosity SN II-P progenitors being in binary systems (perhaps other than in wide, noninteractive binaries; e.g., O'Neill et al. 2019) is scant. We show in Figure 2 the endpoints of three models at slightly different values of \(M_{\mathrm{ini}}\): 8, 7.7, and 7.6 M\({}_{\odot}\). (Note that the termini of the BPASS models is the end of carbon burning.) As can be seen in the figure, the upper limits on the SED are not particularly constraining. The 8 M\({}_{\odot}\) model, an RSG with effective temperature \(\mathrm{\log}\,T_{\mathrm{eff}}/\mathrm{K}=3.544\), bolometric luminosity \(\mathrm{\log}\,\left(L_{\mathrm{bol}}/\mathrm{L}_{\odot}\right)=4.454\), and radius \(R=459\) R\({}_{\odot}\) appears to be too luminous in F814W. The 7.7 M\({}_{\odot}\) model, with \(\mathrm{\log}\,T_{\mathrm{eff}}/\mathrm{K}=3.552\), \(\mathrm{\log}\,(L_{\mathrm{bol}}/\mathrm{L}_{\odot})=4.304\), and \(R=372\) R\({}_{\odot}\) is a better comparison, although it is slightly beyond the 1\(\sigma\) uncertainty in F814W. For the 7.6 M\({}_{\odot}\) model, the transition from massive RSG to SAGB has occurred, according to BPASS; this model SED has a markedly different shape and total luminosity (with \(\mathrm{\log}\,(T_{\mathrm{eff}}/\mathrm{K})=3.475\), \(\mathrm{\log}\,(L_{\mathrm{bol}}/\mathrm{L}_{\odot})=5.047\), and \(R=1247\) R\({}_{\odot}\)) than do the RSG models and provides overall a very poor fit to the observations.
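As a consistency check, the quoted effective temperatures, luminosities, and radii of these model endpoints satisfy \(L_{\mathrm{bol}}=4\pi R^{2}\sigma T_{\mathrm{eff}}^{4}\); a short verification for the 8 M\({}_{\odot}\) endpoint is sketched below.

```python
import math

SIGMA_SB = 5.670374e-8            # Stefan-Boltzmann constant (W m^-2 K^-4)
L_SUN, R_SUN = 3.828e26, 6.957e8  # nominal solar luminosity (W) and radius (m)

def radius_from_L_T(log_L_Lsun, log_T_K):
    """R/R_sun implied by L = 4 pi R^2 sigma T_eff^4."""
    L = 10 ** log_L_Lsun * L_SUN
    T = 10 ** log_T_K
    return math.sqrt(L / (4 * math.pi * SIGMA_SB * T ** 4)) / R_SUN

print(round(radius_from_L_T(4.454, 3.544)))   # -> 459, matching the quoted radius
```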
Note that we have not included circumstellar dust for any of these models, particularly the RSGs, primarily since little evidence exists that such dust is important for RSGs at this low \(M_{\mathrm{ini}}\) (e.g., Massey et al. 2005; O'Neill et al. 2019, 2021). We also cannot be certain that the presumed SAGB does not have at least some circumstellar dust as well (e.g., Dell'Agli et al. 2017). However, it is possible that some RSG progenitors may experience outbursts prior to explosion that may result in obscuring material that could "hide" the progenitor from detection, although such outbursts are predicted to occur \(<1\) yr prior to the SN (Davies et al. 2022; none of the archival _HST_ or _Spitzer_ data were obtained that soon before explosion). We consider the _Spitzer_ upper limits, above, and the implications for the possibility of an undetected SN 2022acko progenitor neighboring the candidate within its PSF. These limits are \(>18.3\) and \(>18.1\) mag at 3.6 and 4.5 \(\mu\)m, respectively1. At the host distance (and with negligible extinction), these correspond to absolute brightnesses of \(>-13.1\) and \(>-13.3\) mag, respectively (not shown in Figure 2). Note that the luminous, dusty progenitor of SN 2017eaw (e.g., Van Dyk et al. 2019, 2023) had absolute brightnesses of \(-11.46\) and \(-11.67\) mag in the two respective bands. If there were a dusty progenitor for SN 2022acko, it would have required \(\gtrsim 1\) mag more circumstellar dust extinction than did the progenitor of SN 2017eaw in order to evade detection with _Spitzer_; such dust would likely have been destroyed during the SN explosion.
Footnote 1: Based on the IRAC zeropoints, \(280.9\pm 4.1\) and \(179.7\pm 2.6\) Jy at 3.6 and 4.5 \(\mu\)m, respectively.
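The quoted limits follow directly from \(m=-2.5\log_{10}(F_{\nu}/F_{\nu,0})\) with the zeropoints given in the footnote:

```python
import math

ZP_JY = {"3.6um": 280.9, "4.5um": 179.7}      # IRAC Vega zeropoints (Jy)
LIMIT_UJY = {"3.6um": 13.87, "4.5um": 10.55}  # faintest nearby sources (micro-Jy)

for band, zp in ZP_JY.items():
    mag = -2.5 * math.log10(LIMIT_UJY[band] * 1e-6 / zp)
    print(band, round(mag, 1))   # -> 18.3 and 18.1 mag, the limits quoted above
```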
## 4 Discussion and Conclusions
We have, for the first time, identified an SN progenitor candidate in pre-explosion _HST_ archival data using _JWST_, but only by happenstance. The star that exploded as SN 2022acko in NGC 1300 was possibly an RSG with \(M_{\mathrm{ini}}\approx 7.7\) M\({}_{\odot}\). In our view, the fact that the F814W and F160W detections, considered together, are consistent with an RSG SED, as we would expect for an SN II-P progenitor, is sufficiently compelling to believe that the progenitor has indeed been identified. The inferred \(M_{\mathrm{ini}}\) of the candidate is slightly below the generally-recognised theoretical juncture at \(\sim 8\) M\({}_{\odot}\) for core collapse, as opposed to WD formation in less-massive stars. However, this is also just slightly above what the models we considered predict to be an SAGB star, which presumably might end its life as an ECSN. We therefore conclude that SN 2022acko is most likely a normal SN II-P, as all of the observations to be presented forthcoming will show (Bostroem et al., in prep.). The slight discrepancy in our astrometric results, matching the _HST_ data to the _JWST_ observations and WFC3/IR to WFC/UVIS, is motivation itself for future _HST_
Figure 1: _Left_: A portion of a _HST_ WFC3/UVIS image mosaic in F814W from 2004 September 21, with the progenitor candidate for SN 2022acko indicated with tick marks. _Middle_: same as the left panel, but for a WFC3/IR F160W mosaic from 2017 October 27. _Right_: a portion of a _JWST_ NIRCam mosaic in F300M from 2023 January 25. All panels are shown at the same scale and orientation. North is up and east is to the left. |
2308.05357 | Quantum-inspired Hash Function Based on Parity-dependent Quantum Walks
with Memory | In this paper, we develop a generic controlled alternate quantum walk model
(called CQWM-P) by combining parity-dependent quantum walks with distinct
arbitrary memory lengths and then construct a quantum-inspired hash function
(called QHFM-P) based on this model. Numerical simulation shows that QHFM-P has
near-ideal statistical performance and is on a par with the state-of-the-art
hash functions based on discrete quantum walks in terms of sensitivity of hash
value to message, diffusion and confusion properties, uniform distribution
property, and collision resistance property. Stability test illustrates that
the statistical properties of the proposed hash function are robust with
respect to the coin parameters, and theoretical analysis indicates that QHFM-P
has the same computational complexity as that of its peers. | Qing Zhou, Xueming Tang, Songfeng Lu, Hao Yang | 2023-08-10T05:54:32Z | http://arxiv.org/abs/2308.05357v1 | # Quantum-inspired Hash Function Based on Parity-dependent Quantum Walks with Memory (August 2023)
###### Abstract
In this paper, we develop a generic controlled alternate quantum walk model (called CQWM-P) by combining parity-dependent quantum walks with distinct arbitrary memory lengths and then construct a quantum-inspired hash function (called QHFM-P) based on this model. Numerical simulation shows that QHFM-P has near-ideal statistical performance and is on a par with the state-of-the-art hash functions based on discrete quantum walks in terms of sensitivity of hash value to message, diffusion and confusion properties, uniform distribution property, and collision resistance property. Stability test illustrates that the statistical properties of the proposed hash function are robust with respect to the coin parameters, and theoretical analysis indicates that QHFM-P has the same computational complexity as that of its peers.
Controlled alternate quantum walks, quantum-inspired hash function, quantum walks +
Footnote †: This work was supported in part by the National Natural Science Foundation of China under Grant 62101197, in part by the China Postdoctoral Science Foundation under Grant 2021M691148.
## 1 Introduction
Cryptographic hash functions act as a key component of identification, message authentication, digital signatures, and pseudorandom number generation. From a security perspective, cryptographic hash functions can be divided into two broad categories: provably secure hash functions based on hard mathematical problems and dedicated hash functions based on ad-hoc constructions, especially iterative constructions of one-way compression functions. The former only satisfy computational security and are inefficient for practical use; the latter are efficiently computable, but their security is not built on a firm foundation.
To develop a secure and efficient hash function, more and more researchers have shown interest in hash functions based on quantum computing [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], especially on discrete quantum walks [1, 2, 5, 6, 7, 8, 9, 10, 11, 12] (hereafter, simply DQW-based hash functions). The security of this kind of hash function is based on quantum mechanics; more precisely, it is ensured by the theoretically infinite possibilities of the initial state and the irreversibility of the modulo operation [7]. The output results of DQW-based hash functions are readily attainable by classical simulation of the quantum walk process, where the final amplitudes can be calculated with linear overhead with respect to the number of steps. For this reason, a DQW-based hash function can be considered as a "quantum inspired" algorithm, where everything is calculated classically using some desired properties of a quantum system and its dynamics. In what follows, hash schemes constructed by simulating discrete quantum walks and then calculating hash values from the resulting probability distributions are called DQW-based hash functions or quantum inspired hash functions interchangeably.
Quantum inspired hash functions were first explored by Li et al. in [12], and further intensively discussed by Cao et al. in [10], Yang et al. in [5, 6, 7, 8, 11], and Zhou et al. in [1]. Until Zhou et al.'s scheme (called QHFM) [1], all quantum inspired hash functions simply used quantum walks with different coin parameters, where only the coin operator is controlled by the message bit. QHFM is based on quantum walks with memory, where an additional changeable transform, the direction-determine operator, is also controlled by the message bit. More recently, Hou et al. [2] have proposed a hash function QHFL based on lively quantum walks, where the shift operator contains a liveliness parameter, whose value (one out of two) is determined by the message bit. The good statistical performance of QHFM and QHFL suggests that
combining quantum walks differing in component operators other than coin transform is a promising way to construct good hash functions.
Quantum walks with memory (QWM) [13, 14, 15, 16, 17, 18, 19, 20, 21, 22] are types of modified quantum walks where the next direction of the walking particle is governed by the direction-determine operator, which specifies how the latest path together with the coin state affects the movement direction of the particle; different QWM models have different direction-determine operators. Unlike usual quantum walks without memory [23] or lively quantum walks [2], where the changeable operator (the coin or shift operator) can be characterized by a single parameter, the direction-determine operator of QWM is highly flexible. It is influenced by two factors: the memory length and the movement rule, the latter can be an arbitrary relation between the movement direction and the recorded shifts of the walker, which cannot simply be dictated by a few parameters. Hence, QWM have great potential to be used to construct various quantum-inspired hash functions.
Here, we present an alternative QWM-based hash function QHFM-P, which is inspired by quantum walks with memory provided by parity of memory [15]. The proposed hash function is on a par with the state-of-the-art DQW-based hash functions in terms of sensitivity of hash value to message, diffusion and confusion properties, uniform distribution property, collision resistance property, and computational complexity. Furthermore, it can preserve near-ideal statistical performance when changing the values of its coin parameters, indicating that QHFM-P has nice stability with respect to different coin angles.
## II Controlled quantum walk with memory depending on the parity of memory
One-dimensional quantum walks with \(m\)-step memory depending on the parity of memory (QWM-P), or parity-dependent quantum walks with \(m\)-step memory on the line [15], form a quantum system living in a Hilbert space \(\mathcal{H}_{p}\otimes\mathcal{H}_{d_{m}}\otimes\cdots\otimes\mathcal{H}_{d _{2}}\otimes\mathcal{H}_{d_{1}}\otimes\mathcal{H}_{c}\) spanned by the orthogonal basis states \(\{\,\left|x,d_{m},\ldots,d_{2},d_{1},c\right\rangle\mid d_{m},\ldots,d_{1},c \in\mathbb{Z}_{2};x\in\mathbb{Z}\,\}\), where \(c\) is the coin state, \(d_{j}\) (0 stands for left and 1 stands for right) records the shift of the walker \(j\) steps before (\(d_{1}\) is the direction of the most recent step, and \(d_{m}\) is the earliest direction that is memorized), and \(x\) is the current position of the walker. If the walker moves on a cycle with \(n\) nodes, then \(x\in\mathbb{Z}_{n}\). The one-step evolution of QWM-P may be decomposed into three parts: the first is a \(2\times 2\) coin operator \(C\) on subspace \(\mathcal{H}_{c}\), here \(C\) is parameterized by an angle \(\theta\), i.e.,
\[C=\begin{pmatrix}\text{cos}\theta&\text{sin}\theta\\ \text{sin}\theta&-\text{cos}\theta\end{pmatrix}; \tag{1}\]
the second is the direction-determine transform \(D\) on \(\mathcal{H}_{d_{m}}\otimes\cdots\otimes\mathcal{H}_{d_{2}}\otimes\mathcal{H}_ {d_{1}}\otimes\mathcal{H}_{c}\), whose action can be written as
\[\begin{split}\hat{D}:&\left|d_{m},d_{m-1},\ldots,d_{2},d_{1},c \right\rangle\rightarrow\\ &\left|d_{m-1},d_{m-2},\ldots,d_{1},c\oplus 1\oplus ifeven(d_{m}, \ldots,d_{1}),c\right\rangle,\end{split} \tag{2}\]
where \(ifeven(d_{m},\ldots,d_{1})\) equals 1 (respectively, 0) if the number of zeros in the memorized directions \(d_{m},\ldots,d_{1}\) is even (respectively, odd); and the third is the shift operator \(S\) on \(\mathcal{H}_{p}\otimes\mathcal{H}_{d_{1}}\), whose action can be expressed as
\[S:\left|x,d_{1}\right\rangle\rightarrow\left|x+2d_{1}-1,d_{1}\right\rangle. \tag{3}\]
If the walker moves on a cycle with \(n\) nodes, then the shift operator becomes \(\left|x,d_{1}\right\rangle\rightarrow\left|x+2d_{1}-1(\mathrm{mod}n),d_{1}\right\rangle\).
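The parity mechanics above are easy to simulate classically. The following is a minimal illustrative sketch (ours, not the authors' implementation; the function names are made up for this exposition) of one QWM-P step acting on a dictionary that maps basis states \((x,(d_{m},\ldots,d_{1}),c)\) to complex amplitudes, applying the coin of Eq. (1), the direction-determine transform of Eq. (2), and the cyclic shift of Eq. (3) in that order.

```python
import math

def ifeven(dirs):
    # 1 if the number of zeros among the memorized directions is even, else 0
    return 1 if sum(1 for d in dirs if d == 0) % 2 == 0 else 0

def qwm_p_step(state, theta, n):
    """One step of QWM-P on a cycle with n nodes; `state` maps
    (x, (d_m, ..., d_1), c) to a complex amplitude."""
    out = {}
    for (x, dirs, c), amp in state.items():
        # coin C of Eq. (1) acting on the coin register
        branches = ([(math.cos(theta), 0), (math.sin(theta), 1)] if c == 0
                    else [(math.sin(theta), 0), (-math.cos(theta), 1)])
        for coeff, c_new in branches:
            # direction-determine transform D of Eq. (2)
            d_new = c_new ^ 1 ^ ifeven(dirs)
            dirs_new = dirs[1:] + (d_new,)
            # conditional shift S of Eq. (3) on the cycle
            x_new = (x + 2 * d_new - 1) % n
            key = (x_new, dirs_new, c_new)
            out[key] = out.get(key, 0) + coeff * amp
    return out
```

Summing the squared magnitudes of the amplitudes over all basis states sharing the same position \(x\) then gives the occupation probability \(p_{x}\) used later for hashing.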
A controlled one-dimensional quantum walk with memory depending on the parity of memory (CQWM-P) can be obtained by alternately applying QWM-P with different memory lengths (as well as different coin parameters). More precisely, CQWM-P evolves according to a \(t\)-bit binary string \(msg=(m_{1},m_{2},\ldots,m_{t})\in\{0,1\}^{t}\): at the \(j\)th time step (\(j\in\{1,2,\ldots,t\}\)), if \(m_{j}=0\), then the walker performs QWM-P with \(s_{0}\)-step memory (denoted by QWs\({}_{0}\)M-P) and with coin parameter \(\theta_{0}\); if \(m_{j}=1\), then the walker performs QWM-P with \(s_{1}\)-step memory (\(s_{1}\neq s_{0}\), denoted by QWs\({}_{1}\)M-P) and with coin parameter \(\theta_{1}\).
To enable QWs\({}_{0}\)M-P and QWs\({}_{1}\)M-P to be performed alternately, \(\left|s_{0}-s_{1}\right|\) redundant qubits can be added to the walk process with the smaller memory length, so that the two walks live in the same Hilbert space. For instance, if \(0<s_{0}<s_{1}\), then \(s_{1}-s_{0}\) redundant qubits are added to QWs\({}_{0}\)M-P. In this case, the basis states of QWs\({}_{0}\)M-P become \(\{\,\left|x,d_{s_{1}},\ldots,d_{2},d_{1},c\right\rangle\mid d_{s_{1}},\ldots,d_{1},c\in\mathbb{Z}_{2};x\in\mathbb{Z}\,\}\), wherein the first \(s_{1}-s_{0}\) qubits are invariant under the transforms of QWs\({}_{0}\)M-P, and the \(D\) transform becomes
\[\begin{split} D:&\left|d_{s_{1}},\ldots,d_{s_{0}+1},d_{s _{0}},d_{s_{0}-1},\ldots,d_{2},d_{1},c\right\rangle\rightarrow\\ &\left|d_{s_{1}},\ldots,d_{s_{0}+1},d_{s_{0}-1},\ldots,d_{1},c \oplus 1\oplus ifeven(d_{s_{0}},\ldots,d_{1}),c\right\rangle.\end{split} \tag{4}\]
In this work, we focus on CQWM-P with one- and two-step memory, whose evolution operator controlled by \(msg\) is the product of \(t\) unitary transforms
\[U_{msg}=U^{(m_{t})}U^{(m_{t-1})}\cdots U^{(m_{2})}U^{(m_{1})}, \tag{5}\]
where \(U^{(m_{j})}\) is the one-step transform defined as
\[U^{(m_{j})}=S\cdot\left(I_{n}\otimes D^{(m_{j})}\right)\cdot\left(I_{4n} \otimes C^{(m_{j})}\right). \tag{6}\]
In Eq. (6), \(C^{(0)}\) and \(C^{(1)}\) are coin operators parameterized by \(\theta_{0}\) and \(\theta_{1}\), respectively; \(I_{4n}\) and \(I_{n}\) are \(4n\times 4n\) and \(n\times n\) identity operators, respectively; \(S\) is the conditional shift operator controlled by the next direction \(d_{1}\), and such a direction is determined by an \(8\times 8\) unitary operator \(D^{(m_{j})}\). If \(m_{j}=0\), then \(D^{(m_{j})}\) is the direction-determine transform of QW1M-P, i.e., \(D^{(0)}:\left|d_{1},c\right\rangle\rightarrow\left|c\oplus 1\oplus d_{1},c\right\rangle\); and if \(m_{j}=1\), then \(D^{(m_{j})}\) is the direction-determine transform of QW2M-P, i.e., \(D^{(1)}:\left|d_{2},d_{1},c\right\rangle\rightarrow\left|d_{1},c\oplus(d_{1} \oplus d_{2}),c\right\rangle\). By appending a redundant qubit \(d_{2}\) to the state of QW1M-P and letting \(D^{(0)}:\left|d_{2},d_{1},c\right\rangle\rightarrow\left|d_{2},c\oplus 1\oplus d_{1},c\right\rangle\) determine the next direction if the controlling bit equals 0, the walk process can switch freely between QW1M-P and QW2M-P.
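As a simple illustration of Eq. (5), the controlled evolution just dispatches on the message bits. The sketch below is ours and assumes the two one-step transforms \(U^{(0)}\) and \(U^{(1)}\) are available as functions (for instance, classical simulations of the QW1M-P and QW2M-P steps).

```python
# Minimal sketch of U_msg of Eq. (5): apply U^{(m_j)} for j = 1, ..., t.
def evolve(msg_bits, state, step0, step1):
    for m_j in msg_bits:          # m_1 is applied first, m_t last
        state = step1(state) if m_j else step0(state)
    return state
```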
## III Hash Function using Quantum Walks with One- and Two-Step Memory on Cycles
The proposed hash function is constructed by numerically simulating CQWM-P with one- and two-step memory on cycles under the control of the input message, and then calculating the hash value from the resulting probability distribution of this walk.
Specifically, our hashing algorithm is parameterized by three positive integers \(\{n,m,l|n\bmod 2=1,10^{l}\gg 2^{m}\}\) and three angles \(\{\theta_{0},\theta_{1},\alpha|\theta_{0},\theta_{1},\alpha\in(0,\pi/2)\}\), where \(n\) specifies the total number of nodes in the cycle that the walker moves along, \(m\) is the number of hash bits that are contributed by each node, \(l\) is the number of digits in the probability value (associated with each node) that are used to calculate the hash result, \(\theta_{0}\) (respectively, \(\theta_{1}\)) is the coin parameter of QW1M-P (respectively, QW2M-P), and \(\alpha\) is the parameter of the initial state of the walker. Given the input message \(msg\), the \(m\times n\)-bit hash value \(H(msg)\) is calculated as follows.
1. Initialize the walker in the state \(|\psi_{0}\rangle=\cos\alpha\,|0,1,0,0\rangle+\sin\alpha\,|0,1,0,1\rangle\);
2. Apply \(U_{msg}\) to \(|\psi_{0}\rangle\) and generate the resulting probability distribution \(prob=(p_{0},p_{1},\ldots,p_{n-1})\), where \(p_{x}\) is the probability that the walker locates at node \(x\) when the walk is finished.
3. The hash value of \(msg\) is a sequence of \(n\) blocks \(H(msg)=B_{0}\|B_{1}\|\ldots\|B_{n-1}\), where each block \(B_{x}\) is the \(m\)-bit binary representation of \(\lfloor p_{x}\cdot 10^{l}\rfloor\bmod 2^{m}\) (\(\lfloor\cdot\rfloor\) denotes the floor of a number), and \(B_{x}\|B_{x+1}\) denotes the concatenation of \(B_{x}\) and \(B_{x+1}\).
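A minimal sketch (ours, not the reference implementation) of this last step, which turns the final probability distribution into the \(m\times n\)-bit digest:

```python
def hash_from_distribution(prob, m=8, l=8):
    """prob = (p_0, ..., p_{n-1}); returns the m*n-bit digest as a bit string."""
    bits = ""
    for p_x in prob:
        block = int(p_x * 10 ** l) % (2 ** m)    # floor(p_x * 10^l) mod 2^m
        bits += format(block, "0{}b".format(m))  # m-bit block B_x
    return bits

# illustration only: a made-up uniform distribution over n = 33 nodes
assert len(hash_from_distribution([1 / 33] * 33)) == 264
```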
## IV Statistical Performance Analysis
The proposed scheme is a kind of dedicated hash function, whose security is hard to prove and is commonly evaluated by means of statistical analysis. To facilitate comparison and discussion, we consider two typical instances QHFM-P-264 and QHFM-P-296 of the proposed scheme, where QHFM-P-\(L\) produces \(L\)-bit hash values and will be compared with the existing DQW-based hash functions with \(L\)-bit output length. The values of \(m,l,\theta_{0},\theta_{1}\), and \(\alpha\) for the two instances are the same, which are taken to be \(8,8,\pi/4,\pi/3\), and \(\pi/4\), respectively; the only distinction between QHFM-P-296 and QHFM-P-264 lies in the value of \(n\), which is taken to be 37 for the 296-bit output length and 33 for the 264-bit output length.
Our statistical tests consider four kinds of properties: sensitivity of hash value to message, diffusion and confusion properties, uniform distribution property, and collision resistance property; the latter three are assessed by analyzing the same collection of hash values, whose input messages come from the public arXiv dataset (see [https://www.kaggle.com/Cornell-University/arxiv](https://www.kaggle.com/Cornell-University/arxiv)).
### Sensitivity of Hash Value to Message
Let \(msg_{0}\) be an original message and \(msg_{i}\) (\(i\in\{1,2,3\}\)) a slightly modified version of \(msg_{0}\); the sensitivity of hash value to message is assessed by comparing the hash value \(H(msg_{i})\) of \(msg_{i}\) with the hash result \(H(msg_{0})\) of \(msg_{0}\). Specifically, the original and modified messages are obtained under the following four conditions.
* Condition 0: Randomly select a record from the dataset, take the texts of the abstract field within this record as \(msg_{0}\);
* Condition 1: Invert a bit of \(msg_{0}\) at a random position and then get the modified message \(msg_{1}\);
* Condition 2: Insert a random bit into \(msg_{0}\) at a random position and then obtain \(msg_{2}\);
* Condition 3: Delete a bit from \(msg_{0}\) at a random position and then obtain \(msg_{3}\).
Corresponding to the above conditions, we list, as an example, four hash values in hexadecimal format produced by QHFM-P-296 as follows.
* \(H(msg_{0})=\)"D8 CD B4 F1 A9 A8 E4 F8 60 F7 6F 74 B8 A7 C9 60 E8 7F 53 7F 10 F9 B0 1D 9D D6 37 A5 F1 8E F3 D5 71 5A 28 7B B1";
* \(H(msg_{1})=\)"Eb C6 F0 SF 74 5B B2 14 72 5F B7 29 CF 7F C6 96 E8 B8 87 8C 9A E9 50 51 8A D5 5D 62 0E ED F8 7B CE 15 D1 7F A7";
* \(H(msg_{2})=\)"90 7E 3E 3F 8A D5 D5 6B 70 41 BD F9 37 B2 55 22 2B F1 76 C0 D3 09 E4 6C B5 88 CA C8 07 6D 7B 3F 22 6A 28 BF 21";
* \(H(msg_{3})=\)"65 19 6A A5 EE AC 65 A9 52 FA E5 30 B7 22 56 67 S1 E2 8F EC D0 B6 EE 6F 04 25 9F 2B 8D E0 97 CE 49 82 66 7A A8".
The plots of \(H(msg_{0})\), \(H(msg_{1})\), \(H(msg_{2})\), and \(H(msg_{3})\) in binary format are shown in Fig. 1, where each asterisk (*) in the \(j\)th subgraph (\(j>0\)) marks a different bit between \(H(msg_{j})\) and \(H(msg_{0})\). It indicates that a slight modification in the input message can lead to a significant change in the hash result, and the positions of changed bits are evenly distributed over the entire interval [1,296] of position numbers. A similar result can be obtained using QHFM-P with other output lengths, thus the output result of the proposed hash scheme is highly sensitive to its input message.
### Diffusion and Confusion Properties
To test the diffusion and confusion properties of the proposed hash function, the statistical experiment producing \(msg_{0}\) and \(msg_{1}\) is independently repeated \(N\) times, and the hash values of those \(N\) pairs of messages are analyzed. Let \(B_{i}\) (\(i\in\{1,\ldots,N\}\)) be the Hamming distance between \(H(msg_{0})\) and \(H(msg_{1})\) obtained in the \(i\)th experiment; the diffusion and confusion properties are assessed based on the following four indicators:
1. mean changed bit number \(\overline{B}=\sum_{i=1}^{N}B_{i}/N\);
2. mean changed probability \(P=\overline{B}/(n\times m)\times 100\%\);
3. standard deviation of the changed bit number \(\Delta B=\sqrt{\sum_{i=1}^{N}\big{(}B_{i}-\overline{B}\big{)}^{2}/(N-1)}\);
4. standard deviation of the changed probability \(\Delta P=\sqrt{\sum_{i=1}^{N}\big{[}B_{i}/(n\times m)-P\big{]}^{2}/(N-1)} \times 100\%\).
The ideal value of \(P\) is \(50\%\), and smaller \(\Delta B\) and \(\Delta P\) are more desirable. Following Ref. [1], we take \(N=10000\) and use \(I_{\text{DC}}=(\Delta P+|P-50\%|)/2\times 100\%\) as a composite indicator for the diffusion and confusion properties. The test results of the diffusion and confusion properties for the proposed hash functions are presented in Table 1. For comparison, the reported results for the existing DQW-based hash schemes with 296- or 264-bit output length are also listed in Table 1, where Yang21-296 is the second instance (with \(p=2/n\)) in Ref. [5].
It can be seen that the test results for QHFM-P are very close to those for its peers, thus the diffusion and confusion properties of the proposed scheme are on a par with those of existing schemes with output length 296 or 264.
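For concreteness, the four indicators and \(I_{\text{DC}}\) can be computed as in the following sketch (ours; it assumes the \(N\) Hamming distances \(B_{i}\) have already been collected from the bit strings of the hash pairs):

```python
import statistics

def hamming(h0, h1):
    # Hamming distance between two equal-length bit strings
    return sum(a != b for a, b in zip(h0, h1))

def diffusion_confusion(B, n, m):
    N, L = len(B), n * m
    B_mean = sum(B) / N                               # mean changed bit number
    P = B_mean / L * 100                              # mean changed probability (%)
    dB = statistics.stdev(B)                          # std of changed bit number
    dP = statistics.stdev([b / L * 100 for b in B])   # std of changed probability (%)
    I_DC = (dP + abs(P - 50)) / 2                     # composite indicator (%)
    return B_mean, P, dB, dP, I_DC
```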
### Uniform distribution analysis
The uniform distribution property of the proposed hash function is assessed by analyzing the \(N\) pairs of hash values used in the diffusion and confusion test in a different way. Let \(T_{j}\) (\(j\in\{1,2,\ldots,n\times m\}\)) be the number of experiments in which the \(j\)th bit of \(H(msg_{0})\) is different from the \(j\)th bit of \(H(msg_{1})\), then the uniform distribution analysis considers the following two indicators:
1. mean number of experiments with flipped hash bit over \(n\times m\) bit-positions \(\overline{T}=\sum_{j=1}^{n\times m}T_{j}/(n\times m)\);
2. standard deviation of the number of experiments with flipped bit \(\Delta T=\sqrt{\sum_{j=1}^{n\times m}(T_{j}-\overline{T})^{2}/(n\times m-1)}\).
The ideal value of \(\overline{T}\) is \(N/2\), and smaller \(\Delta T\) suggests better uniform distribution property. As shown in Table 2, the values of \(\overline{T}\) and \(\Delta T\) for the two instances of QHFM-P are close to the corresponding values for the recent DQW-based hash functions1 with the same output length. Since the reported value of \(N\times P\) (which is equivalent to \(\overline{T}\) theoretically, see [1]) for Yang21-296 is 4995.41, the value of \(|\overline{T}-5000|\) for Yang21-296 may be considered between (or close to) 1.90 and 4.59, so the \(\overline{T}\) value of QHFM-P-296 is on the same level as that of Yang21-296.
Footnote 1: The values of \(\overline{T}\) and \(\Delta T\) for QHFL are both not available, so we only compare QHFM with the remaining four recent schemes.
To provide an intuitive description, we plot the number of experiments with flipped hash bit on every bit position of QHFM-P-296 in Fig. 2, where the number of experiments with flipped hash bit on every bit-position is very close to \(N/2\), suggesting that the proposed scheme has good uniform distribution property.
### Collision resistance
The collision resistance test is carried out by counting the number of experiments in which the hash values of the original and modified messages collide at a certain number of bytes, and then comparing the counting result with its theoretical value.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Hash Instances & \(\overline{B}\) & \(P(\%)\) & \(\Delta B\) & \(\Delta P(\%)\) & \(I_{\text{DC}}(\%)\) \\ or Schemes & & & & & \\ \hline \hline QHFM-P-296 & 147.9320 & 49.9770 & 8.5517 & 2.8891 & 1.4560 \\ QHFM-P-264 & 132.1286 & 50.0487 & 8.1938 & 3.1037 & 1.5762 \\ \hline QHFL-296 [2] & 148.1900 & 50.6060 & 8.5500 & 2.9800 & 1.4750 \\ \hline QHFL-264 [2] & 132.0300 & 50.0100 & 8.1100 & 3.0700 & 1.5400 \\ \hline QHFM-296 [1] & 147.9101 & 49.9696 & 8.5997 & 2.9053 & 1.4679 \\ \hline QHFM-264 [1] & 131.8667 & 49.9495 & 8.1378 & 3.0825 & 1.5665 \\ Yang21-296 [5] & 147.8640 & 49.9541 & 8.6414 & 2.9102 & 1.4781 \\ \hline Yang19-264 [6] & 131.6803 & 49.8789 & 8.8877 & 3.3666 & 1.7439 \\ Yang18-264 [7] & 132.1108 & 50.0420 & 8.0405 & 3.0457 & 1.5439 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test Results of the Diffusion and Confusion Properties
\begin{table}
\begin{tabular}{l c c c} \hline \hline Hash Instances & \(\overline{T}\) & \(|\overline{T}-5000|\) & \(\Delta T\) \\ or Schemes & & & \\ \hline \hline QHFM-P-296 & 4997.70 & 2.30 & 51.9772 \\ QHFM-P-264 & 5004.87 & 4.87 & 49.8667 \\ QHFM-296 [1] & 4996.96 & 3.04 & 48.4334 \\ QHFM-264 [1] & 4994.95 & 5.05 & 48.9253 \\ Yang21-296 [5] & 4998.10 & 1.90 & N/A \\ Yang19-264 [6] & 4996.60 & 3.40 & N/A \\ Yang18-264 [7] & 5003.90 & 3.90 & N/A \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test Results of the Uniform Distribution Property
Figure 1: Plots of the hash values produced by QHFM-P-296 under the four conditions, where C\({}_{j}\) stands for Condition \(j\) (\(j=0,1,2,3\)).
For ease of exposition, we use \(\left\{msg_{0}^{(i)},msg_{1}^{(i)}\right\}\) to denote the original and modified messages obtained under Condition 0 and Condition 1 in the \(i\)th experiment, \(\left\{H(msg_{0}^{(i)}),H(msg_{1}^{(i)})\right\}\) to denote the hash values of \(\left\{msg_{0}^{(i)},msg_{1}^{(i)}\right\}\), and \(g=\lceil(n\times m)/8\rceil\) to denote the number of bytes that a hash result produced by the proposed hash function can be divided into2. The collision resistance test counts the numbers \(\{W_{N}^{e}(\omega)\mid\omega=0,1,\ldots,g\}\) of experiments in which \(H(msg_{0})\) and \(H(msg_{1})\) have \(\omega\) identical bytes (\(\omega\) is also called the number of hits). For instance, if the first, third, and fourth bytes of \(H(msg_{0})\) are respectively the same as the first, third, and fourth bytes of \(H(msg_{1})\) in the 25th experiment, then \(\left\{H(msg_{0}^{(25)}),H(msg_{1}^{(25)})\right\}\) makes an incremental contribution of 1 to \(W_{N}^{e}(3)\).
Footnote 2: If \(n\times m\) is not divisible by 8, then add a prefix of \((8-n\times m)\bmod 8\) zeros to the hash value.
The theoretical value of \(W_{N}^{t}(\omega)\) is calculated using the binomial distribution formula
\[W_{N}^{t}(\omega) =\text{int}\left[N\times P^{t}(\omega)\right]\] \[=\text{int}\left[N\times\frac{g!}{\omega!(g-\omega)!}\left(\frac {1}{2^{8}}\right)^{\omega}\left(1-\frac{1}{2^{8}}\right)^{g-\omega}\right], \tag{7}\]
where \(\text{int}\left[\cdot\right]\) denotes rounding a real number to its nearest integer, and \(P^{t}(\omega)\) is the theoretical probability that \(\omega\) hits occur in \(\{H(msg_{0}),H(msg_{1})\}\). Substituting \(g=\lceil 296/8\rceil=37\) and N=10000 into Eq. (7), one can get \(\{W_{N}^{t}(\omega)|\omega=0,1,2,3\}=\{8652,1255,89,4\}\) for hash functions with 296-bit output length. Similarly, the values of \(W_{N}^{t}(\omega)\) with \(\omega=0,1,2,3\) for 264-bit hash functions are 8788, 1137, 71, and 3, respectively. For both 296- and 264-bit hash functions, the value of \(W_{N}^{t}(\omega)\) with \(\omega\geq 4\) is 0.
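The theoretical counts quoted above can be reproduced with a few lines; the sketch below (ours) makes the same assumption as Eq. (7), namely that bytes collide independently with probability \(1/2^{8}\).

```python
from math import comb

def theoretical_hits(N, g, omega):
    p = 1 / 256                                             # chance one byte matches
    P_t = comb(g, omega) * p ** omega * (1 - p) ** (g - omega)
    return round(N * P_t)

# 296-bit hashes: g = ceil(296 / 8) = 37 bytes, N = 10000 experiments
print([theoretical_hits(10000, 37, w) for w in range(5)])   # [8652, 1255, 89, 4, 0]
```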
Let \(P^{e}(\omega)=W_{N}^{e}(\omega)/N\) be the experimental probability that \(H(msg_{0})\) and \(H(msg_{1})\) have \(\omega\) identical bytes; the collision resistance property of the proposed hash function can be assessed by the Kullback-Leibler divergence (referred to as KL divergence for short) of \(P^{e}\) from \(P^{t}\)
\[D_{KL}\left(P^{e}\|P^{t}\right)=\sum_{\omega=0}^{g}P^{e}(\omega)\text{log}_{2 }\left(\frac{P^{e}(\omega)}{P^{t}(\omega)}\right) \tag{8}\]
The smaller the value of \(D_{KL}\left(P^{e}\|P^{t}\right)\) is, the closer \(P^{e}\) is to \(P^{t}\). Note that there exist some values of \(\omega\) such that \(P^{e}(\omega)=0\) and \(P^{t}(\omega)\neq 0\), hence we cannot use \(D_{KL}\left(P^{t}\|P^{e}\right)\) to indicate the distance between \(P^{e}\) and \(P^{t}\).
In addition to the KL divergence, the mean of the absolute difference per byte between \(H(msg_{0})\) and \(H(msg_{1})\) over \(N\) independent experiments can also be used to assess the collision resistance property. Let \(t(e_{j})^{(i)}\) and \(t(e^{\prime}_{j})^{(i)}\) be the decimal value of the \(j\)th byte of \(H(msg_{0}^{(i)})\) and \(H(msg_{1}^{(i)})\), respectively; the mean of the absolute difference per byte is given by
\[\overline{d}_{byte}^{e}=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{g}\frac{1}{g} \left|t(e_{j})^{(i)}-t(e^{\prime}_{j})^{(i)}\right|, \tag{9}\]
and the theoretical value of \(\overline{d}_{byte}^{e}\) is \(\overline{d}_{byte}^{t}=85.33\)[5].
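A small sketch (ours) of the two collision-resistance indicators of Eqs. (8) and (9); it assumes `P_e` and `P_t` map each \(\omega\) to a probability and that `pairs` holds the \(N\) pairs of hash values as byte sequences:

```python
from math import log2

def kl_divergence(P_e, P_t):
    # Eq. (8); terms with P_e(omega) = 0 contribute nothing by convention
    return sum(p * log2(p / P_t[w]) for w, p in P_e.items() if p > 0)

def mean_abs_byte_diff(pairs):
    # Eq. (9): mean absolute difference per byte over N experiments
    g, N = len(pairs[0][0]), len(pairs)
    return sum(abs(a - b) for h0, h1 in pairs for a, b in zip(h0, h1)) / (g * N)
```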
The test results of the collision resistance property for the proposed hash function are shown in Table 3, where \(W_{N}^{e}(4+)\) denotes the number of experiments in which at least 4 hits occur in \(\{H(msg_{0}),H(msg_{1})\}\). The values of \(D_{KL}(P^{e}\|P^{t})\) and \(\left|\overline{d}_{byte}^{e}-\overline{d}_{byte}^{t}\right|\) for the two instances of QHFM-P are smaller than or close to those for the existing hash instances with the same output length, excepting QHFL, meaning that the collision resistance property of QHFM-P is better than or on a par with that of its peers other than QHFL. For QHFL-296, the experimental value of \(D_{KL}(P^{e}\|P^{t})\) is slightly smaller than that for QHFM-P-296, while the value of \(\left|\overline{d}_{byte}^{e}-\overline{d}_{byte}^{t}\right|\) is greater than that for QHFM-P-296; similarly, the value of \(\left|\overline{d}_{byte}^{e}-\overline{d}_{byte}^{t}\right|\) for QHFL-264 is smaller than that for QHFM-P-264, while the value of \(D_{KL}(P^{e}\|P^{t})\) for QHFL-264 is greater than that for QHFM-P-264. Taking into account the fluctuation of the experimental results, the collision resistance property of QHFM-P can be regarded as being on the same level as that of QHFL.
## V Stability with respect to coin parameters
Recall from section III that the proposed hash function is parameterized by three integers and three angles, among which the two coin angles are crucial components of the underlying controlled alternate quantum walks and may affect the four statistical properties considered in section IV.
Figure 2: Histogram of the 195-bit hash space, where \(N=10000\).
To explore how robust the hashing properties of QHFM-P are with respect to different coin angles, we uniformly divide \((0,\pi/2)\) into \(c=30\) subintervals, take the endpoints (except 0 and \(\pi/2\)) of those subintervals as the candidate values of each coin parameter, and then conduct \(N=2048\) experiments for each pair of values. During the experiments for each angle pair, primary indicators of each type of statistical property are calculated. Specifically, we take \(P\) together with
\(\Delta P\), \(\overline{T}\) together with \(\Delta T\), and \(D_{KL}(P^{e}\|P^{t})\) together with \(\left|\overline{d}_{byte}^{e}-\overline{d}_{byte}^{t}\right|\) to be the primary indicators of the diffusion and confusion properties, the uniform distribution property, and the collision resistance property, respectively. Following Ref. [2], we take the mean Jensen-Shannon divergence [24] (over \(N\) random experiments) between the resulting probability distributions corresponding to the original and modified messages to be the quantitative indicator of the sensitivity of hash value to message. Suppose, in an experiment, the probability distribution produced by the underlying controlled alternate quantum walks controlled by \(msg_{j}\) (\(j=0,1,2,3\)) is \(P_{j}\), the Jensen-Shannon divergence (referred to as JS divergence for short) between \(P_{0}\) and \(P_{1}\) is
\[D_{JS}(P_{0},P_{1})=\frac{D_{KL}(P_{0}\|M)}{2}+\frac{D_{KL}(P_{1}\|M)}{2}, \tag{10}\]
where \(M=(P_{0}+P_{1})/2\), and \(D_{KL}(P_{0}\|M)\) is the KL divergence of \(P_{0}\) from \(M\) (see Eq. (8)).
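A short sketch (ours) of Eq. (10) for two probability distributions given as equal-length lists:

```python
from math import log2

def kl(P, Q):
    # KL divergence of P from Q, with 0*log(0) treated as 0
    return sum(p * log2(p / q) for p, q in zip(P, Q) if p > 0)

def js_divergence(P0, P1):
    M = [(p + q) / 2 for p, q in zip(P0, P1)]
    return kl(P0, M) / 2 + kl(P1, M) / 2
```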
The results of the stability test for the four kinds of properties of QHFM-P-296 are illustrated in Figs. 3 to 6, where \(\overline{D}_{JS}(P_{0},P_{j})\) (\(j=1,2,3\)) in Fig. 3 denotes the average value of \(D_{JS}(P_{0},P_{j})\) over \(N\) experiments. One may notice that the shape of Fig. 4(a) is identical to that of Fig. 5(a); this is because \(\overline{T}\) is directly proportional to \(P\) when the \(N\) pairs of original and modified messages used in the diffusion and confusion test are re-used in the uniform distribution test.
It can be seen from Fig. 3(a) that the average JS divergences between \(P_{0}\) and \(P_{1}\) for different values of coin parameters fall within a narrow range, indicating that the value of \(\overline{D}_{JS}(P_{0},P_{1})\) is quite stable with respect to \(\theta_{0}\) and \(\theta_{1}\). Similarly, the maximum and minimum values of \(\overline{D}_{JS}(P_{0},P_{2})\) (or \(\overline{D}_{JS}(P_{0},P_{3})\)) are very close to each other, suggesting that the sensitivity of hash value to message of the proposed hash function is stable with respect to the coin parameters. In Fig. 4, the mean changed probability \(P\) fluctuates around \(50\%\); meanwhile, the value of the standard deviation \(\Delta P\) is small and does not vary significantly for different values of \(\theta_{0}\) and \(\theta_{1}\). Hence, the diffusion and confusion properties of QHFM-P are robust with regard to different coin angles. Likewise, the experimental values of \(\overline{T}\) for different coin angles are close to the theoretical value \(N/2\), and the results of \(\Delta T\), \(D_{KL}(P^{e}\|P^{t})\), and \(\left|\overline{d}_{byte}^{e}-\overline{d}_{byte}^{t}\right|\) displayed in Figs. 5 and 6 are relatively small, which suggests that the uniform distribution property and the collision resistance property of the proposed hash function are also stable with respect to coin parameters.
## VI Time and space complexity
Similar to existing DQW-based hash schemes, the proposed hash algorithm can be efficiently computed using a classical computer. Thus, it is more appropriate to consider QHFM-P as a classical algorithm and to concentrate on its classical complexity.
By rewriting the 4-term basis state \(\left|x,d_{2},d_{1},c\right\rangle\) of CQWM-P in \(\mathcal{H}_{p}\otimes\mathcal{H}_{d_{2}}\otimes\mathcal{H}_{d_{1}}\otimes \mathcal{H}_{c}\) as a 2-term basis state \(\left|x,j\right\rangle=\left|x,2^{2}d_{2}+2^{1}d_{1}+2^{0}c\right\rangle\) in \(\mathcal{H}_{p}\otimes\mathcal{H}^{8}\), where \(\mathcal{H}^{8}\) is the 8-dimensional Hilbert space, the quantum state of the walker performing parity-dependent quantum walks with one- and two-step memory on a cycle of length \(n\) after \(t\) time steps can be expressed as
\[\left|\psi_{t}\right\rangle=\sum_{x=0}^{n-1}\sum_{j=0}^{7}A_{t}^{x,j}\left|x,j \right\rangle. \tag{11}\]
The results of sequentially performing the coin operator, the direction-determine transform, and the shift operator of QW2M-P on the computational states \(\left|x,j\right\rangle\) are tabulated in Table 4. For ease of notation, we denote \(C^{(1)}\) by \(\left(\begin{smallmatrix}a_{1}&b_{1}\\ c_{1}&d_{1}\end{smallmatrix}\right)\) and write \((x\pm 1)\bmod n\) shortly as \(x\pm 1\). In the first three columns of Table 4, the second terms \(j\) of the basis states and the transformed results are written in binary format, for the binary representation \(j_{2}j_{1}j_{0}\) of \(j\) indicates the values of \(d_{2}\), \(d_{1}\), and \(c\) in a natural way.
Combining Eq. (11) with the first and last columns of Table 4, one can obtain the relation between \(\{A_{t+1}^{x,j}|x\in\mathbb{Z}_{n},j\in\mathbb{Z}_{8}\}\) and \(\{A_{t}^{x,j}|x\in\mathbb{Z}_{n},j\in\mathbb{Z}_{8}\}\) (hereafter, simply \(\{A_{t+1}^{x,j}\}\) and \(\{A_{t}^{x,j}\}\), respectively) after one step of QW2M-P as follows:
\[A_{t+1}^{x,0} =a_{1}A_{t}^{x+1,0}+b_{1}A_{t}^{x+1,1},\] \[A_{t+1}^{x,1} =c_{1}A_{t}^{x+1,4}+d_{1}A_{t}^{x+1,5},\] \[A_{t+1}^{x,2} =a_{1}A_{t}^{x-1,4}+b_{1}A_{t}^{x-1,5},\] \[A_{t+1}^{x,3} =c_{1}A_{t}^{x-1,0}+d_{1}A_{t}^{x-1,1},\] \[A_{t+1}^{x,4} =a_{1}A_{t}^{x+1,6}+b_{1}A_{t}^{x+1,7},\] \[A_{t+1}^{x,5} =c_{1}A_{t}^{x+1,2}+d_{1}A_{t}^{x+1,3},\]
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Hash Instances or Schemes & \(\{W_{N}^{e}(\omega)\mid\omega=0,1,2,3,4+\}\) & \(D_{KL}\left(P^{e}\|P^{t}\right)\) & \(\overline{d}_{byte}^{e}\) & \(\left|\overline{d}_{byte}^{e}-\overline{d}_{byte}^{t}\right|\) \\ \hline \hline QHFM-P-296 & 8637, 1260, 100, 3, 0 & 0.0001461 & 85.30 & 0.03 \\ QHFM-P-264 & 8794, 1143, 61, 2, 0 & 0.0001513 & 85.40 & 0.07 \\ \hline QHFL-296 [2] & 8637, 1278, 81, 4, 0 & 0.0000998 & N/A & 0.15 \\ \hline QHFL-264 [2] & 8784, 1133, 82, 1, 0 & 0.0002427 & N/A & 0.02 \\ \hline QHFM-296 [1] & 8905, 1322, 81, 2, 0 & 0.00003610 & 85.36 & 0.03 \\ \hline QHFM-264 [1] & 8762, 1159, 74, 5, 0 & 0.0001457 & 85.27 & 0.06 \\ \hline Yang21-296 [5] & 8321, 1547, 110, 22, 0 & 0.00086163 & 85.22 & 0.11 \\ Yang19-264 [6] & 9019, 253, 52, 2, 4 & 0.0056472 & 89.76 & 4.43 \\ Yang18-264 [7] & 8904, 1026, 68, 2, 0 & 0.0009686 & 83.64 & 1.69 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test Results of the Collision Resistance Property
\[\begin{split} A_{t+1}^{x,6}&=a_{1}A_{t}^{x-1,2}+b_{1}A_{ t}^{x-1,3},\\ A_{t+1}^{x,7}&=c_{1}A_{t}^{x-1,6}+d_{1}A_{t}^{x-1,7}. \end{split} \tag{12}\]
Relation (12) shows that the eight amplitudes of being at position \(x\) at time \(t+1\) can be calculated from the amplitudes of being at positions \(x\pm 1\) at time \(t\) using 16 multiplications and 8 additions, meaning that the \(8n\) amplitudes of being at \(n\) locations can be calculated using \(O(n)\) basic arithmetic operations. Similarly, one can obtain the relation between \(\{A_{t+1}^{x,j}\}\) and \(\{A_{t}^{x,j}\}\) for QW1M-P, where the amplitudes can also be updated using \(O(n)\) basic operations at each time step.
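For illustration, relation (12) translates directly into the following sketch (ours) of one QW2M-P step over the \(8n\) amplitudes, where \((a_{1},b_{1},c_{1},d_{1})\) are the entries of the coin \(C^{(1)}\):

```python
def qw2m_step(A, n, a1, b1, c1, d1):
    """A[x][j] holds the amplitude A_t^{x,j}; returns the amplitudes at time t+1."""
    B = [[0j] * 8 for _ in range(n)]
    for x in range(n):
        xp, xm = (x + 1) % n, (x - 1) % n
        B[x][0] = a1 * A[xp][0] + b1 * A[xp][1]
        B[x][1] = c1 * A[xp][4] + d1 * A[xp][5]
        B[x][2] = a1 * A[xm][4] + b1 * A[xm][5]
        B[x][3] = c1 * A[xm][0] + d1 * A[xm][1]
        B[x][4] = a1 * A[xp][6] + b1 * A[xp][7]
        B[x][5] = c1 * A[xp][2] + d1 * A[xp][3]
        B[x][6] = a1 * A[xm][2] + b1 * A[xm][3]
        B[x][7] = c1 * A[xm][6] + d1 * A[xm][7]
    return B
```

Each node indeed costs 16 multiplications and 8 additions, so one step takes \(O(n)\) basic operations, as stated above.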
If one wants to obtain an \(L\)-bit hash value (\(L\) is a multiple of \(m\)) of an \(M\)-bit message, then the walker moves \(M\) steps on a cycle with \(L/m=O(L)\) nodes, where \(m\) is constant with respect to \(M\). The assignment of the initial amplitudes \(\{A_{0}^{x,j}\}\) can be carried out with \(O(L)\) time and \(O(L)\) memory space, the values of \(\{A_{M}^{x,j}\}\) can be calculated from \(\{A_{0}^{x,j}\}\) using \(O(ML)\) basic operations with \(O(L)\) space, and the hash value can be computed from \(\{A_{M}^{x,j}\}\) using \(O(L)\) multiplications and \(O(L)\) modulo operations with \(O(L)\) space. As a result, the time and space complexity of QHFM-P-\(L\) are \(O(ML)\) and \(O(L)\), respectively, which are the same as those of the state-of-the-art hash functions [1, 2, 5, 6, 7] based on discrete quantum walks.
## VII Conclusion
In this article, a new QWM-based hash function QHFM-P is proposed and analyzed. Similar to the existing QWM-based hash function [1], the proposed scheme is also constructed by using quantum walks with one- and two-step memory; the major distinction lies in that the underlying walks with two-step memory of QHFM and QHFM-P are different extensions of Mc Gettrick's QW1M model [13]. The proposed hash
function has the same time and space complexity as those of QHFM, and it also has near-ideal statistical properties. It is noteworthy that the four kinds of statistical properties of QHFM-P are quite stable with respect to the coin angles, which suggests the robustness of the hashing properties of the proposed scheme.
In the future, we will try to identify the region of stability in the plane of coin and other parameters, examine the stability with respect to various direction-determine transforms, and establish the conditions for constructing good QWM-based hash functions.
|
2306.02194 | PathFinder: A unified approach for handling paths in graph query
languages | Path queries are a core feature of modern graph query languages such as
Cypher, SQL/PGQ, and GQL. These languages provide a rich set of features for
matching paths, such as restricting to certain path modes (shortest, simple,
trail) and constraining the edge labels along the path by a regular expression.
In this paper we present PathFinder, a unifying approach for dealing with path
queries in all these query languages. PathFinder leverages a compact
representation of the (potentially exponential number of) paths that can match
a given query, extends it with pipelined execution, and supports all commonly
used path modes. In the paper we describe the algorithmic backbone of
PathFinder, provide a reference implementation, and test it over a large set of
real-world queries and datasets. Our results show that PathFinder exhibits very
stable behavior, even on large data and complex queries, and its performance is
an order of magnitude better than that of many modern graph engines. | Benjamín Farías, Wim Martens, Carlos Rojas, Domagoj Vrgoč | 2023-06-03T20:53:22Z | http://arxiv.org/abs/2306.02194v2 | # Evaluating Regular Path Queries in GQL and SQL/PGO:
###### Abstract.
We tackle the problem of answering regular path queries over graph databases, while simultaneously returning the paths which witness our answers. We study this problem under the arbitrary, all-shortest, trail, and simple-path semantics, which are the common path matching semantics considered in the research literature, and are also prescribed by the upcoming ISO Graph Query Language (GQL) and SQL/PGO standards. In the paper we present how the classical product construction from the theoretical literature on graph querying can be modified in order to allow returning paths. We then discuss how this approach can be implemented, both when the data resides in a classical B+ tree structure, and when it is assumed to be available in main memory via a compressed sparse row representation. Finally, we perform a detailed experimental study of different trade-offs these algorithms have over real world queries, and compare them with existing approaches.
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
FootnoteFootnote: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote †: copyrighted: none
+
Footnote: copyrighted: none none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none none
+
Footnote: copyrighted: none: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none none
+
Footnote: copyrighted: none none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
+
Footnote: copyrighted: none
are three different shortest paths linking Joe to ENS Paris (traced by the edges \(\mathrm{e}3\to\mathrm{e}5\to\mathrm{e}1\), \(\mathrm{e}4\to\mathrm{e}7\to\mathrm{e}10\), and \(\mathrm{e}3\to\mathrm{e}6\to\mathrm{e}10\), respectively), and we might wish to retrieve all of them.
Overall, GQL and SQL/PGQ provide an interesting challenge in terms of RPQ evaluation, since they combine the requirement to find appropriately labeled paths, with restrictions on such paths. In this paper we study how, given an RPQ, one can attach a GQL or SQL/PGQ path mode to this RPQ and return the required paths.
**Our contribution.** The main objective of our work is to show how _classical graph search algorithms can be used to efficiently evaluate regular path queries under all path modes prescribed by GQL and SQL/PGQ_. As such, our contributions can be summarized as follows:
* We extend the graph product approach (Bordes and Rivest, 1995; Rivest, 1996) used in the theoretical literature to evaluate RPQs, with the ability to return paths. We do so by adapting the classical BFS/DFS graph search algorithms to take into account any RPQ semantics as prescribed by GQL or SQL/PGQ.
* We develop a new algorithm for finding _all shortest paths_ witnessing an RPQ answer, based on an extension of the classical BFS algorithm. We also pinpoint the precise requirements needed for other path modes to work correctly.
* We implement all of the algorithms inside of MillenniumDB, an open-source graph engine (Ross et al., 2017), and test their performance both in a classical setup of B+tree-based storage, and when data is available in a main memory index.
**Limitations.** For all of our algorithms we will assume that the starting point for the path is fixed. Namely, we will consider RPQs such as \((\mathsf{Joe},\mathsf{knows}^{+},?x)\), which assume that the path starts with the node Joe. While RPQs allow both ends of the path to be variables, we fix the starting point for two reasons: (i) this approach allows us to use standard graph traversal algorithms to evaluate RPQs; and (ii) RPQs are usually part of a larger query which selects one or more nodes from which path searching is done (Bordes and Rivest, 1996). We remark that having multiple starting points is not an issue for our algorithms, but they would act as independent searches. We will also not consider top-\(k\) selectors from GQL (Gopal, 2017), since even without those we have to handle 11 different path evaluation modes. Finally, the GQL and SQL/PGQ standards allow mixing path modes for different parts of a path. For instance, one can define a query that concatenates two paths: one a simple path, and another a trail, and also require that such concatenated path is a shortest path. In this paper we handle only the base case of having a _single_ RPQ and a _single_ path mode.
**Organization.** We define graph databases and RPQs in Section 2. Algorithms for the WALK restrictor are studied in Section 3, while TRAIL and SIMPLE are tackled in Section 4. We discuss implementation details in Section 5, and experimental results in Section 6. Related work is discussed in Section 7. We conclude in Section 8. Additional details, code, and proofs can be found at (Kang et al., 2019).
## 2. Graph Databases and RPQs
In this section we define graph databases and regular path queries.
**Graph databases.** Let Nodes be a set of node identifiers and Edges be a set of edge identifiers, with Nodes and Edges being disjoint. Additionally, let \(\mathsf{Lab}\) be a set of labels. Following the research literature (Bordes and Rivest, 1995; Rivest, 1996; Rivest, 1996), we define graph databases as follows.
Definition 2.1 ().: A _graph database_\(G\) is a tuple \((V,E,\rho,\lambda)\), where:
* \(V\subseteq\mathsf{Nodes}\) is a finite set of nodes.
* \(E\subseteq\mathsf{Edges}\) is a finite set of edges.
* \(\rho:E\mapsto(V\times V)\) is a total function. Intuitively, \(\rho(e)=(v_{1},v_{2})\) means that \(e\) is a directed edge going from \(v_{1}\) to \(v_{2}\).
* \(\lambda:E\rightarrow\mathsf{Lab}\) is a total function assigning a label to an edge.
Notice that, similarly as in (Rivest, 1996), we use a simplified version of property graphs (Gopal, 2017), where we only consider nodes, edges, and edge labels. Most importantly, we omit properties (with their associated values) that can be assigned to nodes and edges, as well as node labels. This is done since the type of queries we consider only use nodes, edges, and edge labels. However, all of our results transfer verbatim to the full version of property graphs. We also remark that our results apply directly to RDF graphs (Rivest, 1996) and edge-labeled graphs (Bordes and Rivest, 1996; Rivest, 1996), which do not use explicit edge identifiers.
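To make Definition 2.1 concrete, here is a minimal sketch (ours, not tied to any engine) of such a graph database; the example edge and its label are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class GraphDB:
    nodes: set = field(default_factory=set)   # V
    rho: dict = field(default_factory=dict)   # edge id -> (source node, target node)
    lam: dict = field(default_factory=dict)   # edge id -> label (the function lambda)

    def add_edge(self, e, v1, v2, label):
        self.nodes.update((v1, v2))
        self.rho[e] = (v1, v2)
        self.lam[e] = label

G = GraphDB()
G.add_edge("e1", "Joe", "ENS Paris", "studied_at")   # hypothetical label
```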
**Paths.** A _path_ in a graph database \(G=(V,E,\rho,\lambda)\) is a sequence
\[p=v_{0}e_{1}v_{1}e_{2}v_{2}\cdots e_{n}v_{n}\]
where \(n\geq 0\), \(e_{i}\in E\), and \(\rho(e_{i})=(v_{i-1},v_{i})\) for \(i=1,\ldots,n\). If \(p\) is a path in \(G\), we write \(\mathrm{lab}(p)\) for the sequence of labels \(\mathrm{lab}(p)=\lambda(e_{1})\cdots\lambda(e_{n})\) occurring on the edges of \(p\). We write \(\mathrm{src}(p)\) for the starting node \(v_{0}\) of \(p\), and \(\mathrm{tgt}(p)\) for the end node \(v_{n}\) of \(p\). The length of a path \(p\), denoted \(\mathrm{len}(p)\), is defined as the number \(n\) of edges it uses. We will say that a path \(p\) is a:
* WALK, for any \(p\).1
Footnote 1: The term _path_ is used in the database literature to denote what is called a _walk_ in graph theory. GQL and SQL/PGQ use WALK as a keyword for denoting any path.
* TRAIL, if \(p\) does not repeat an edge. That is, if \(e_{i}\neq e_{j}\) for any pair of edges \(e_{i},e_{j}\) in \(p\) with \(i\neq j\).
* ACYCLIC, if \(p\) does not repeat any node. That is, \(v_{i}\neq v_{j}\) for any pair of nodes \(v_{i},v_{j}\) in \(p\) with \(i\neq j\).
* SIMPLE, if \(p\) does not repeat a node, except that possibly \(\mathrm{src}(p)=\mathrm{tgt}(p)\). That is, if \(v_{i}\neq v_{j}\) for any pair of nodes \(v_{i},v_{j}\) in \(p\) with \((i,j)\in\{0,\ldots,n\}^{2}-\{(0,n)\}\).
Additionally, given a set of paths \(P\) over a graph database \(G\), we will say that \(p\in P\) is a SHORTEST path in \(P\), if it holds that \(\mathrm{len}(p)\leq\mathrm{len}(p^{\prime})\), for each \(p^{\prime}\in P\). We will use \(\mathrm{Paths}(G)\) to denote the (potentially infinite) set of all paths in a graph database \(G\).
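The restrictors can be checked directly from the node and edge sequences of a path; the sketch below (ours) assumes a path \(p=v_{0}e_{1}v_{1}\cdots e_{n}v_{n}\) is given as the list of its nodes and the list of its edges.

```python
def is_trail(nodes, edges):
    return len(edges) == len(set(edges))        # no repeated edge

def is_acyclic(nodes, edges):
    return len(nodes) == len(set(nodes))        # no repeated node

def is_simple(nodes, edges):
    # only src(p) = tgt(p) may coincide
    if len(nodes) > 1 and nodes[0] == nodes[-1]:
        return is_acyclic(nodes[:-1], edges)
    return is_acyclic(nodes, edges)
```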
**Regular path queries.** The class of queries we study in this paper are _regular path queries_, or _RPQs_ for short. RPQs have been well studied in the research literature (Bordes and Rivest, 1996; Rivest, 1996; Rivest, 1996), they form the backbone of most practical graph query languages (Gopal, 2017; Rivest, 1996; Rivest, 1996; Rivest, 1996), and are also the basis of navigation in the upcoming GQL and SQL/PGQ standards (Gopal, 2017). For us, a regular path query will be an expression of the form
\[\mathsf{selector?}\ \mathsf{restrictor}\ (v,\mathsf{regex},?x)\]
where \(v\in\mathsf{Nodes}\) is a node, \(\mathsf{regex}\) is a regular expression, and \(?x\) is a variable. Following GQL and SQL/PGQ (Gopal, 2017), we use _selectors_ and _restrictors_ to specify which paths are to be returned by the RPQ. The grammar of selectors and restrictors is as follows:
\[\mathsf{restrictor}:\mathsf{WALK}\mid\mathsf{TRAIL}\mid\mathsf{SIMPLE}\mid\mathsf{ACYCLIC}\] \[\mathsf{selector}:\mathsf{ANY}\mid\mathsf{ANY}\ \mathsf{SHORTEST}\mid\mathsf{ALL}\ \mathsf{SHORTEST}\]
Traditionally (Bordes and Rivest, 1996), the \((v,\mathsf{regex},?x)\) part of an RPQ tells us that we wish to find all the nodes \(v^{\prime}\) such that there is
a path \(p\) from \(v\) to \(v^{\prime}\), such that \(\operatorname{lab}(p)\) is a word in the language of the regular expression \(\mathsf{regex}\).2 Since the set of all such paths can be infinite (Brandt et al., 2017), restrictors allow us to specify which paths are considered valid, while selectors filter out results from a given set of valid paths. Next, we formally define the semantics of an RPQ.
Footnote 2: Notice that we assume that the starting node in an RPQ is fixed. RPQs can generally also have a variable in the place of \(v\), but we consider the more limited case because it extends itself naturally to traditional graph search algorithms.
Let \(G\) be a graph database and \(q\) an RPQ of the form:
\[\mathsf{restrictor}\ (v,\mathsf{regex},?x)\]
namely, we omit the optional selector part for now. We use the notation \(\mathsf{Paths}(G,\mathsf{restrictor})\) to denote the set of all paths in \(G\) that are valid according to restrictor. For example, \(\mathsf{Paths}(G,\mathsf{TRAIL})\) is the set of all trails in \(G\). We then define the semantics of \(q\) over \(G\), denoted \(\llbracket q\rrbracket_{G}\), where \(q=\mathsf{restrictor}\ (v,\mathsf{regex},?x)\), as follows:
\[\llbracket\mathsf{restrictor}\ (v,\mathsf{regex},?x)\rrbracket_{G}= \ \{\ (p,v^{\prime})\ \ |\ p\in\mathsf{Paths}(G,\mathsf{restrictor}),\] \[\mathsf{src}(p)=v,\mathsf{tgt}(p)=v^{\prime},\] \[\operatorname{lab}(p)\in\mathcal{L}(\mathsf{regex})\ \}\.\]
Here \(\mathcal{L}(\mathsf{regex})\) denotes the language of the regular expression \(\mathsf{regex}\). Intuitively, for an RPQ "TRAIL \((v,\mathsf{regex},?x)\)", we will return all pairs \((p,v^{\prime})\) such that \(p\) is a TRAIL in our graph that connects \(v\) to \(v^{\prime}\) (and \(\operatorname{lab}(p)\in\mathcal{L}(\mathsf{regex})\)), and similarly for other restrictors. Finally, the semantics of queries that also use a selector is defined on a case-by-case basis. For this, we will use \(q\) to denote the selector-free RPQ \(q=\mathsf{restrictor}\ (v,\mathsf{regex},?x)\). We now have:
* \(\llbracket\mathsf{ANY}\ \mathsf{restrictor}\ (v,\mathsf{regex},?x)\rrbracket_{G}\) returns, for each node \(v^{\prime}\) reachable from \(v\) by a path \(p\) with \((p,v^{\prime})\in\llbracket q\rrbracket_{G}\), a _single such pair_\((p,v^{\prime})\in\llbracket q\rrbracket_{G}\), chosen non-deterministically.
* \(\llbracket\mathsf{ANY}\ \mathsf{SHORTEST}\ \mathsf{restrictor}\ (v,\mathsf{regex},?x)\rrbracket_{G}\) returns, for each node \(v^{\prime}\) reachable from \(v\) by a path \(p\) with \((p,v^{\prime})\in\llbracket q\rrbracket_{G}\), a _single pair_\((p,v^{\prime})\in\llbracket q\rrbracket_{G}\), where \(p\) is SHORTEST among all paths \(p^{\prime}\) for which \((p^{\prime},v^{\prime})\in\llbracket q\rrbracket_{G}\).
* \(\llbracket\mathsf{ALL}\ \mathsf{SHORTEST}\ \mathsf{restrictor}\ (v,\mathsf{regex},?x)\rrbracket_{G}\) will return, for each \(v^{\prime}\) reachable from \(v\) by a path \(p\) with \((p,v^{\prime})\in\llbracket q\rrbracket_{G}\), the _set of all pairs_\((p,v^{\prime})\in\llbracket q\rrbracket_{G}\) with \(p\) SHORTEST among paths \(p^{\prime}\) for which \((p^{\prime},v^{\prime})\in\llbracket q\rrbracket_{G}\).
Intuitively, we can think of paths being grouped by \(v^{\prime}\) before the selector is applied. Notice that the semantics of \(\mathsf{ANY}\) and \(\mathsf{ANY}\) SHORTEST is non-deterministic when there are multiple (shortest) paths connecting some \(v^{\prime}\) with the starting node \(v\). While the selector is optional in our RPQ syntax, GQL and SQL/PGQ prohibit the WALK restrictor from being present without a selector attached to it, in order to ensure a finite result set. Therefore, in this paper we will assume that queries using the WALK restrictor will always have an associated selector. This gives rise to 11 total combinations of query prefixes to specify the type of path(s) that are to be returned for each node reachable by the \((v,\mathsf{regex},?x)\) part of the query.
In this paper, we use \(|\llbracket q\rrbracket_{G}|\) to denote the _size_ of all the paths in \(\llbracket q\rrbracket_{G}\). Notice that this is not simply the cardinality of the set \(\llbracket q\rrbracket_{G}\), but instead, the sum of the lengths of all the paths present in \(\llbracket q\rrbracket_{G}\). Since any algorithm for computing \(\llbracket q\rrbracket_{G}\) has to, at the very least, write down each symbol on the paths in \(\llbracket q\rrbracket_{G}\), this factor needs to be accounted for when analyzing the runtime of our algorithms.
**Regular expressions and automata.** We assume basic familiarity with regular expressions and finite state automata (Zhu and Kasten, 2017). If \(\mathsf{regex}\) is a regular expression, we will denote by \(\mathcal{L}(\mathsf{regex})\) the language of \(\mathsf{regex}\). We use \(\mathcal{A}=(Q,\Sigma,\delta,q_{0},F)\) to denote a non-deterministic finite automaton (NFA). Here \(Q\) is a set of states, \(\Sigma\) a finite alphabet of edge labels, \(\delta\) the transition relation over \(Q\times\Sigma\times Q\), \(q_{0}\) the initial state, and \(F\) the set of final states, respectively. An NFA is deterministic (or DFA for short), if \(\delta\) is a function. Each \(\mathsf{regex}\) can be converted into an equivalent NFA of size linear in \(|\mathsf{regex}|\). There are several standard ways of converting an expression to an automaton, for instance using Thompson's, or Glushkov's construction (Zhu and Kasten, 2017). In this paper, we assume that the automaton has a single initial state and that no \(\varepsilon\)-transitions are present. An NFA is called _unambiguous_ if it has at most one accepting run for every word. Every DFA is unambiguous, but the converse is not necessarily true.
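To make the later sketches concrete, the snippet below shows one simple way such an automaton can be represented in Python. It hard-codes the NFA for \(\mathsf{knows}^{+}\cdot\mathsf{lives}\) used in Example 3.1 below, and it assumes that a regex-to-NFA converter (e.g. Thompson's or Glushkov's construction) is available when an arbitrary expression has to be handled; this is only an illustration, not the paper's implementation.

```
# NFA (Q, Sigma, delta, q0, F) for the expression knows+ . lives:
# q0 --knows--> q1, q1 --knows--> q1, q1 --lives--> q2 (final).
NFA_KNOWS_LIVES = {
    "states": {"q0", "q1", "q2"},
    "delta": {                      # delta[(state, label)] -> set of successor states
        ("q0", "knows"): {"q1"},
        ("q1", "knows"): {"q1"},
        ("q1", "lives"): {"q2"},
    },
    "initial": "q0",                # single initial state, no epsilon-transitions
    "finals": {"q2"},
}
# This particular automaton happens to be deterministic, hence also unambiguous.
```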
## 3. The WALK Semantics
In this section we describe algorithms for evaluating RPQs under the WALK semantics. That is, we treat queries of the form:
\[\mathsf{selector}\ \mathsf{WALK}\ (v,\mathsf{regex},?x)\]
Notice that here the selector part is non-optional, since the set of all walks can be infinite. One method for evaluating RPQs, long standing in the theoretical literature (Brandt et al., 2017; Brandt et al., 2017; Brandt et al., 2017), is known under the name product construction, or product graph, and since it will form the basis of many algorithms we present, we describe it next.
**Product graph.** Given a graph database \(G=(V,E,\rho,\lambda)\), and an expression of the form \(q=(v,\mathsf{regex},?x)\)3, the _product graph_ is constructed by first converting the regular expression \(\mathsf{regex}\) into an equivalent non-deterministic finite automaton \((Q,\Sigma,\delta,q_{0},F)\). The product graph \(G_{\times}\), is then defined as the graph database \(G_{\times}=(V_{\times},E_{\times},\rho_{\times},\lambda_{\times})\), where:
Footnote 3: Notice that the product construction does not depend on selectors or restrictors, since it was traditionally not used to return paths.
* \(V_{\times}=V\times Q\)
* \(E_{\times}=\{(e,(q_{1},a,q_{2}))\in E\times\delta\ |\ \lambda(e)=a\}\)
* \(\rho_{\times}((e,(q_{1},a,q_{2})))=((x,q_{1}),(y,q_{2}))\) whenever \(\rho(e)=(x,y)\)
* \(\lambda_{\times}((e,d))=\lambda(e)\).
Intuitively, the product graph is the graph database obtained by the cross product of the original graph and the automaton for \(\mathsf{regex}\). Considering the expression \(q=(v,\mathsf{regex},?x)\), we can now find all \(v^{\prime}\) reachable from \(v\) in the original graph database \(G\) via a path whose label belongs to \(\mathcal{L}(\mathsf{regex})\), by detecting all nodes of the form \((v^{\prime},q^{\prime})\in G_{\times}\), with \(q^{\prime}\in F\) that are reachable (by any path) from \((v,q_{0})\). Notice that in \(G_{\times}\) we need not worry whether the path belongs to \(\mathcal{L}(\mathsf{regex})\), since this is guaranteed by the product construction, which mimics the standard cross product of two NFAs (Zhu and Kasten, 2017). Therefore all such \(v^{\prime}\)s can be found by using any standard graph search algorithm (BFS/DFS) on \(G_{\times}\) starting in the node \((v,q_{0})\). A by-product of this approach is that it is not necessary to construct \(G_{\times}\) in its entirety, but rather, we build (a portion of) \(G_{\times}\) on-the-fly, as we are exploring the product graph. Next, we show how the product construction can be used to return paths.
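As an illustration of this construction, the following Python sketch materializes the product edges and then runs a plain BFS from \((v,q_{0})\) to collect the reachable end nodes. The data layout is hypothetical (the graph is a list of edges `(edge_id, src, label, tgt)` and the automaton a collection of transitions `(q, label, q')`), and a real engine would expand \(G_{\times}\) on-the-fly instead of building it up front, as discussed above.

```
from collections import deque

def product_edges(edges, delta):
    """One product edge per (graph edge, automaton transition) pair with the same label."""
    out = []
    for (eid, x, a, y) in edges:
        for (q1, b, q2) in delta:
            if a == b:
                out.append(((x, q1), (y, q2)))     # product edge from (x,q1) to (y,q2)
    return out

def reachable(edges, delta, q0, finals, v):
    """End nodes v' reachable from v by a path whose label is in L(regex)."""
    adj = {}
    for src, tgt in product_edges(edges, delta):
        adj.setdefault(src, []).append(tgt)
    start = (v, q0)
    seen, queue, answers = {start}, deque([start]), set()
    while queue:
        n, q = queue.popleft()
        if q in finals:                             # includes v itself if q0 is final
            answers.add(n)
        for nxt in adj.get((n, q), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return answers
```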
### Any (Shortest) Walks
We begin by treating the WALK restrictor combined with selectors ANY and ANY SHORTEST. Namely, we provide the algorithm for queries of the form:
\[q=\text{ANY (SHORTEST)? WALK}\ (v,\text{regex},?x) \tag{1}\]
The idea here follows directly from the product construction; namely, given a graph database \(G\) and a query \(q\) as described above, we can perform a classical graph search algorithm such as BFS or DFS starting at the node \((v,q_{0})\) of the product graph \(G_{\times}\), built from the automaton \(\mathcal{A}\) for regex and \(G\). Since both these algorithms also support reconstructing a single (shortest in the case of BFS) path to any reached node, we obtain the desired semantics for RPQs of the form (1). Our solution is presented in Algorithm 1.
The basic object we will be manipulating is a _search state_, which is simply a quadruple of the form \((n,q,e,prev)\), where \(n\) is a node of \(G\) we are currently exploring, \(q\) is the state of \(\mathcal{A}\) in which we are currently located, while \(e\) is the edge of \(G\) we used to reach \(n\), and \(prev\) is a pointer to the search state we used to reach \((n,q)\) in \(G_{\times}\). Intuitively, the \((n,q)\)-part of the search state allows us to track the node of \(G_{\times}\) we are traversing, while \(e\), together with \(prev\) allows to reconstruct the path from \((v,q_{0})\) that we used to reach \((n,q)\). The algorithm then needs the following three data structures:
* Open, which is a queue (in case of BFS), or stack (in case of DFS) of search states, with usual push() and pop() methods.
* Visited, which is a dictionary of search states we have already visited in our traversal, maintained so that we do not end up in an infinite loop. We assume that \((n,q)\) can be used as a search key to check if some \((n,q,e,prev)\in\) Visited. We remark that \(prev\) always points to a state stored in Visited.
* ReachedFinal is a set containing nodes we already returned as query answers, in case we re-discover them via a different end state (recall that an NFA can have several end states).
The algorithm explores the product of \(G\) and \(\mathcal{A}\) using either BFS or DFS, starting from \((v,q_{0})\). In line 6 we initialize a search state based on \((v,q_{0})\), and add it to Open and Visited. Lines 9-11 check whether the zero-length path containing \(v\) is an answer (in which case \(v\) needs to be a node in \(G\)). The main loop of line 12 is the classical BFS/DFS algorithm that pops an element from Open (line 13), and starts exploring its neighbors in \(G_{\times}\). When exploring each state \((n,q,e,prev)\) in Open, we scan all the transitions \((q,a,q^{\prime})\) of \(\mathcal{A}\) that originate from \(q\), and look for neighbors of \(n\) in \(G\) reachable by an \(a\)-labeled edge (line 14). Here, writing \((n^{\prime},q^{\prime},edge^{\prime})\in\) Neighbors\(((n,q,edge,prev),G,\mathcal{A})\) simply means that \(\rho(edge^{\prime})=(n,n^{\prime})\) in \(G\), and that \((q,\lambda(edge^{\prime}),q^{\prime})\) belongs to the transition relation of \(\mathcal{A}\). If the pair \((n^{\prime},q^{\prime})\) has not been visited yet, we add it to Visited and Open, which allows it to be expanded later on in the algorithm (lines 15-18). Furthermore, if \(q^{\prime}\) is also a final state, and \(n^{\prime}\) was not yet added to Reachedfinal (line 19), we found a new solution; meaning a WALK from \(v\) to \(n^{\prime}\) whose label is in the language of regex. This walk can then be reconstructed using the \(prev\) part of search states stored in Visited in a standard fashion (function getPath). Finally, we need to explain why the ReachedFinal set is used to store each previously undiscovered solution in line 20. Basically, since our automaton is non-deterministic and can have multiple final states, two things can happen:
1. The automaton might be ambiguous, meaning that there could be two different runs of the automaton that accept the same word in \(\mathcal{L}(\text{regex})\). This, in turn, could result in the same path being returned twice, which is incorrect.
2. There could be two different paths \(p\) and \(p^{\prime}\) in \(G\) that link \(v\) to some \(n\), and such that both \(\text{lab}(p)\in\mathcal{L}(\text{regex})\) and \(\text{lab}(p^{\prime})\in\mathcal{L}(\text{regex})\), but the accepting runs of \(\mathcal{A}\) on these two words end up in different end states of \(\mathcal{A}\). Again, this could result with \(n\) and a path to it being returned twice.
Both of these problems are solved by using the ReachedFinal set, which stores a node the first time it is returned as a query answer (i.e. as an endpoint of a path reaching this node). Then, each time we try to return the same node again, we do so only if the node was not returned previously (line 19). This basically means that for each node that is a query answer, a single path is returned, without any restrictions on the automaton used for modeling the query.
The described procedure continues until Open is empty, meaning that there are no more states to expand and all reachable nodes have been found with a WALK from the starting node \(v\). It is important to mention that, when choosing to use BFS, this algorithm has the added benefit of returning the SHORTEST WALK to every reachable node, since it always prioritizes the shortest paths when traversing the product graph. For DFS an arbitrary path is returned.
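The Python sketch below mirrors this structure on a simplified in-memory representation (an adjacency dictionary `adj[(node, label)] -> list of (neighbor, edge_id)` and a transition dictionary `delta[(state, label)] -> set of states`); it is a rough re-implementation for illustration only, not MillenniumDB's code. With `bfs=True` it returns one shortest walk per reachable node, with `bfs=False` an arbitrary walk.

```
from collections import deque

def any_walk(nodes, adj, delta, q0, finals, v, bfs=True):
    if v not in nodes:
        return {}
    start = (v, q0, None, None)                  # search state (n, q, edge, prev)
    visited = {(v, q0)}
    open_ = deque([start])
    reached_final, solutions = set(), {}
    if q0 in finals:                             # the zero-length path at v is an answer
        reached_final.add(v)
        solutions[v] = [v]
    while open_:
        state = open_.popleft() if bfs else open_.pop()
        n, q, _, _ = state
        for (s, a), succs in delta.items():      # transitions leaving the current state
            if s != q:
                continue
            for (n2, eid) in adj.get((n, a), []):
                for q2 in succs:
                    if (n2, q2) in visited:
                        continue
                    new = (n2, q2, eid, state)
                    visited.add((n2, q2))
                    open_.append(new)
                    if q2 in finals and n2 not in reached_final:
                        reached_final.add(n2)
                        solutions[n2] = get_path(new)
    return solutions

def get_path(state):
    """Follow prev pointers back to the start node and reverse."""
    steps = []
    while state is not None:
        n, _, eid, prev = state
        steps.append(n if eid is None else (eid, n))
        state = prev
    return steps[::-1]
```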
**Example 3.1**.: Consider again the graph \(G\) in Figure 1, and let
\[q=\textsc{ANY SHORTEST WALK}\ (\mathsf{John},\mathsf{knows}^{+}\cdot\mathsf{lives},\mathsf{?x}).\]
Namely, we wish to find places where friends of John live. Looking at the graph in Figure 1, we see that Rome is such a place, and the shortest path reaching it starts with John, and loops back to him using the edges e1 and e2, before reaching Rome (via e8), as required. To compute the answer, Algorithm 1 first converts the regular expression \(\mathsf{knows}^{+}\cdot\mathsf{lives}\) into the following automaton:
To find shortest paths, we use the BFS version of Algorithm 1 and explore the product graph starting at \((\mathsf{John},q_{0})\). The algorithm then explores the only reachable neighbor \((\mathsf{Joe},q_{1})\), and continues by visiting \((\mathsf{John},q_{1})\), \((\mathsf{Paul},q_{1})\) and \((\mathsf{Lily},q_{1})\). When expanding \((\mathsf{John},q_{1})\) the first solution, Rome, is found and recorded in ReachedFinal. The algorithm continues by reaching \((\mathsf{Anne},q_{1})\) and \((\mathsf{Jane},q_{1})\) from \((\mathsf{Paul},q_{1})\). When the \((\mathsf{Lily},q_{1})\) node is then expanded, it would try to reach \((\mathsf{Jane},q_{1})\) again, which is blocked in line 15. Expanding \((\mathsf{Anne},q_{1})\) would try to revisit Rome, but since this solution was already returned, we ignore it. The structure of Visited upon executing the algorithm is illustrated in Figure 2. Here we represent the pointer \(\mathit{prev}\) as an arrow to other search states in Visited, and annotate the arrow with the edge witnessing the connection. Notice that we can revisit a node of \(G\) (e.g. John), but not a node of \(G_{\times}\) (e.g. \((\mathsf{Jane},q_{1})\)).
We can summarize the results about Algorithm 1 as follows:
**Theorem 3.2**.: _Let \(G\) be a graph database, and \(q\) the query:_
* \(q=\textsc{ANY}\ (\textsc{SHORTEST})?\ \textsc{WALK}\ (v,\mathsf{regex},\mathsf{?x})\)_._
_If \(\mathcal{A}\) is the automaton for \(\mathsf{regex}\), then Algorithm 1 correctly computes \([\![q]\!]_{G}\) in time \(O(|\mathcal{A}|\cdot|G|+|[\![q]\!]_{G}|)\)._
### All Shortest Walks
To fully cover the WALK semantics of RPQs, we next show how to evaluate queries of the form:
\[q=\textsc{ALL SHORTEST WALK}\ (v,\mathsf{regex},\mathsf{?x}) \tag{2}\]
over a graph database \(G\). For this, we will extend the BFS version of Algorithm 1 in order to support finding _all_ shortest paths between a pair \((v,v^{\prime})\) of nodes, instead of a single one. The intuition here is quite simple: to obtain all shortest paths, upon reaching \(v^{\prime}\) from \(v\) by a path conforming to \(\mathsf{regex}\) _for the first time_, the BFS algorithm will do so using a shortest path. The length of this path can then be recorded (together with \(v^{\prime}\)). When a new path reaches the same, already visited node \(v^{\prime}\), if it has length equal to the recorded length for \(v^{\prime}\), then this path is also a valid answer to our query. The algorithm implementing this logic is presented in Algorithm 2.
```
1:  function AllShortestWalk(v, query)
2:      A ← Automaton(regex)                 ▷ q0 initial state, qF final state
3:      Open.init()
4:      Visited.init()
5:      if v ∈ V then
6:          startState ← (v, q0, 0, ⊥)
7:          Visited.push(startState)
8:          Open.push(startState)
9:      while Open ≠ ∅ do
10:         current ← Open.pop()             ▷ current = (n, q, depth, prevList)
11:         if q == qF then
12:             currentPaths ← getAllPaths(current)
13:             Solutions.add(currentPaths)
14:         for next = (n', q', edge') ∈ Neighbors(current, G, A) do
15:             if (n', q', *, *) ∈ Visited then
16:                 (n', q', depth', prevList') ← Visited.get(n', q')
17:                 if depth + 1 == depth' then
18:                     prevList'.add(⟨current, edge'⟩)
19:             else
20:                 prevList.init()
21:                 prevList.add(⟨current, edge'⟩)
22:                 newState ← (n', q', depth + 1, prevList)
23:                 Visited.push(newState)
24:                 Open.push(newState)
    return Solutions
25: function getAllPaths(state = (n, q, depth, prevList))
26:     if prevList == ⊥ then                ▷ Initial state
27:         return [v]
28:     for prev = (prevState, prevEdge) ∈ prevList do
29:         for prevPath ∈ getAllPaths(prevState) do
30:             paths.add(prevPath.extend(n, prevEdge))
    return paths
```
**Algorithm 2** Evaluation algorithm for a graph database \(G\) and an RPQ \(query\) = ALL SHORTEST WALK \((v,\mathsf{regex},\mathsf{?x})\).
As before, we use \(\mathcal{A}\) to denote the NFA for \(\mathsf{regex}\). We will additionally assume that \(\mathcal{A}\) is unambiguous and that it has a single accepting state \(\mathsf{q}_{F}\). The main difference to Algorithm 1 is in the _search state_ structure. A search state is now a quadruple of the form \((\mathsf{n},\mathsf{q},\mathsf{depth},\mathit{prevList})\), where:
* \(n\) is a node of \(G\) and \(q\) a state of \(\mathcal{A}\);
* \(\mathit{depth}\) is the length of (any) shortest path reaching \(n\) from \(v\); and
* \(\mathit{prevList}\) is a _list_ of pointers to any previous search state that allows us to reach \(n\) via a shortest path.
We assume that \(\mathit{prevList}\) is a _linked list_, initialized as an empty node, and accepting sequential insertions of pairs \((\mathit{searchState},\mathit{edge})\) through the add() method. Intuitively, \(\mathit{prevList}\) will allow us to reconstruct _all_ the shortest paths reaching a node, since there can be multiple ones. When adding a pair \((\mathit{searchState},\mathit{edge})\), we assume \(\mathit{searchState}\) to be a pointer to a previous search state, and \(\mathit{edge}\) will
be used to reconstruct the path passing through the node in this previous search state. Finally, we again assume that Visited is a dictionary of search states, with the pair \((n,q)\) being the search key. Namely, there can be at most one tuple \((n,q,depth,prevList)\) in Visited with the same pair \((n,q)\). We assume that with \(\text{Visited}.\text{get}(n,q)\), we will obtain the unique search state having \(n\) and \(q\) as the first two elements, or a null pointer if no such search state exists.

Figure 2. Visited after running Algorithm 1 in Example 3.1.
Algorithm 2 explores the product graph obtained from \(G\) and \(\mathcal{A}\) using BFS, since it needs to find shortest paths. Therefore, we assume Open to be a queue. The execution is very similar to Algorithm 1, with a few key differences. First, if a node \((n^{\prime},q^{\prime})\) of the product graph \(G_{\times}\) has already been visited (line 15), we do not directly discard the new path, but instead choose to keep it if and only if it is also shortest (line 17). In this case, the \(prevList\) for \(n^{\prime}\) is extended by adding the new path (line 18). If a new pair \((n^{\prime},q^{\prime})\) is discovered for the first time, a fresh \(prevList\) is created (lines 19-24). The second difference to Algorithm 1 lies in the fact that we now return solutions only after a state has been removed from Open (lines 10-11). Basically, when a state is popped from the queue, the structure of the BFS algorithm assures that we already explored all shortest paths to this state. We enumerate all the shortest paths reaching this node using getAllPaths, which applies backtracking to be able to compute all _walks_ from the starting node to the current node of interest. The following example illustrates what happens when there are multiple shortest paths between two nodes.
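A condensed Python rendering of this idea is given below, using the same simplified adjacency and transition dictionaries as in the earlier sketch and assuming, as in the text, an unambiguous automaton with a single final state \(q_{F}\); it is an illustrative sketch rather than the actual implementation.

```
from collections import deque

def all_shortest_walks(nodes, adj, delta, q0, qF, v):
    if v not in nodes:
        return []
    visited = {(v, q0): (0, None)}               # (n, q) -> (depth, prevList); None = initial
    open_ = deque([(v, q0)])
    solutions = []
    while open_:
        n, q = open_.popleft()
        depth, _ = visited[(n, q)]
        if q == qF:                              # all shortest predecessors are known by now
            solutions.extend(get_all_paths(visited, (n, q)))
        for (s, a), succs in delta.items():
            if s != q:
                continue
            for (n2, eid) in adj.get((n, a), []):
                for q2 in succs:
                    if (n2, q2) in visited:
                        d2, plist = visited[(n2, q2)]
                        if depth + 1 == d2:      # another shortest path into (n2, q2)
                            plist.append(((n, q), eid))
                    else:
                        visited[(n2, q2)] = (depth + 1, [((n, q), eid)])
                        open_.append((n2, q2))
    return solutions

def get_all_paths(visited, key):
    """Backtrack over every recorded predecessor to enumerate all shortest walks."""
    _, plist = visited[key]
    if plist is None:                            # the initial search state
        return [[key[0]]]
    paths = []
    for prev_key, eid in plist:
        for p in get_all_paths(visited, prev_key):
            paths.append(p + [(eid, key[0])])
    return paths
```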
**Example 3.3**.: Consider again the graph \(G\) in Figure 1, and let
\[q=\textsc{ALL SHORTEST WALK}\ (\mathsf{Joe},\mathsf{knows}^{*}\cdot\mathsf{works},\mathsf{?x}).\]
In the Introduction, we showed there are three paths in \(\llbracket q\rrbracket_{G}\), all three linking Joe to ENS Paris. The three paths are traced by the edges \(\textsc{e3}\to\textsc{e5}\to\textsc{e11}\), \(\textsc{e4}\to\textsc{e7}\to\textsc{e10}\), and \(\textsc{e3}\to\textsc{e6}\to\textsc{e10}\), respectively. To compute \(\llbracket q\rrbracket_{G}\), Algorithm 2 converts the regular expression knows\({}^{*}\cdot\textsc{works}\) into the following automaton \(\mathcal{A}\):
(The automaton \(\mathcal{A}\) for \(\mathsf{knows}^{*}\cdot\mathsf{works}\) is not shown here.)

## 4. The TRAIL, SIMPLE and ACYCLIC Semantics
### Returning all paths
We start by dealing with queries of the form:
\[q=(\text{ALL SHORTEST})?\ \text{restrictor}\ (v,\text{regex},?x)\]
where restrictor is TRAIL, SIMPLE, or ACYCLIC. In Algorithm 3 we present a solution for computing the answer of \(q\) over a graph database \(G\). Intuitively, when run over \(G\), Algorithm 3 will construct the automaton \(\mathcal{A}\) for \(\mathsf{regex}\) with the initial state \(q_{0}\). It will then start enumerating all paths in the product graph of \(G\) and \(\mathcal{A}\) starting in \((v,q_{0})\), and discarding ones that do not satisfy the restrictor of \(q\). To ensure correctness, we will need the automaton to be unambiguous, and will keep the following auxiliary structures:
* _search state_, which is a tuple \((n,q,depth,e,prev)\), where \(n\) is a node, \(q\) an automaton state, \(depth\) the length of the path used to reach \(n\) in \(G\), \(e\) the edge used to reach the node \(n\), and \(prev\) a pointer to another search state stored in Visited.
* Visited is a set storing already explored search states.
* ReachedFinal is a _dictionary_ of pairs \((n,depth)\), where \(n\) is a node reached in some query answer, and \(depth\) the length of the shortest paths to this node. Here \(n\) is the dictionary key, so ReachedFinal.get\((n)\) returns the pair \((n,depth)\).
Algorithm 3 explores the product graph by enumerating all paths starting in \((v,q_{0})\). For this, we can use a breadth-first (Open is a queue), or a depth-first (Open is a stack) strategy. For the ALL SHORTEST selector, a queue needs to be used. The execution is very similar to Algorithm 1, but Visited is not used to discard solutions. Instead, every time a state from Open is expanded (lines 13-15), we check if the resulting path satisfies the restrictor of \(q\) via the isValid function. The function traverses the path defined by the search states stored in Visited recursively. Here we are checking whether the path in the _original graph_ \(G\) satisfies the restrictor, and not the path in the product graph. If the explored neighbor allows us to extend the current path according to the restrictor, we add the new search state to Visited, and Open (lines 17-18). If we reach a final state of the automaton (line 19), we retrieve the explored path using the getPath function, which is identical to the one from Algorithm 1, but now uses extended search states. Since the \(prev\) pointer in each search state is unique, this is always a single path. If the ALL SHORTEST selector is _not_ present, we simply add the newly found solution (lines 21-22). In the presence of the selector, we need to make sure to add only _shortest_ paths to the solution set. The dictionary ReachedFinal is used to track discovered nodes, and stores the length of the shortest path to this node. If the node is seen for the first time, the dictionary is updated, and a new solution added (lines 23-25). Upon discovering the same node again (lines 26-28), a new solution is added only if it is shortest.
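The restrictor check itself is easy to state. The sketch below shows an isValid test over the node and edge sequences of a candidate path in the original graph \(G\) (Python, illustrative only). It assumes the usual GQL-style definitions: TRAIL forbids repeated edges, ACYCLIC forbids repeated nodes, and SIMPLE forbids repeated nodes except that the first and last node may coincide. In the actual algorithm the same test can be maintained incrementally while the path is being extended.

```
def is_valid(node_seq, edge_seq, restrictor):
    """node_seq and edge_seq describe a path in G: n0, e1, n1, ..., ek, nk."""
    if restrictor == "WALK":
        return True
    if restrictor == "TRAIL":                    # no edge repeated
        return len(edge_seq) == len(set(edge_seq))
    if restrictor == "ACYCLIC":                  # no node repeated
        return len(node_seq) == len(set(node_seq))
    if restrictor == "SIMPLE":                   # only the endpoints may coincide
        seq = node_seq
        if len(node_seq) > 1 and node_seq[0] == node_seq[-1]:
            seq = node_seq[:-1]
        return len(seq) == len(set(seq))
    raise ValueError(f"unknown restrictor: {restrictor}")
```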
Example 4.1 ().: Consider again the graph \(G\) in Figure 1, and
\[q=\text{SIMPLE}\ (\text{John},\text{knows}^{+}\cdot\text{lives},?x).\]
Namely, we wish to use the same regular pattern as in Example 3.1, but now allowing only simple paths. The same automaton as in Example 3.1 is used in Algorithm 3. The algorithm will start by visiting \((\text{John},q_{0})\), followed by \((\text{Joe},q_{1})\). After this we will visit \((\text{John},q_{1})\), \((\text{Paul},q_{1})\) and \((\text{Lily},q_{1})\). In the next step we will try to expand \((\text{John},q_{1})\), but will detect that this leads to a path which is not simple (we could have detected this in the previous step, but this way the pseudo-code is more compact). We will continue exploring neighbors, building the Visited structure depicted in Figure 4. In Figure 4 we use the same notation for \(prev\) pointers as in previous examples. For brevity, we do not show \(depth\), but this is simply the length of the path needed to reach each state from \((\text{John},q_{0})\) in Figure 4. We remark that the node \((\text{Jane},q_{1})\) appears twice since it will be present in the search state \((\text{Jane},q_{1},3,\text{e}6,prev)\) and in \((\text{Jane},q_{1},3,\text{e}7,prev^{\prime})\).
The correctness of the algorithm crucially depends on the fact that \(\mathcal{A}\) is unambiguous, since otherwise we could record the same solution twice. Termination is assured by the fact that eventually all paths that are valid according to the restrictor will be explored, and no new search states will be added to Open. Unfortunately, since we will potentially enumerate all the paths in the product graph, the complexity is exponential. More precisely, we obtain the following:
Theorem 4.2 ().: _Let \(G\) be a graph database, and \(q\) the query:_
(ALL SHORTEST)? restrictor \((v,\text{\emph{regex}},?x)\)_,_
_where restrictor is TRAIL, SIMPLE, or ACYCLIC. If the automaton \(\mathcal{A}\) for \(\text{\emph{regex}}\) is unambiguous, then Algorithm 3 correctly computes \(\llbracket q\rrbracket_{G}\) in time \(O\big{(}(|\mathcal{A}|\cdot|G|)^{|G|}+|\llbracket q\rrbracket_{G}|\big{)}\)._
### ANY and ANY SHORTEST
To treat queries of the form:
\[q=\text{\emph{ANY}}\ (\text{\emph{SHORTEST}})?\ \text{\emph{restrictor}}\ (v,\text{\emph{regex}},?x)\]
where restrictor is TRAIL, SIMPLE, or ACYCLIC, minimal changes to Algorithm 3 are required. Namely, for this case we would assume that ReachedFinal is a _set_ instead of a dictionary, and that it only stores nodes \(n\) reachable from \(v\) by a path conforming to restrictor, and whose label belongs to \(\mathcal{L}(\text{\emph{regex}})\). We would then replace lines 21-29 of Algorithm 3 with the following:
```
if n′ ∉ ReachedFinal then
    ReachedFinal.add(n′)
    Solutions.add(path)
```
Basically, when \(n^{\prime}\) is discovered as a solution for the first time, we return a path associated with it, and never return a path reaching \(n^{\prime}\) as a solution again. To make ANY SHORTEST work correctly, we need to use BFS (i.e. Open is a queue), while for ANY we can use either BFS or DFS. Unfortunately, due to the aforementioned results of (Bartos et al., 2017) stating that checking the existence of a simple path between a fixed pair of nodes is NP-complete, we cannot simplify the brute-force search of Algorithm 3. One interesting feature is that, due to the fact that ReachedFinal is a set, and therefore for each solution node a single path is returned, we no longer need the requirement that \(\mathcal{A}\) be unambiguous. That is, we obtain:
Theorem 4.3 ().: _Let \(G\) be a graph database, and \(q\) the query:_
ANY (SHORTEST)? restrictor \((v,\text{\emph{regex}},?x)\)_,_
_where restrictor is TRAIL, SIMPLE, or ACYCLIC. Then we can compute \(\llbracket q\rrbracket_{G}\) in time \(O\big{(}(|\mathcal{A}|\cdot|G|)^{|G|}+|\llbracket q\rrbracket_{G}|\big{)}\)._
## 5. Implementation Details
We implemented the algorithms of Section 3 and Section 4 inside of MillenniumDB (Miller et al., 2017), a recent open-source graph database engine developed by the authors. MillenniumDB provides the infrastructure necessary to process generic queries such as RPQs, and takes care of parsing, generation of execution plans, and data storage. The source code of our implementation can be obtained at (Kapelfon et al., 2018).
**Data access.** By default, MillenniumDB stores graph data on disk using B+trees, and loads the necessary pages into a main memory buffer during query execution. MillenniumDB indexes several relations, however, our algorithms only use the following two:
```
Edges(NodeFrom, Label, NodeTo, EdgeId) Nodes(NodeId)
```
The first relation allows us to find neighbors of a certain node reachable by an edge with a specific label, as used, for instance, in Algorithm 1 (line 14). The second table keeps track of nodes that are materialized in the database. In essence, the table Nodes is only used when checking whether the start node of an RPQ actually exists in the database (see e.g. Algorithm 1, line 9). All B+trees in MillenniumDB are accessed using the standard linear iterator interface used in relational databases (Kapelfon et al., 2018). Additionally, all of the proposed algorithms can be easily extended with the ability to traverse edges backwards, as required, for instance, by C2RPQs (Carpa et al., 2018) and SPARQL property paths (Kapelfon et al., 2018). To support this, MillenniumDB also includes the inverse Edges relation, with the following schema:
```
Edges(NodeTo, Label, NodeFrom, EdgeId)
```
We can then easily extend all of the described algorithms to allow RPQs with backward edges, and use the EDGES\({}^{-}\) table whenever looking for a neighbor accessed via a backward edge.
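To illustrate how these two index orders serve the algorithms, the snippet below emulates the B+tree range scans with sorted Python tuples and binary search; the concrete edges are the ones mentioned in Example 3.1 (e1, e2, e8), and this is of course only an analogy for the on-disk structures.

```
import bisect

edges = [("John", "knows", "Joe", "e1"), ("Joe", "knows", "John", "e2"),
         ("John", "lives", "Rome", "e8")]
fwd = sorted(edges)                                    # (NodeFrom, Label, NodeTo, EdgeId)
bwd = sorted((t, l, f, e) for (f, l, t, e) in edges)   # (NodeTo, Label, NodeFrom, EdgeId)

def neighbors(index, node, label):
    """All (other_endpoint, edge_id) pairs for `node` and `label` via a prefix range scan."""
    lo = bisect.bisect_left(index, (node, label))
    out = []
    for row in index[lo:]:
        if row[0] != node or row[1] != label:
            break
        out.append((row[2], row[3]))
    return out

# Forward edges:  neighbors(fwd, "John", "knows") -> [("Joe", "e1")]
# Backward edges: neighbors(bwd, "John", "knows") -> [("Joe", "e2")]
```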
**Compressed Sparse Row.** In addition to B+trees, we also support data access via the _Compressed Sparse Row (CSR)_ representation (Carpa et al., 2018), which is a popular in-memory index for fast access to edge data (Kapelfon et al., 2018). Basically, a CSR allows us to compress the standard matrix representation of a graph, which is of size \(n\times n\) for a graph of \(n\) vertices, to roughly size \(|E|\), where \(E\) is the set of edges, while still allowing fast access to each node's neighbors. For us, a CSR consists of three arrays:
* src, an array of source nodes, ordered by their identifier;
* index, an array of integers, where the number in position \(i\) tells us where the neighbors of the \(i\)th source node start in the array tgt; and
* tgt an array of target nodes.
Given that in regular path queries we only need to access neighbors reachable by a specific label, we support creating the CSR of a subgraph consisting of edges with this particular label, which can be much smaller than the CSR of the entire graph. As an illustration, Figure 5 shows a graph database (left), and its CSR for the edges labeled with a (right). Notice that the node n3 is never mentioned in the CSR since it has no \(\mathtt{a}\)-labeled edges associated with it.
Basic intuition in this example is that, for instance, n4 is the second source node for a-labeled edges. Therefore in the second position of the index array, we store the index of the array tgt where we start listing the neighbors of n4 that can be reached by an \(\mathtt{a}\)-labeled edge. In our example, these are n1 and n2.
Figure 4. Visited after running Algorithm 3 in Example 4.1.
In our implementation, we will use CSR as an in-memory index for fast query evaluation, and will also allow to construct it on-the-fly as needed by the query. We remark that CSR is always created from the B+tree data residing on disk, and supports both backward and forward looking edges (each with their separate CSR).
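A minimal Python version of this per-label CSR is sketched below; the array names follow the description above (src, index, tgt), with one extra sentinel entry appended to index so that a node's adjacency list can be read off as a slice. The tiny example graph only reuses the facts stated for Figure 5 (n4 has a-labeled edges to n1 and n2, n3 has none); the remaining edge is made up for illustration.

```
def build_csr(edges, label):
    """edges: iterable of (src, lab, tgt); returns (src, index, tgt) for one label."""
    by_src = {}
    for s, lab, t in edges:
        if lab == label:
            by_src.setdefault(s, []).append(t)
    src = sorted(by_src)
    index, tgt = [], []
    for s in src:
        index.append(len(tgt))      # where this node's neighbors start in tgt
        tgt.extend(by_src[s])
    index.append(len(tgt))          # sentinel: end of the last adjacency list
    return src, index, tgt

def csr_neighbors(csr, node):
    src, index, tgt = csr
    try:
        i = src.index(node)         # a real implementation would use binary search
    except ValueError:
        return []
    return tgt[index[i]:index[i + 1]]

csr_a = build_csr([("n1", "a", "n2"), ("n4", "a", "n1"), ("n4", "a", "n2")], "a")
# csr_neighbors(csr_a, "n4") -> ["n1", "n2"]
```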
**Pipelined execution.** In MillenniumDB all the algorithms are implemented in a pipelined fashion using linear iterators. In particular, this means that the solution is returned to the user as soon as it is encountered. This requires, for instance, that the main while loop of Algorithm 1 be halted as soon as a new solution is detected in line 19, and similarly for Algorithm 3. In the case of Algorithm 2 the situation is a bit more complex, as solutions can be detected while scanning the neighbors (lines 14-24), instead of upon popping an element from the queue (lines 11-13). All of these issues can be resolved by noting that linear iterators for neighbors of a node can be paused, and their execution resumed. For completeness, we include an expanded pseudo-code of our algorithms written in a pipelined fashion, in our online appendix (Han et al., 2016). The main benefit of the pipelined execution is that paths can be encountered on demand, and the entire solution set need not be constructed in advance.
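In Python, this pipelined style corresponds directly to a generator that yields each (node, path) pair the moment it is discovered, so a caller can stop after the first 100,000 results without the full answer set ever being materialized; the sketch below is only an analogy for the C++ iterator interface, not the engine's code, and for brevity it stores the whole path in each search state instead of using prev pointers.

```
import itertools
from collections import deque

def any_walk_pipelined(adj, delta, q0, finals, v):
    visited = {(v, q0)}
    open_ = deque([(v, q0, [v])])
    reached = set()
    if q0 in finals:
        reached.add(v)
        yield v, [v]
    while open_:
        n, q, path = open_.popleft()
        for (s, a), succs in delta.items():
            if s != q:
                continue
            for (n2, eid) in adj.get((n, a), []):
                for q2 in succs:
                    if (n2, q2) not in visited:
                        visited.add((n2, q2))
                        new_path = path + [(eid, n2)]
                        open_.append((n2, q2, new_path))
                        if q2 in finals and n2 not in reached:
                            reached.add(n2)
                            yield n2, new_path   # solution streamed to the caller

# Take only the first k answers lazily, e.g.:
# first_k = list(itertools.islice(any_walk_pipelined(adj, delta, "q0", {"q2"}, "John"), 100_000))
```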
**Query variables.** Throughout the paper we assumed that the RPQ part of our query takes the form \((v,\mathsf{regex},?x)\), where \(v\) is a node identifier, and \(?x\) is a variable. Namely, we were searching for all nodes reachable from a fixed point \(v\). It is straightforward to extend our algorithms to patterns of the form \((v,\mathsf{regex},\sigma^{\prime})\), where both endpoints of the path are known. That is, we can run any of the algorithms as described in the paper, and check whether \(v^{\prime}\) is a query answer.
## 6. Experimental Evaluation
Here we empirically evaluate our hypothesis that classical search algorithms over graphs provide a good basis for returning paths that belong to an RPQ query answer. We start by describing the experimental setup, and then discuss the obtained results. The source code of our implementation, together with the information needed to replicate the experiments, can be downloaded at (Han et al., 2016).
### Experimental setup
**Datasets and queries.** For our experiments we use MillenniumDB Path Query Challenge (Han et al., 2016), a benchmarking suite for path queries recently developed by the authors of this paper. The purpose of this query challenge is to test the performance of algorithms returning paths that witness RPQ query answers under different semantics, as described in Section 2. As such, (Han et al., 2016) uses two data/query set combinations called Real-world and Synthetic. The challenge is based on a more extensive SPARQL benchmark called WDBench (Brands et al., 2016). Next we describe the two test beds:
**Real-world.** The idea of this test is to gauge the potential of the described algorithms in a real-world setting. In terms of data, a curated version of the Wikidata (Wikidata, 2017) dataset is used. More precisely, we use the truthy dump of Wikidata, and keep only direct properties from this RDF dataset. This allows us to have the graph structure in the dataset, which is the only thing explored by the RPQs. Transforming this dataset into a graph database results in a graph with 364 million nodes and 1.257 billion edges. The dataset can be downloaded at (Brands et al., 2016). As for queries, these are extracted from the real-world queries posted by Wikidata users, as found in Wikidata's query logs (Wikidata, 2017). From these, a total of 659 RPQ patterns were selected, of which 592 have at least one endpoint fixed, and are thus supported by our algorithms. These 592 are used in our tests under the restrictor and selector options described in Section 2.
**Synthetic.** Here we check scalability by exhibiting a dataset with a large number of paths between two fixed nodes. The database used here, taken from (Zhou et al., 2017), is presented in Figure 6. The single RPQ we use has the pattern (start, a\({}^{*}\), end) and is combined with different selectors and restrictors. Notice that in our graph, all paths from start to end match a\({}^{*}\), and are, at the same time, shortest, trails and simple paths. Furthermore, there are \(2^{n}\) of such paths, while the graph has only \(3n+1\) nodes and \(4n\) edges. While returning all these paths is unfeasible for any algorithm, returning some portion of them (100,000 in our experiments) can be done efficiently by our algorithms. This test will also allow us to detect difficult cases for BFS vs DFS implementations we described thus far.
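For reference, a graph with exactly these counts (3n+1 nodes, 4n edges, and 2^n equally long a-labeled paths from start to end) can be generated as a chain of n "diamonds"; the sketch below assumes this is the shape of the graph in Figure 6, which is not reproduced here in full.

```
def diamond_chain(n):
    """Chain of n diamonds: 3n+1 nodes, 4n edges, 2**n paths from start to end."""
    def node(i):
        return "start" if i == 0 else ("end" if i == n else f"j{i}")
    edges, eid = [], 0
    for i in range(n):
        u, w = node(i), node(i + 1)
        for mid in (f"u{i}", f"d{i}"):           # the two parallel middle nodes of diamond i
            edges.append((f"e{eid}", u, "a", mid))
            eid += 1
            edges.append((f"e{eid}", mid, "a", w))
            eid += 1
    return edges

# diamond_chain(2): 7 nodes, 8 edges, and 4 distinct start-to-end paths, all of length 4.
```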
**Tested systems.** As described in Section 5, our implementation is tested with three different memory management strategies:
* B+tree version, which is the default (data on disk);
* CSR-c, which creates a CSR for each label as needed by the query, but caches the already created CSRs; and
* CSR-f version, which assumes CSRs for all labels to be available in main memory.
Additionally, we will use both a BFS traversal strategy and a DFS traversal strategy (when available) to run our algorithms.
In order to compare with state of the art in the area, we selected four publicly available graph database systems that allow for benchmarking with no legal restrictions. These are:
* Neo4J version 4.4.12 (Neo4J for short);
* Jena TDB version 4.1.0 (Jena);
* Blazegraph version 2.1.6 (Blazegraph); and
* Virtuoso version 7.2.6 (Virtuoso).
From the aforementioned systems, Neo4J uses the ALL TRAIL semantics by default, and also supports ANY SHORTEST WALK and ALL SHORTEST WALK. We therefore compare with Neo4J for these query modes. On the other hand, while Jena, BlazeGraph, and Virtuoso do not return paths, according to the SPARQL standard (Kumar et al., 2017), they should be detecting the presence of an arbitrary walk, so we incorporate them in our comparison when testing the ANY WALK semantics. Other systems we considered are DuckDB (Kumar et al., 2017), Oracle Graph Database (Kumar et al., 2017) and Tiger Graph (Kumar et al., 2017), which support (parts of) SQL/PGQ. Unfortunately, (Kumar et al., 2017) and (Kumar et al., 2017) are commercial systems with limited free versions, while the SQL/PGQ module for DuckDB (Kumar et al., 2017) is still in development.

Figure 5. A graph database and a CSR for the label a.

Figure 6. Graph database with exponentially many paths.
**How we ran the experiments.** All experiments were run on a commodity server with an Intel(r)Xeon(r)Silver 4110 CPU, and 128GB of DDR4/2666MHz RAM, running Linux Debian 10 with the kernel version 5.10. The hard disk used to store the data was a SEAGATE model ST14000NM001G with 14TB of storage. Note that this is a classical HDD, and not an SSD. Neo4J was used with default settings and no limit on RAM usage. Jena and BlazeGraph were assigned 64GB of RAM, and Virtuoso was set up with 64GB or more as recommended in the user instructions. MillenniumDB was allowed a 32GB buffer for handling the queries. Since we run large batches of queries (100+), these are executed in succession, in order to simulate a realistic load to a database system. When it comes to MillenniumDB, we run tests both with data residing on disk and being buffered into main memory, and with data being pre-loaded into CSRs. As recommended by WDBench (Bondond et al., 2016) and MillenniumDB Path Query Challenge (Kumar et al., 2017), _all queries were run with a limit of 100,000 results and a timeout of 1 minute._
### Real-world data
Due to the large number of different versions of the algorithms we presented, coupled with the different storage models used, we group our results by the path modes tested in each experiment.
**ANY (Shortest)? WALK mode.** Here we are testing the performance of Algorithm 1 from Subsection 3.1. When the BFS strategy is used, we obtain a shortest path for each reachable node, while DFS returns an arbitrary path. Here we compare with SPARQL systems, since they are required to find a single path for each reachable node. We discuss the results along the following dimensions:
* Runtime. The results are presented in Figure 7 (left). As we see, the six implementations of Algorithm 1 significantly outperform SPARQL systems. For different implementations of Algorithm 1, we can see that the B+tree versions are the slowest since they have to load all the data from disk to main memory, while having CSRs in main memory gives a significant boost to performance. Building CSRs on demand is in-between the two as expected. Comparing BFS vs. DFS based algorithms, we can observe that BFS-based ones are marginally faster.
* Memory. The total size of all CSRs was 4GB4. For B+tree versions, initializing MillenniumDB already uses around 9GB of buffer, while usage for the query runs topped at 11.8GB (BFS) and 11GB (DFS); meaning that only 2.8GB (BFS) and 2GB (DFS) were used for execution, showing that B+tree versions can take advantage of data locality in RPQs. Footnote 4: Constructing all the CSRs from B+tree storage takes around 52 seconds. In cached CSR versions this time is included in the query execution.
* Timeouts. Implementations of Algorithm 1 had few timeouts. More precisely, the B+tree versions had one (BFS) and two (DFS) timeouts, while both CSR versions of DFS had one timeout. As far as SPARQL systems are concerned, Jena had 96 timeouts, BlazeGraph 87, and Virtuoso 24. Neo4J, using the ANY SHORTEST selector, timed out on all but two queries, so we leave it out of the comparison.
Overall, we can see that Algorithm 1 shows stable performance when a single path needs to be returned, both when coupled with standard B+tree storage, or in-memory indices such as CSRs.
**ALL SHORTEST WALK mode.** Next, we discuss the performance of Algorithm 2 from Subsection 3.2 as follows:
* Runtime. Runtimes are illustrated in Figure 7 (right). As we can see, the performance is similar as when returning a single path, as Theorem 3.4 would suggest.
* Memory. CSRs again use 4GB, while the B+tree version topped at 11.5GB, similar to the single path version.
* Timeouts. All versions had a single timeout on the same query. Neo4J also supports this path mode, but it timed out on all but two queries, so we exclude it from Figure 7.
Recall that for Algorithm 2 only BFS can be used. Interestingly, our results suggest that there is no significant difference between returning a single shortest path, or all shortest paths. We stress here that the limit of 100,000 applies to the returned paths (not the end nodes). Namely, if there are 100,000 shortest paths between two nodes, this will count as 100,000 results, while the ANY WALK mode would find 100,000 paths between different pairs of nodes.
**ANY TRAIL and ANY SIMPLE modes.** Next, we test the performance of returning a single trail or a single simple path. Here we use the variant of Algorithm 3 from Subsection 4.2. Since ACYCLIC evaluation is virtually identical to SIMPLE, we do not include it in our experiments. The results are as follows:
Figure 7. Runtime for the WALK semantics with a limit of 100,000 results and a timeout of 60 seconds.
* Runtime. The runtimes for ANY TRAIL are reported in Figure 8 (left) and for ANY SIMPLE in Figure 9 (left). When BFS is used we get ANY SHORTEST. Contrary to what Theorem 4.3 would suggest, we see that the runtimes are only marginally slower than for the WALK restrictor. The main reason is that we are looking for the first 100,000 results, so the algorithm will not need to explore a huge set of paths, especially if the out-degree of explored nodes is not large.
* Memory. CSRs again use 4GB, while the B+tree BFS version topped at 13.9GB, and the DFS version at 11.4GB, only slightly more than for the WALK restrictor.
* Timeouts. All BFS-based versions timed out on 14 experiments, while the DFS ones did so on 13 of them.
Interestingly, while the theoretical literature classifies these evaluation modes as intractable (Beng et al., 2015), and algorithms proposed in Section 4 take a brute-force approach to solving them, over real-world data they do not seem to fare significantly worse than algorithms for the WALK restrictor. As already stated, this is most likely due to the fact that they can either detect 100,000 results rather fast, and abandon the exhaustive search, or because the data itself permits no further graph exploration.
**Retrieving all TRAILS and SIMPLE paths.** Unlike for the WALK restrictor, for TRAIL and SIMPLE, the number of paths is finite, so these restrictors can appear without an additional selector. Here we are testing the performance of Algorithm 3 from Subsection 4.1. The results are as follows:
* Runtime. Times for retrieving all trails are reported in Figure 8 (middle), and for all simple paths in Figure 9 (middle). The runtimes are again similar as for ANY WALK, so finding the first 100,000 results seems not to be an issue. Comparing to Neo4j, in the TRAIL case (Figure 8 (middle)), we can see a significant improvement when Algorithm 3 is used.
* Memory. Memory usage for CSRs is as before, while for B+trees it topped at 11.6GB for BFS and 11.5GB for DFS, meaning 2.6GB (BFS) and 2.5GB (DFS) effective usage.
* Timeouts. All BFS versions timed out twice, and all DFS versions 3 times. Neo4j timed out in 99 cases.
The conclusions here are quite similar as for ANY TRAIL and ANY SIMPLE: despite the theoretical limitations of these algorithms, they seem to work sufficiently well for real-world use cases. Finally, the results for ALL SHORTEST TRAIL (Figure 8 (right)) and ALL SHORTEST SIMPLE (Figure 9 (right)), are analogous. We remark that unlike for e.g. ANY TRAIL, the BFS version for ALL TRAIL is not the same as ALL SHORTEST TRAIL, and similarly for SIMPLE, as discussed in Section 4.
### Synthetic data
Here we test the performance of the query looking for paths between the node start and the node end in the graph of Figure 6. We scale the size of the database by setting \(n=10,20,30,\ldots,100\). This allows us to test how the algorithms perform when the number of paths is large, i.e. \(2^{n}\). For each value of \(n\) we will look for the first 100,000 results. Since all returned paths are shortest, trails, simple paths, and acyclic paths, for brevity we report only on performance considering the WALK restrictor and the TRAIL restrictor. This also gives us the opportunity to compare with Neo4j.
**SHORTEST WALKS.** The runtimes for the ANY SHORTEST WALK and ALL SHORTEST WALK modes are presented in Figure 10(a). Due to the small size of the graph, we only test the B+tree version of our implementation, and perform a hot run; meaning that we run the query twice and report the second result. This is due to minuscule runtimes which get heavily affected by initial data loading. As we can observe, for ANY SHORTEST WALK (Figure 10(a) (left)) both engines perform well (we also pushed this experiment to \(n=1000\) with no issues). Memory usage is negligible in this experiment, and there are no timeouts. In the case of ALL SHORTEST WALK (Figure 10(a) (right)), Neo4j timed out on all but the smallest graph, while our implementation of Algorithm 2 shows stable performance when returning 100,000 paths. We tried scaling to \(n=1000\) and the results were quite similar, with almost no memory being used.
Figure 8. Runtime for the TRAIL semantics with a limit of 100,000 results and a timeout of 60 seconds.
Figure 9. Runtime for the SIMPLE semantics with a limit of 100,000 results and a timeout of 60 seconds.
**TRAILS.** This experiment allows us to determine which traversal strategy (BFS or DFS) is better suited for Algorithm 3 in extreme cases such as the graph of Figure 6. We present the results for ANY TRAIL and TRAIL modes in Figure 10(b). Considering first the ANY TRAIL case, the BFS-based algorithm will time out already for \(n=30\), which is to be expected, since it will construct all paths of length \(1,2,\ldots,29\), before considering the first path of length \(30\). In contrast, DFS will find the required paths rather fast. Memory usage also reflects this, since the BFS-based algorithm uses \(4.7\)GB of RAM when timing out, while DFS uses \(0.4\)GB in all cases. When it comes to the TRAIL mode, which retrieves _all_ trails, the situation is rather similar (here we also compare with Neo4J since it finds all trails). This shows us that for long paths DFS is the strategy of choice, since the BFS-based traversal needs to consider a huge number of paths before encountering the first query result. Memory usage here mimics the ANY TRAIL case. Overall, this experiment allows us to pinpoint problems that BFS-based approaches might have in Algorithm 3, particularly when there is a large number of paths that merge, and when the paths are of large length. Interestingly, this behavior never showed up in the Real-world dataset.
## 7. Related Work
Our work is a continuation of (Krishnan et al., 2017), where we studied an abstract approach to RPQ query answering and _representing_ paths in query answers. In the present paper we describe concrete algorithms for the RPQ evaluation modes considered in (Krishnan et al., 2017) (and multiple new ones), discuss how these can be implemented, and test their performance when integrated in different graph database pipelines. In terms of related work, we identify two main lines: the first one on evaluating RPQs, and the second one on reachability indices.
**RPQ evaluation.** Some of the most representative works in this area are (Bahdan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017), where the focus is on finding nodes reachable by an RPQ-conforming path, and not on returning paths. Most of these works propose methods similar to Algorithm 1. In (Krishnan et al., 2017) the authors focus on retrieving paths, but not conforming to RPQ queries. An approach for finding top-\(k\) shortest paths in RPQ answers is explored in (Krishnan et al., 2017), using BFS-style search. Here the first \(k\) paths discovered by BFS are retrieved, so some non-shortest paths might be returned, unlike in the GQL and SQL/PGQ semantics we use in Algorithm 2. Interesting optimizations for the base BFS-style algorithm are presented in (Krishnan et al., 2017; Krishnan et al., 2017), where vectorized execution of BFS with multiple starting points is explored. Since our focus is the performance of classical graph algorithms, we leave incorporating these optimizations into our workflow for future work. Regarding the simple path semantics, (Krishnan et al., 2017) proposes a sampling-based algorithm which is an efficient approximate answering technique, but will not necessarily find _all_ answers. There is also a rich body of theoretical work on RPQ evaluation (see (Bahdan et al., 2017) for an overview).
**Reachability indices.** There is extensive work on speeding-up graph traversal algorithms via reachability indexes (Krishnan et al., 2017; Krishnan et al., 2017). Some of the more famous ones are (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017), with (Krishnan et al., 2017) being incorporated in (Krishnan et al., 2017) to find nodes reachable by paths conforming to an RPQ. In terms of indices developed for regular path queries, apart from some early theoretical work (Krishnan et al., 2017), there is also a recent proposal (Krishnan et al., 2017), that allows quick detection of RPQ-conforming paths up to a fixed length. All of these works are somewhat orthogonal to our approach, since our main aim is to test algorithms for retrieving paths witnessing RPQ answers. Incorporating one or several of these indices into our algorithms is a promising direction of future work.
**Our work.** In summary, we believe our work to be the first to describe evaluation algorithms for regular path queries under _all_ evaluation modes supported by the upcoming GQL and SQL/PGQ standards (Krishnan et al., 2017), and study how classical graph algorithms can be incorporated into a graph database pipeline to evaluate such queries.
## 8. Conclusions and Looking Ahead
In this paper we tested the hypothesis claiming that classical graph search algorithms can be adapted to find paths that are returned by RPQs under GQL and SQL/PGQ evaluation modes. For this, we used the standard BFS and DFS algorithms and adapted them to the setting of RPQ query answering. We also showed how such algorithms can be implemented, and performed an extensive experimental evaluation of their performance. We believe that the presented results confirm our initial hypothesis, and show that classical algorithms, at least in the case of queries that have a fixed starting point, are a valuable tool for evaluating RPQs. In future work we would like to extend the presented algorithms for the case when the starting point in the path is unknown, cover the GQL top-\(k\) semantics, and see how optimization techniques such as reachability indices (Krishnan et al., 2017; Krishnan et al., 2017), or vectorized and parallel execution of BFS and DFS (Krishnan et al., 2017; Krishnan et al., 2017) can speed up the algorithms we proposed.
###### Acknowledgements.
Work supported by ANID - Millennium Science Initiative Program - Code ICN17_002 and ANID Fondecyt Regular project 1221799.
Figure 10. Runtime on the graph database of Figure 6. |
2302.08628 | New measurement of the diffusion of carbon dioxide on non-porous
amorphous solid water | The diffusion of molecules on interstellar grain surfaces is one of the most
important driving forces for the molecular complexity in the interstellar
medium. Due to the lack of laboratory measurements, astrochemical modeling of
grain surface processes usually assumes a constant ratio between the diffusion
energy barrier and the desorption energy. This over-simplification inevitably
causes large uncertainty in model predictions. We present a new measurement of
the diffusion of CO$_2$ molecules on the surface of non-porous amorphous solid
water (np-ASW), an analog of the ice mantle that covers cosmic dust grains. A
small coverage of CO$_2$ was deposited onto an np-ASW surface at 40~K, the
subsequent warming of the ice activated the diffusion of CO$_2$ molecules, and
a transition from isolated CO$_2$ to CO$_2$ clusters was seen in the infrared
spectra. To obtain the diffusion energy barrier and pre-exponential factor
simultaneously, a set of isothermal experiments were carried out. The values
for the diffusion energy barrier and pre-exponential factor were found to be
$1300\pm110$~K and $10^{7.6\pm0.8}$~s$^{-1}$. A comparison with prior
laboratory measurements on diffusion is discussed. | Jiao He, Paula Caroline Pérez Rickert, Tushar Suhasaria, Orianne Sohier, Tia Bäcker, Dimitra Demertzi, Gianfranco Vidali, Thomas K. Henning | 2023-02-16T23:47:17Z | http://arxiv.org/abs/2302.08628v1 | # New measurement of the diffusion of carbon dioxide on non-porous amorphous solid water
###### Abstract
The diffusion of molecules on interstellar grain surfaces is one of the most important driving forces for the molecular complexity in the interstellar medium. Due to the lack of laboratory measurements, astrochemical modeling of grain surface processes usually assumes a constant ratio between the diffusion energy barrier and the desorption energy. This over-simplification inevitably causes large uncertainty in model predictions. We present a new measurement of the diffusion of CO\({}_{2}\) molecules on the surface of non-porous amorphous solid water (np-ASW), an analog of the ice mantle that covers cosmic dust grains. A small coverage of CO\({}_{2}\) was deposited onto an np-ASW surface at 40 K, the subsequent warming of the ice activated the diffusion of CO\({}_{2}\) molecules, and a transition from isolated CO\({}_{2}\) to CO\({}_{2}\) clusters was seen in the infrared spectra. To obtain the diffusion energy barrier and pre-exponential factor simultaneously, a set of isothermal experiments were carried out. The values for the diffusion energy barrier and pre-exponential factor were found to be \(1300\pm 110\) K and \(10^{7.6\pm 0.8}\) s\({}^{-1}\). A comparison with prior laboratory measurements on diffusion is discussed.
ISM; diffusion; molecules; astrochemistry; solid state; carbon dioxide
## 1 Introduction
Dust grains in molecular clouds are covered by an ice mantle consisting of H\({}_{2}\)O, CO\({}_{2}\), CO, NH\({}_{3}\), CH\({}_{4}\), and CH\({}_{3}\)OH[1]. Other molecules, particularly complex organic molecules, which are defined in the astronomy context as carbon-containing molecules with at least six atoms, should also be present in the ice[2]. Various laboratory experiments and astrochemical modeling suggest that most of these molecules are formed by chemical reactions in the solid state rather than formed in the gas phase and then condensed on the grain surface[2]. After their formation, they may desorb, either thermally or non-thermally, and return to the gas phase and be identified by radio
telescopes.
One of the most important mechanisms for chemical reactions on grain surfaces is diffusion. Reactants on the surface need to diffuse in order to encounter the reaction counterparts. There are two types of diffusion, quantum diffusion, which is only important for very light species such as hydrogen, and thermal diffusion, which is relevant for almost all reactants. In most gas-grain astrochemical models, thermal diffusion is the dominant pathway to chemical reactions on grains[3; 4], although in recent years the role of thermal diffusion has been challenged and non-diffusive mechanisms have been proposed as well to account for the formation of complex organic molecules in interstellar ices[5; 6; 7; 8; 9].
Thermal diffusion is normally described by an Arrhenius equation with two parameters, the pre-exponential factor \(\nu_{\rm dif}\), and the diffusion energy barrier \(E_{\rm dif}\). The goal of laboratory experiments on diffusion is to quantify \(E_{\rm dif}\) and \(\nu_{\rm dif}\) for various atoms, radicals, and molecules on representative dust grain surfaces, such as water ice, silicate, or carbonaceous surfaces. Diffusion of reactants on silicate and carbonaceous surfaces is only important before the dust grains are covered by an ice layer. Once water starts to form on the bare grain surface, subsequent condensation of particles happens on top of the ice surface dominated by H\({}_{2}\)O. Most of the existing laboratory experiments on diffusion focused on amorphous water ice, i.e., the so-called amorphous solid water (ASW), which is the dominant form of water ice on the grain surface. This current study also focuses on the diffusion on ASW surface.
Several laboratory experimental studies focused on the diffusion of molecules on the surface of ASW, either on the outer surface of non-porous ASW (np-ASW) or through the inner pores of porous ASW (p-ASW). Zubkov et al. [10] deposited N\({}_{2}\) on ASW of different thicknesses and performed temperature-programmed desorption (TPD) experiments, from which they obtained \(E_{\rm dif}=890\) K and \(\nu_{\rm dif}=9\times 10^{7}\)-\(4\times 10^{8}\) s\({}^{-1}\), respectively. Mispelaer et al. [11] presented one of the first systematic experimental studies on the diffusion of astro-relevant molecules in p-ASW. They covered the target molecule with ASW layers, and then warmed up the ice to a temperature at which the target molecules start to penetrate through the p-ASW and desorb from the ice. By monitoring the remaining amount of target molecules in the ice, they were able to quantify the diffusion rate through the pores. Though it is an important step ahead, it still has some limitations: (1) The diffusion might have already started even during the growth of the ASW layers, and it is not accurate to treat the beginning of the isotherm as the beginning of the diffusion process; (2) The structure of the ASW might be changing during the diffusion/desorption, since no annealing was done to stabilize the structure; (3) The effect of diffusion and desorption are mixed and very difficult to separate, therefore introducing large uncertainty; (4) Based on the design of the experiment, only a narrow temperature range can be explored, and the accuracy in diffusion energy barrier determination is limited. More recently, Mate et al. [12] used a similar approach to measure the diffusion rate of CH\({}_{4}\) in ASW, and therefore inherited the same advantages and drawbacks.
Lauck et al. [13] measured the diffusion of CO in p-ASW. In their experiments, CO and water were deposited at 12 K. Then the ice was flash heated to a temperature between 15 and 23 K for isothermal experiments. They used the fact that pure CO and CO mixed with water ice has different infrared absorption peak shapes to trace the diffusion of CO in porous water ice. Pure CO ice has a sharp peak centered at about 2139 cm\({}^{-1}\), while the mixture of CO and ASW has two peaks at 2139 and 2152 cm\({}^{-1}\), respectively. As diffusion goes on, there is a gradual transition from pure CO to a mixture of CO and water. Therefore, the diffusion rate can be calculated from the
time evolution of this transition. Compared to Mispelaer et al., Lauck et al. solved the problem of a limited temperature span, and the problem with desorption. However, the method by Lauck et al. suffers from an unstable ice structure during the diffusion process, and this method only works for CO diffusion on ASW.
He et al. (2017) [14] (referred to as He17 later) measured the diffusion of CO\({}_{2}\) on np-ASW. Discrete CO\({}_{2}\) molecules and CO\({}_{2}\) clusters have different infrared absorption peaks. With less than a monolayer of CO\({}_{2}\) present on the surface, when the surface temperature increases, CO\({}_{2}\) diffuses and forms clusters. Therefore, there is a transition from discrete CO\({}_{2}\) to CO\({}_{2}\) clusters which is observable in infrared measurement. By using a rate equation model to simulate the diffusion process and to fit the experimental data, the diffusion energy barrier was calculated to be 2100 K. It has to be noted that in this study, the prefactor was assumed to be \(10^{12}\) s\({}^{-1}\), as is customarily done in these studies. If a different prefactor is assumed, the diffusion energy value would be different. It is important to determine both the prefactor and the diffusion energy value simultaneously. An energy value without the correct prefactor could be in error.
Kouchi et al. (2020) [15] used a transmission electron microscope (TEM) to measure the diffusion energy barrier of CO and CO\({}_{2}\) on p-ASW. This is the first attempt to use the TEM technique to measure the diffusion of astro-relevant molecules. In their experiment, they first deposited 10 nm of water ice at 10 K and then deposited, for example CO, while taking TEM scans during deposition. When the ice is covered by the required amount of CO, and if the temperature is high enough for the crystallization to proceed[8], crystals of CO are formed in the ice. They interpreted the forming of crystals as a result of the diffusion of CO and used the distance between crystals to represent the average diffusion distance of CO molecules. By analyzing the diffusion distance at different temperatures, they calculated the diffusion energy barrier. The study by Kouchi et al. suffers from a few drawbacks: (1) The interpretation of the experimental result is not intuitive, and alternative interpretations exist. The observed temperature dependence of crystal distance may be a result of the temperature dependence of the phase transition of CO ice. Such alternative interpretations have to be checked before extending this method to a wider range of systems; (2) The water ice was unstable during the diffusion and could have affected the experimental results; (3) They could not determine the prefactor from the experiments; as mentioned above, it is important to determine both values simultaneously. A similar approach has recently been applied to a wider range of molecules[16].
He et al. (2018)[17] (referred to as He18 later) designed an experiment to measure diffusion based on the fact that when a molecule is binding to the dangling OH (dOH) site of p-ASW, the weak dOH absorption band in the infrared is red-shifted. In the experiment, porous water ice was grown at 10 K, stabilized by annealing to 70 K, and then cooled down to 10 K to deposit the target molecule. Then the ice was warmed to a specific temperature, for example, 20 K, and infrared spectra were measured continuously to monitor the shifting of dOH bands. When diffusion happens, a gradient in concentration of the target molecule is expected in the ice. After a long time, the whole pore surface is evenly covered by the target molecule, and almost all the dOH is at the shifted position. This method was applied to molecules including N\({}_{2}\), O\({}_{2}\), CO, CH\({}_{4}\), and Ar, all of which wet the ASW surface. From the experiments, both the prefactor and energy barrier for diffusion were obtained. The prefactor was found to be mostly in the \(10^{7}\)-\(10^{9}\) s\({}^{-1}\) range, and the diffusion energy barrier was roughly 0.3 times the upper bound of the desorption energy distribution on ASW. Table 1 shows a summary of the above-mentioned experimental studies and some key aspects of the experiments. The same method cannot be applied directly to non
wetting molecules such as CO\({}_{2}\)[18]. In fact, we attempted to measure the diffusion of CO\({}_{2}\) through p-ASW using the same method, but found no clear evidence of CO\({}_{2}\) penetration into the p-ASW to form monomers. This is somehow in agreement with prior experimental studies on the segregation of CO\({}_{2}\) from CO\({}_{2}\):H\({}_{2}\)O ice mixtures. Several groups found that by warming up the mixture of CO\({}_{2}\):H\({}_{2}\)O ice, CO\({}_{2}\) tends to segregate and form clusters or islands, rather than mixing. This indicates that the interaction force between CO\({}_{2}\) molecules is stronger than the interaction force between CO\({}_{2}\) and ASW[19, 20, 21]. A TPD study of CO\({}_{2}\) on ASW also supports this argument[14].
The mixed ice of CO\({}_{2}\):H\({}_{2}\)O had been the focus of a number of experimental studies because of the presence of such mixture in interstellar ices[22, 23, 24, 25, 19, 20]. However, diffusion of CO\({}_{2}\) on ASW surface was not addressed in those studies. Combining the strengths of both He17 and He18, we perform a new set of experiments to measure the diffusion rate of CO\({}_{2}\) on np-ASW. Isothermal experiments are carried out at different temperatures and CO\({}_{2}\) are allowed to diffuse and form clusters. We obtain the diffusion energy barrier \(E_{\rm dif}\) and prefactor \(\nu_{\rm dif}\) values simultaneously from the experiments.
## 2 Experimental
### Experimental setup
Experiments were performed using an ultra-high vacuum (UHV) setup, which has been described previously [14, 17, 21]. The setup consists of a main UHV chamber and two molecular beamlines. In the present study, only the main UHV chamber was used. The chamber is pumped by a combination of scroll pumps, turbomolecular pumps, and a cryopump, and a base pressure of \(2\times 10^{-10}\) mbar is achieved after bake-out. Located at the center of the chamber is a gold-plated copper disk, which is used as the substrate onto which ice samples are grown. The substrate can be cooled down to 6 K using a closed-cycle helium cryostat, and heated up using a resistive heater. The temperature was measured using a silicon diode temperature sensor. A Lakeshore 336 temperature controller recorded and controlled the temperature to an accuracy of 0.05 K. Ice was grown on the substrate by gas/vapor deposition from the chamber background, following a procedure described in detail in He et al. (2018) [21]. Two separate computer-controlled leak valves (VAT variable leak valve 590) were used to deposit H\({}_{2}\)O and CO\({}_{2}\), respectively. The thickness of the ice in monolayer (ML, defined as \(10^{15}\) molecules per cm\({}^{2}\)) was calculated by integrating the chamber pressure during deposition, assuming the sticking as unity [26]. The ionization pressure gauge calibration factors for H\({}_{2}\)O and CO\({}_{2}\) were taken into account in the calculation. Throughout the experiments, a Thermo Fisher Nicolet 6700 Fourier Transform Infrared Spectrometer (FTIR) in the Reflection Absorption Infrared Spectroscopy (RAIRS) configuration was used to monitor the ice composition over time. A spectrum was created by taking an average of nine scans every 12 seconds.

\begin{table}
\begin{tabular}{l|c c c c c c}
\hline
Des. not interfere with dif. & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\times\) \\
H\({}_{2}\)O ice stable during dif. & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) \\
Applicable to IR-inactive mol. & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\times\) \\
Both \(E_{\rm dif}\) and \(\nu_{\rm dif}\) obtained & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\checkmark\) & \(\times\) \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of a few previous laboratory studies on diffusion. Mi13, La15, He17, He18, Ko20, Ma20 and Fu22 stand for Mispelaer et al. (2013)[11], Lauck et al. (2015)[13], He et al. (2017)[14], He et al. (2018)[17], Kouchi et al. (2020)[15], Mate et al. (2020)[12], and Furuya et al. (2022) [16], respectively. The following aspects are considered: (1) whether desorption interferes with diffusion; (2) whether the water ice is stable during the whole diffusion process; (3) whether it is applicable to IR-inactive molecules; (4) whether both \(E_{\rm dif}\) and \(\nu_{\rm dif}\) are obtained simultaneously.
### Experimental procedure
The gold surface was first covered with 30 ML of non-porous amorphous solid water (np-ASW) by vapor deposition at a rate of 6 ML/minute over 5 minutes while the substrate was at 130 K. It remained at 130 K for 20 minutes to stabilize the ice and pump out the residual gas. Subsequently, the substrate was cooled down to 40 K for CO\({}_{2}\) deposition. 0.075 ML of CO\({}_{2}\) was deposited on top of the np-ASW at a deposition rate of 0.1 ML/minute. At 40 K, CO\({}_{2}\) does not diffuse[14], and the coverage is low enough so that almost all the CO\({}_{2}\) molecules are monomers. After CO\({}_{2}\) deposition, flash heating was applied to heat the substrate to the desired target temperature (\(T_{\rm iso}\)) at a ramp rate of 30 K/minute for the isothermal experiment. During the isothermal evolution, CO\({}_{2}\) molecules diffuse and form clusters. For \(T_{\rm iso}>62\) K, the temperature was held for 30 minutes to allow CO\({}_{2}\) to diffuse and form clusters. For \(T_{\rm iso}\leq 62\) K, at which the diffusion is slower, 60 minutes were selected to provide sufficient time for diffusion. After the isothermal measurement, the sample was heated to 110 K to remove the CO\({}_{2}\) ice and leave the np-ASW intact. By 110 K, there is no CO\({}_{2}\) left on the surface, as evidenced by no signature of CO\({}_{2}\) in the RAIRS spectra. The substrate was then cooled down to 40 K again for the next CO\({}_{2}\) deposition. Figure 1 shows the temperature during one segment of the experiment. Different stages of the experiment are marked. To minimize human error, we used a LabVIEW program to regulate the temperature and gas deposition throughout the entire experiment. We explored the \(T_{\rm iso}\) range between 56 and 68 K in 1 K steps.
Figure 1: Temperature of the substrate during one segment of the experiment (isothermal experiment at 67 K). Different stages of the experiment are marked.
## 3 Results and analysis
During the isothermal evolution at each temperature, RAIRS spectra are collected every 12 seconds, and a selection of them is displayed in Figure 2. At the very beginning of the isothermal evolution, the CO\({}_{2}\) absorption profile shows a single peak at 2330 cm\({}^{-1}\), which indicates isolated CO\({}_{2}\) molecules, i.e., monomers, on the surface. This is to be expected given that the coverage is only 0.075 ML and the likelihood of cluster formation is very low before the diffusion process kicks in. After a certain period of time, the 2330 cm\({}^{-1}\) peak drops and a new peak at \(\sim\)2355 cm\({}^{-1}\), indicative of CO\({}_{2}\) clusters [14], grows over time. This suggests that CO\({}_{2}\) is diffusing to form clusters. Considering the very low coverage of CO\({}_{2}\), we assume that the shifting of the adsorption peak position and the formation of clusters are only governed by the diffusion of CO\({}_{2}\) molecules. The rate of diffusion should be temperature-dependent. Indeed, the shifting from 2330 to 2355 cm\({}^{-1}\) peak is faster at 68 K than at 58 K. To quantify the rate of diffusion, we use two Gaussian functions to fit the two peaks and analyze the time and temperature dependence of them. Figure 2 shows the fitting of selected spectra using the sum of two Gaussian functions. This fitting is only an approximation, particularly, since the profile of the 2355 cm\({}^{-1}\) peak is not exactly a Gaussian line shape [21]. Furthermore, even if all the CO\({}_{2}\) are in the form of clusters or in the form of bulk CO\({}_{2}\), the 2330 cm\({}^{-1}\) component is not absent. This is seen in a prior paper [14], which shows that even if CO\({}_{2}\) is allowed to have sufficient diffusion, both peaks are still present. Nonetheless, the shifting from one peak to the other still allows us to derive information about the kinetics. Figure 3 shows the resulting area of the two peaks during the isothermal evolution at each \(T_{\rm iso}\). When the 2330 cm\({}^{-1}\) peak area decreases, the 2355 cm\({}^{-1}\) peak area increases, except when the temperature is higher than 67 K, at which the 2355 cm\({}^{-1}\) peak also drops. This is because the desorption of CO\({}_{2}\) starts when the temperature is close to \(\sim\)67 K [14], and the total area of the two peaks decreases over time. In the other extreme, when the temperature is at 56 K and the diffusion is too slow, the area of the 2355 cm\({}^{-1}\) peak is much noisier than the rest and is considered less reliable. As a result, for the diffusion analysis, we only consider the temperature range of 57 to 65 K.
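For illustration, the two-component decomposition described above can be carried out with a standard nonlinear least-squares fit. The minimal Python sketch below fits a sum of two Gaussians to a single spectrum; the band centres, widths, and the synthetic spectrum are placeholder values for this example, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(nu, a1, c1, w1, a2, c2, w2):
    """Sum of two Gaussian bands (monomer near 2330, cluster near 2355 cm^-1)."""
    return (a1 * np.exp(-(nu - c1)**2 / (2 * w1**2)) +
            a2 * np.exp(-(nu - c2)**2 / (2 * w2**2)))

# Placeholder wavenumber axis and synthetic spectrum; in practice `absorbance`
# would be one measured RAIRS spectrum.
nu = np.linspace(2300, 2380, 400)
absorbance = two_gaussians(nu, 2e-3, 2330, 4, 1e-3, 2355, 5) + 1e-5 * np.random.randn(nu.size)

p0 = [2e-3, 2330, 4, 1e-3, 2355, 5]        # initial guesses near the two band centres
popt, _ = curve_fit(two_gaussians, nu, absorbance, p0=p0)

# Integrated band areas used in the kinetic analysis (amplitude * width * sqrt(2*pi))
area_2330 = popt[0] * popt[2] * np.sqrt(2 * np.pi)
area_2355 = popt[3] * popt[5] * np.sqrt(2 * np.pi)
```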
Next, we use a model to describe the diffusion. Surface diffusion has been reviewed in previous publications [27, 28]. We refer interested readers to those publications. When the concentration of molecules on the substrate surface is low enough so that the interaction force between them is negligible, the diffusion of molecules can be analyzed by assuming that they follow random walk. This type of diffusion is called tracer diffusion or intrinsic diffusion. On a 2D surface such as the np-ASW in the present study, CO\({}_{2}\) molecules jumps from one adsorption site to the nearest neighbour. Although the np-ASW surface is by no means a regular periodic surface, for the sake of simplicity, we still use a 2D lattice to represent it. We have assumed that 1 ML of CO\({}_{2}\) is equivalent to \(10^{15}\) molecules per cm\({}^{2}\); correspondingly, the lattice size is \(l=3.2\) A. In the random walk analysis, diffusion rate \(k_{D}(T)\) can be described by an Arrhenius equation:
\[k_{D}(T)=\nu_{\rm dif}\exp(\frac{-E_{\rm dif}}{k_{\rm B}T}) \tag{1}\]
\(E_{\rm dif}\) is the diffusion energy barrier, and \(\nu_{\rm dif}\) is the prefactor for diffusion. In the literature, the prefactor is sometimes referred to as the attempt frequency. In astronomy, it is a convention to omit the Boltzmann constant \(k_{\rm B}\), and then the diffusion energy
barrier has the unit Kelvin. We denote the time-dependent coverage of CO\({}_{2}\) monomers and dimers as \(C_{1}(t)\) and \(C_{2}(t)\), respectively. They are unitless variables defined to be the fraction of absorption sites holding monomers and dimers. In theory, larger clusters such as trimers should also be considered. However, due to the low coverage of CO\({}_{2}\), the coverage of trimers should be much smaller than dimers and can be ignored. Once dimers are formed, we assume that they do not break apart before desorption. This is reasonable assumption, since the binding force between CO\({}_{2}\) molecules is stronger than the binding force between CO\({}_{2}\) and water [14], and CO\({}_{2}\) molecules are prone to form clusters on ASW. Following He et al. (2017)[14], we describe the coverage of monomers and dimers using the following rate equations:
\[\frac{dC_{1}(t)}{dt} =-2C_{1}(t)^{2}k_{D}(T) \tag{2}\] \[\frac{dC_{2}(t)}{dt} =C_{1}(t)^{2}k_{D}(T) \tag{3}\]
Note that both the left and the right sides of the equations have the unit of s\({}^{-1}\), since \(C_{1}\) and \(C_{2}\) are unitless. The initial condition of the rate equation is \(C_{1}(0)=0.075\) and \(C_{2}(0)=0\). Solving the rate equation analytically, we have:
\[C_{1}(t)=\frac{1}{2k_{D}(T)t+\frac{1}{C_{1}(0)}}=\frac{1}{2k_{D}(T)t+13.33} \tag{4}\]
\(C_{2}(t)\) can be expressed as \(0.5\times(C_{1}(0)-C_{1}(t))\). The next step is to relate \(C_{1}(t)\) and \(C_{2}(t)\) to the area of the 2330 and 2355 cm\({}^{-1}\) peaks. As discussed above, the 2330 cm\({}^{-1}\) peak area is not exactly proportional to \(C_{1}(t)\) because \(C_{2}(t)\) also contributes to it. However,
Figure 2: Selected RAIRS spectra in the range of the CO\({}_{2}\)\(\nu_{3}\) mode during the isothermal experiment at different temperatures. Each panel depicts the CO\({}_{2}\) absorption peak at various times since the start of an isothermal experiment. The time and temperature of the isotherm are indicated in the figure. The black curves represent the experimental data, while the orange curves represent the fittings made with a sum of two Gaussian functions.
it is reasonable to assume they have a linear relation, that is:
\[C_{1}(t)=\frac{a}{2k_{D}(T)t+13.33}+b \tag{5}\]
where \(a\) and \(b\) are two free parameters. This is then used to fit the curves in the top panel of Figure 3, and the fitting as well as the corresponding \(k_{D}(T)\) value are shown in Figure 4. The diffusion rate expression in Equation 1 can be rewritten as:
\[\log(k_{D}(T))=\log(\nu_{\rm dif})-\frac{E_{\rm dif}}{T} \tag{6}\]
This shows that \(\log(k_{D}(T))\) is a linear function of \(1/T\) with a slope of \(-E_{\rm dif}\). In Figure 5 we make an Arrhenius-type plot. A linear fitting of it yields the slope and intercept, from which we obtain the diffusion energy barrier \(E_{\rm dif}=1300\pm 110\) K, and prefactor \(\nu_{\rm dif}=10^{7.6\pm 0.8}\) s\({}^{-1}\). In many other studies on diffusion, it is custom to report the Arrhenius expression of the diffusion constant (units of cm\({}^{2}\) s\({}^{-1}\)) [27]:
\[D=D_{0}\exp(\frac{-E_{\rm dif}}{k_{\rm B}T}) \tag{7}\]
\(D_{0}\) is often called the diffusivity:
\[D_{0}=\exp\Big{(}\frac{\Delta S_{\rm dif}}{k_{\rm B}}\Big{)}\frac{\nu_{0}l^{2}}{4}=\frac{\nu_{\rm dif}l^{2}}{4}; \tag{8}\]
Figure 3: The band areas of the 2330 and 2355 cm\({}^{-1}\) components of the CO\({}_{2}\) absorption peak during the isothermal evolution at different temperatures.
where \(\Delta S_{dif}\) is the change in entropy between the saddle point and the adsorption site, and \(\nu_{0}\) is the small-amplitude frequency (\(\approx 10^{13}\) sec\({}^{-1}\)) of oscillation of the particle in the adsorption well. In this case, D\({}_{0}\)=\(10^{-8.0\pm 0.8}\) cm\({}^{2}\) s\({}^{-1}\).
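The analysis chain of Equations (5), (6), and (8) can be summarized in a short script. The sketch below assumes band-area-versus-time arrays as input (the initial guesses are placeholders); its last lines reproduce the conversion of \(\nu_{\rm dif}=10^{7.6}\) s\({}^{-1}\) and \(l=3.2\) Å into \(D_{0}\approx 10^{-8}\) cm\({}^{2}\) s\({}^{-1}\) quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def band_area(t, a, b, kD):
    """Eq. (5): linear map of the monomer coverage C1(t) onto the 2330 cm^-1 band area."""
    return a / (2.0 * kD * t + 1.0 / 0.075) + b          # C1(0) = 0.075 ML

def fit_kD(t, area):
    """Fit one isothermal band-area curve and return the diffusion rate k_D(T)."""
    popt, _ = curve_fit(band_area, t, area, p0=[1.0, 0.0, 1e-3])   # placeholder guesses
    return popt[2]

def arrhenius_fit(T, kD):
    """Eq. (6): ln k_D = ln(nu_dif) - E_dif / T, so a linear fit in 1/T gives both parameters."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(kD), 1)
    return -slope, np.exp(intercept)                      # E_dif in K, nu_dif in s^-1

# Diffusivity prefactor, Eq. (8), with the lattice constant l = 3.2 Angstrom
l_cm, nu_dif = 3.2e-8, 10**7.6
D0 = nu_dif * l_cm**2 / 4.0                               # ~1.0e-8 cm^2 s^-1
```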
## 4 Discussions
Molecule diffusion is crucial for the chemistry on/in interstellar ice. However, laboratory measurements of diffusion are lacking, limiting the accuracy of astrochemical modeling of grain surface processes. The current study builds on prior studies [14, 17] to measure the diffusion energy barrier and prefactor for CO\({}_{2}\) on np-ASW surface simultaneously. A whole set of isothermal experiments were carried out to quantify the diffusion rate at different temperatures, from which the energy barrier and the prefactor for diffusion were then obtained.
It is worthwhile to compare this study to prior experimental studies. He17 deposited a small coverage of CO\({}_{2}\) on np-ASW and warmed up the ice slowly to monitor the shifting of infrared absorption peaks with increasing temperature. Because it was only a single measurement, it was not possible to derive the diffusion energy barrier and the prefactor simultaneously. It was assumed that the diffusion and desorption prefactor were the same, \(\nu_{\rm dif}=\nu_{\rm des}=10^{12}\) s\({}^{-1}\), and based on a rate equation model, the \(E_{\rm dif}\) value was calculated to be \(2150\pm 50\) K, which is about 0.96 times the desorption energy of CO\({}_{2}\) from np-ASW surface. The main drawback of He17 is the assumption that the diffusion and desorption prefactors are equivalent, which is typically not the case, as demonstrated by He18. Here we recalculate the \(E_{\rm dif}\) value for the experiment that is shown in Fig. 10 of He17, but replacing the \(\nu_{\rm dif}\) value \(10^{12}\) s\({}^{-1}\) by \(10^{7.6}\) s\({}^{-1}\) obtained from the present study. Figure 6 shows the equivalent of Fig. 10 in He17. By lowering the diffusion prefactor, the diffusion energy barrier is reduced to \(1400\pm 100\) K, which is in agreement with the value obtained by the present study \(1300\pm 110\) K within error allowance.
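The size of this shift can be rationalized with a rough estimate (not part of the original fits): if the two Arrhenius parametrizations are to reproduce the same diffusion rate near the temperature \(T^{*}\) at which the monomer-to-cluster transition is observed (of order 70 K in these experiments), then
\[\nu_{\rm dif}\,e^{-E_{\rm dif}/T^{*}}\simeq\nu^{\prime}_{\rm dif}\,e^{-E^{\prime}_{\rm dif}/T^{*}}\;\;\Rightarrow\;\;E_{\rm dif}-E^{\prime}_{\rm dif}\simeq T^{*}\ln\!\left(\frac{\nu_{\rm dif}}{\nu^{\prime}_{\rm dif}}\right)\approx 70\,\mathrm{K}\times\ln 10^{4.4}\approx 7\times 10^{2}\,\mathrm{K},\]
which is of the same order as the reduction from \(\approx\!2150\) K to \(\approx\!1400\) K obtained by refitting.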
In the present study, we carried out isothermal experiments, similar to what He18
Figure 4: The band area of the 2330 cm\({}^{-1}\) component during isothermal experiment at different temperatures and the fitting using Equation 5. The best fit \(k_{D}\) values are shown in the figure.
did, and indeed obtained a diffusion prefactor much lower than desorption, in agreement with He18. Based on prior studies on diffusion (see a review in [29]), it is not uncommon to see different diffusion prefactors, either higher or lower than the one for desorption. This present study, along with He18, makes a strong case that in astrochemical modeling one should make a distinction between diffusion prefactor \(\nu_{\rm dif}\) and desorption prefactor \(\nu_{\rm des}\). Based on these two studies, we suggest using a \(\nu_{\rm dif}\) value about 3-5 orders of magnitude lower than \(\nu_{\rm des}\) in modeling. However, we acknowledge that this is only based on the measurement of a few volatile molecules on the ASW surface and that exceptions may be possible for other molecules. More diffusion measurements are required to confirm it.
Compared to He18, the Arrhenius fitting in Figure 5 has a much larger error, and correspondingly, the uncertainty of diffusion energy barrier and prefactor values are larger. At least the following factors contribute to the larger uncertainty:
1. The peak profile of CO\({}_{2}\) is more complicated than the dOH profile, as in He18. One cannot simply attribute the 2330 cm\({}^{-1}\) peak exclusively to CO\({}_{2}\) monomers. In our model, we had to simplify the problem by assuming a linear dependence between the coverage of monomers and the 2330 cm\({}^{-1}\) peak area. This could be one important source of uncertainty.
2. We assumed that there is only a single diffusion energy barrier that governs the formation of clusters. In practice, we cannot exclude the possibility that more than one activation energy is contributing.
3. A Gaussian function does not describe the 2355 cm\({}^{-1}\) component exactly. It is clear in Figure 2 that the fitting of the 2355 cm\({}^{-1}\) peak is not ideal.
4. Ignoring clusters larger than dimers may also induce some uncertainty.
A prior experimental determination of CO\({}_{2}\) on ASW surface was also done by Kouchi et al. (2020)[15], who reported an energy value of 1500 K, which is slightly higher than
Figure 5: Arrhenius plot \(log(k_{D})\) over \(1/T\), where \(k_{D}\) is the diffusion rate, and \(T\) is the isotherm temperature. The linear fitting is shown in orange.
what we found in the present study. This difference could be due to the uncertainty in the prefactor that is assumed in Kouchi et al. They were unable to determine the prefactor value simultaneously with the energy value. It would be interesting to repeat the experiments in Kouchi et al. at different temperatures and derive the values of both the prefactor and the energy simultaneously. Karssemeijer & Cuppen (2014)[30] did molecular dynamics modeling calculation of CO\({}_{2}\) diffusion on water ice, and found the diffusion energy value to be \(\sim 1470\) K on a disordered water surface, which is close to our measured value.
In the present study, we only presented the experiments on np-ASW. Although the porosity of the ice mantle on interstellar grains is still under debate, it is probably the case that it is neither highly porous nor completely compact. It is still relevant to consider the diffusion on the surface of both p-ASW and np-ASW. We attempted to run experiments similar to He18, i.e., tracing the diffusion of CO\({}_{2}\) into p-ASW by monitoring the infrared spectra. Unfortunately, the diffusion of CO\({}_{2}\) on p-ASW surface is not efficient enough to allow entry into the pores. CO\({}_{2}\) primarily remains in the top layers as "pure" CO\({}_{2}\). The results are not shown here. Nonetheless, the results on np-ASW may apply to p-ASW as well. Previous experimental studies [10; 31] have suggested a similarity between the surface of a np-ASW and the pore surface of a p-ASW that is annealed to 60 K or above in terms of binding energy distribution and the fraction of dangling OH groups. The diffusion energy barrier and prefactor measured from np-ASW are likely to apply to the pore surface of p-ASW. We suggest that, in general, more experimental work is needed to measure diffusion on interstellar dust grain analogs.
Figure 6: Recalculation of the diffusion energy barrier for the experiment in Fig. 10 of He17. The prefactor value \(10^{12}\) s\({}^{-1}\) is replaced by \(10^{7.6}\) s\({}^{-1}\). See He17 for more details.
## Acknowledgement
We dedicate this paper to the memory of Dieter Gerlich, who was a good friend and colleague of some of us. We acknowledge the support from the European Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant Origins 83 24 28, and from NSF Astronomy & Astrophysics Research Grant #1615897.
|
2303.08501 | Floquet nonadiabatic dynamics in open quantum systems | The Born-Oppenheimer (BO) approximation has shaped our understanding on
molecular dynamics microscopically in many physical and chemical systems.
However, there are many cases that we must go beyond the BO approximation,
particularly when strong light-matter interactions are considered. Floquet
theory offers a powerful tool to treat time-periodic quantum systems. In this
overview, we briefly review recent developments on Floquet nonadiabatic
dynamics, with a special focus on open quantum systems. We first present the
general Floquet Liouville von-Neumann (LvN) equation. We then show how to
connect Floquet operators to real time observables. We proceed to outline the
derivation of the Floquet quantum master equation in treating the dynamics
under periodic driving in open quantum systems. We further present the Floquet
mixed quantum classical Liouville equation (QCLE) to deal with coupled
electron-nuclear dynamics. Finally, we embed FQCLE into a classical master
equation (CME) to deal with Floquet nonadiabatic dynamics in open quantum
systems. The formulations are general platforms for developing trajectory based
dynamical approaches. As an example, we show how Floquet QCLE and Floquet CME
can be implemented into a Langevin dynamics with Lorentz force and surface
hopping algorithms. | Vahid Mosallanejad, Yu Wang, Jingqi Chen, Wenjie Dou | 2023-03-15T10:21:41Z | http://arxiv.org/abs/2303.08501v1 | # Floquet nonadiabatic dynamics in open quantum systems
###### Abstract
The Born-Oppenheimer (BO) approximation has shaped our understanding on molecular dynamics microscopically in many physical and chemical systems. However, there are many cases that we must go beyond the BO approximation, particularly when strong light-matter interactions are considered. Floquet theory offers a powerful tool to treat time-periodic quantum systems. In this overview, we briefly review recent developments on Floquet nonadiabatic dynamics, with a special focus on open quantum systems. We first present the general Floquet Liouville von-Neumann (LvN) equation. We then show how to connect Floquet operators to real time observables. We proceed to outline the derivation of the Floquet quantum master equation in treating the dynamics under periodic driving in open quantum systems. We further present the Floquet mixed quantum classical Liouville equation (QCLE) to deal with coupled electron-nuclear dynamics. Finally, we embed FQCLE into a classical master equation (CME) to deal with Floquet nonadiabatic dynamics in open quantum systems. The formulations are general platforms for developing trajectory based dynamical approaches. As an example, we show how Floquet QCLE and Floquet CME can be implemented into a Langevin dynamics with Lorentz force and surface hopping algorithms.
## 1 Introduction
Molecular dynamics (MD) in the presence of an external periodic driving field is a research topic of intense interests both experimentally and theoretically [1, 2, 3, 4]. For instance, using laser as the driving field, phenomena such as higher-order harmonic generation, molecular structural deformations, photodissociation and ionization have been observed [5, 6]. Early MD simulations were based on the Born-Oppenheimer (BO) or adiabatic approximation which simply implies that nuclei classically evolve on one Potential Energy Surface (PES), typically the ground state PES [7]. However, growing evidence supports the breakdown of the BO approximation in many systems [8, 9]. For instance, the appearance of
non-adiabatic coupling is evident in a molecular junction subjected to intense laser light [10]. In general, non-adiabatic phenomena can occur (I) during excited state dynamics and (II) in the presence of external driving [11]. A variety of theoretical approaches, such as the time-dependent Schrodinger equation (which is practical only for small systems like \(H_{2}^{+}\)), multi-configurational time dependent Hartree (MCTDH) [12], the hierarchical quantum master equation (HQME) [13, 14], exact factorization [15], Ab Initio Multiple Spawning [16], decoherence corrected fewest-switches surface hopping [17, 18], and other methods have been proposed to incorporate nonadiabatic effects into molecular dynamics. Currently, developing strategies that can incorporate periodic external driving into current non-adiabatic MD methods is one of the main research tasks [19, 20, 21].
The modest aim of this Overview is to highlight how the mixed Quantum-Classical Liouville equation (QCLE) and the QCLE-classical master equation (QCLE-CME) can be complemented by the Floquet theorem to deal with nonadiabatic dynamics with periodic driving in closed and open quantum systems. Floquet theory was first introduced by Gaston Floquet in 1883, when he analysed the stability of a first order linear differential equation, \(\dot{X}(t)=A(t)X(t)\), provided that the matrix \(A(t)\) is periodic [22]. The Floquet theorem thus provides a method for the analysis of dynamical systems subjected to a periodic driving. Since the Schrodinger equation is first order in its time derivative, the Floquet theorem is applicable to a variety of quantum systems with time-periodic Hamiltonians. Nearly 80 years later, Shirley introduced the Floquet time-evolution operator to solve the time-periodic Schrodinger equation [23]. The Floquet theorem for a time-periodic Hamiltonian is analogous to Bloch's theorem for position-periodic Hamiltonians in solids [24, 25]. Floquet engineering in the solid state usually results in a splitting of the Bloch bands (hybrid states often called Floquet-Bloch states, or the Floquet quasi-energy spectrum) [26, 27], and in many cases such driven systems support nontrivial topology [28]. Knowing that the BO approximation breaks down when electronic states get close in energy (nontrivial avoided crossings), and that the PESs in molecular dynamics are the analogue of the Bloch bands in solids, one may expect that the equivalent Floquet PESs (replicas) give us a powerful tool to deal with non-adiabatic dynamics in the presence of intense light.
This paper is structured as follows. In section 2, we will first introduce the Floquet Liouville von-Neumann equation. Then, we explain two types of Floquet Redfield quantum master equations (QME) to deal with Floquet dynamics in open quantum systems. In section 3, we will introduce the Floquet quantum classical Liouville equation (QCLE) to incorporate nuclear motion classically. Then, we will
introduce the Floquet electronic friction and its numerical realization using the concept of one-body Floquet Green's function. In section 4, we introduce our Floquet QCLE-CME approaches to deal with nonadiabatic dynamics in open quantum systems in general. Then, we employ our surface hopping algorithms to solve this Floquet QCLE-CME for a time-periodic Anderson-Holstein model.
## 2 Floquet Quantum Master Equation
### Fourier and Floquet representations of Liouville von-Neumann (LvN) equation
In the Schrodinger picture, the Liouville von-Neumann (LvN) equation is
\[\frac{d}{dt}\hat{\rho}(t)=-i\big{[}\hat{H}(t),\hat{\rho}(t)\big{]}. \tag{1}\]
We have set \(\hbar=1\). \(\hat{\rho}(t)\) is the total density operator that includes the system and the bath. \(\hat{H}(t)\) is the total Hamiltonian that is time-periodic, \(\hat{H}(t+T)=\hat{H}(t)\). A convenient and elegant way to derive time evolution of a time-periodic quantum system is based on a sequence of three closely related representations of LvN equation as: (I) The Fourier vector-like (V-like) representation, (II) Fourier matrix-like (M-like) representation and (III) Floquet M-like representation. In the first representation, we substitute the following Fourier expansions
\[\hat{\rho}(t)=\sum_{n}\hat{\rho}^{(n)}(t)e^{in\omega t}, \tag{2}\]
\[\hat{H}(t)=\sum_{n}\hat{H}^{(n)}e^{in\omega t}, \tag{3}\]
into LvN equation:
\[\sum_{n}\left(\frac{d\hat{\rho}^{(n)}(t)}{dt}+in\omega\hat{\rho}^{(n)}(t)\right)e^{in\omega t}=\sum_{n,m}-i\big{[}\hat{H}^{(n-m)},\hat{\rho}^{(m)}(t)\big{]}e^{in\omega t}. \tag{4}\]
We call the above expression the Fourier V-like representation of the LvN equation.
To obtain the type (II), i.e. Fourier M-like, representation, we now introduce the ladder operator \(\hat{L}_{n}\), which is an operator in the Fourier space \(\{|n\rangle\}\) with the following properties
\[\hat{L}_{n}|m\rangle=|m+n\rangle,\ \ \ \ \ \ \hat{L}_{n}\hat{L}_{m}=\hat{L}_{m} \hat{L}_{n}=\hat{L}_{n+m}\Rightarrow[\hat{L}_{n},\hat{L}_{m}]=0. \tag{5}\]
Such that we can define the density operator and Hamiltonian in the Fourier M-like representation
\[\hat{\rho}^{f}(t)=\sum_{n}\hat{L}_{n}\otimes\hat{\rho}^{(n)}(t)e^{in\omega t}, \tag{6}\]
\[\hat{H}^{f}(t)=\sum_{n}\hat{L}_{n}\otimes\hat{H}^{(n)}e^{in\omega t}. \tag{7}\]
We expect that the LvN equation also holds in the Fourier M-like representation
\[\frac{d}{dt}\hat{\rho}^{f}(t)=-i\big{[}\hat{H}^{f}(t),\hat{\rho}^{f}(t)\big{]}. \tag{8}\]
Indeed, if we substitute \(\hat{\rho}^{f}(t)\) and \(\hat{H}^{f}(t)\) into the LvN equation, we arrive at
\[\sum_{n}\hat{L}_{n}\otimes\left(\frac{d\hat{\rho}^{(n)}(t)}{dt}+in\omega\hat{\rho}^{(n)}(t)\right)e^{in\omega t}=\sum_{n,m}-i\hat{L}_{n}\otimes\big{[}\hat{H}^{(n-m)},\hat{\rho}^{(m)}(t)\big{]}e^{in\omega t}, \tag{9}\]
which is the Fourier M-like counterpart of the Fourier V-like representation of the LvN equation given above.
To arrive at the Floquet M-like representation, we define
\[\hat{\rho}^{F}(t)=e^{-i\bar{N}\otimes 1\omega t}\ \hat{\rho}^{f}(t)\ \ e^{i\bar{N} \otimes 1\omega t}. \tag{10}\]
Here, \(\bar{N}\) is the number operator in the Fourier space with the following properties:
\[\bar{N}|n\rangle=n|n\rangle,\qquad\big{[}\bar{N},\underline{\hat{L}}_{n}\big{]} =+n\underline{\hat{L}}_{n}. \tag{11}\]
Such that we can rewrite the density operator in the Floquet M-like representation as
\[\hat{\rho}^{F}(t)=\sum_{n}e^{-i\bar{N}\otimes 1\omega t}\underline{\hat{L}}_{n }\otimes\hat{\rho}^{(n)}(t)e^{i\bar{N}\otimes 1\omega t}e^{in\omega t}=\sum_{n} \underline{\hat{L}}_{n}\otimes\hat{\rho}^{(n)}(t). \tag{12}\]
In the above equation, we have used the Baker-Campbell-Hausdorff (BCH) formula. Notice that the phase factor \(e^{in\omega t}\) has been cancelled. We find that \(\hat{\rho}^{F}(t)\) follows the LvN equation in the Floquet representation:
\[\frac{d}{dt}\hat{\rho}^{F}(t)=-i\big{[}\bar{N}\otimes 1\omega,\hat{\rho}^{F} (t)\big{]}-i\left[\sum_{n}\underline{\hat{L}}_{n}\otimes H^{(n)},\hat{\rho}^ {F}(t)\right]\equiv-i\big{[}\bar{H}^{F},\hat{\rho}^{F}(t)\big{]}. \tag{13}\]
Here we define the Floquet Hamiltonian as:
\[\bar{H}^{F}=\sum_{n}\underline{\hat{L}}_{n}\otimes H^{(n)}+\bar{N}\otimes 1\omega \tag{14}\]
The above equation is what we refer to as the Floquet LvN equation. We have thus reduced the time-dependent Hamiltonian to a time-independent one. As a result, the Floquet basis is a tensor product of the Hilbert-space basis \(\{|\alpha\rangle\}\) and the Fourier basis \(\{|n\rangle\}\): \(\{|\alpha,n\rangle\}=\{|n\rangle\}\otimes\{|\alpha\rangle\}\). The form of \(\bar{H}^{F}\) is consistent with the form that is obtained from the expansion of the time-evolution operator or the form that is derived from the concept of quasi-energy [23, 24]. However, in our derivation, we arrive at the Floquet LvN equation that is formally equivalent to the non-Floquet LvN equation, which will allow us to derive the formal results that we show below.
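As a practical illustration of Equation (14), the Python sketch below builds the truncated Floquet Hamiltonian for a periodically driven two-level system and diagonalizes it to obtain the quasi-energy spectrum. The model parameters and the Fourier cutoff are assumptions chosen only for this example.

```python
import numpy as np

def floquet_hamiltonian(h_fourier, omega, nmax):
    """Truncated Floquet Hamiltonian of Eq. (14): block (n, m) = H^(n-m) + n*omega*delta_{nm}.

    h_fourier: dict {n: H^(n)} of Fourier components of H(t) = sum_n H^(n) exp(i n omega t);
    nmax: Fourier cutoff, so that harmonics n = -nmax, ..., +nmax are kept.
    """
    dim = next(iter(h_fourier.values())).shape[0]
    nb = 2 * nmax + 1
    HF = np.zeros((nb * dim, nb * dim), dtype=complex)
    for i, n in enumerate(range(-nmax, nmax + 1)):
        for j, m in enumerate(range(-nmax, nmax + 1)):
            HF[i*dim:(i+1)*dim, j*dim:(j+1)*dim] = h_fourier.get(n - m, np.zeros((dim, dim)))
        HF[i*dim:(i+1)*dim, i*dim:(i+1)*dim] += n * omega * np.eye(dim)
    return HF

# Assumed example: a two-level system with H(t) = H0 + 2*V*cos(omega*t),
# i.e. H^(0) = H0 and H^(+1) = H^(-1) = V.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
HF = floquet_hamiltonian({0: 0.5 * sz, 1: 0.1 * sx, -1: 0.1 * sx}, omega=1.0, nmax=20)
quasi_energies = np.linalg.eigvalsh(HF)   # quasi-energy replicas spaced by omega
```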
### Evolution in Floquet representation
Since the Floquet Hamiltonian is time-independent, we can formally solve the Floquet LvN equation:
\[\hat{\rho}^{F}(t)=\bar{U}^{F}(t,t_{0})\hat{\rho}^{F}(t_{0})\bar{U}^{F-1}(t,t_{ 0}),\qquad\bar{U}^{F}(t,t_{0})=e^{-i\bar{H}^{F}(t-t_{0})}. \tag{15}\]
\(\bar{U}^{F}(t,t_{0})\) is the Floquet time evolution operator. With this solution, we can calculate observables. For any operator \(\hat{O}\), we can define the corresponding operator in the Floquet M-like representation,
\[\hat{O}^{F}=\sum_{k}\hat{L}_{k}\otimes\hat{O}^{(k)} \tag{16}\]
Here \(\hat{O}^{(k)}\) is the corresponding Fourier component, \(\hat{O}(t)=\sum_{k}\hat{O}^{(k)}e^{ik\omega t}\). The expectation value of observable, \(\langle\hat{O}(t)\rangle\) can be expressed as
\[\langle\hat{O}(t)\rangle=\sum_{k,n}Tr(\hat{O}^{(k)}\hat{\rho}^{(n)}(t))e^{i(n+k)\omega t}=\sum_{m,n}Tr(\hat{O}^{(m-n)}\hat{\rho}^{(n)}(t))e^{im\omega t}. \tag{17}\]
Here, \(Tr\) denotes trace over Hilbert space. Notice that in the Floquet M-like representation, we have
\[\hat{O}^{F}\hat{\rho}^{F}(t)=\sum_{k,n}\hat{L}_{k}\otimes\hat{O}^{(k)}\hat{L}_{n}\otimes\hat{\rho}^{(n)}(t)=\sum_{k,n}\hat{L}_{k}\hat{L}_{n}\otimes\hat{O}^{(k)}\hat{\rho}^{(n)}(t)=\sum_{m}\hat{L}_{m}\otimes\sum_{n}\hat{O}^{(m-n)}\hat{\rho}^{(n)}(t)\,, \tag{18}\]
Such that the observable can be calculated as
\[\langle\hat{O}(t)\rangle=\sum_{m}Tr(\langle m|\hat{O}^{F}\hat{\rho}^{F}(t)|0\rangle)e^{im\omega t} \tag{19}\]
Here, we have used the property of the Ladder operator, \(\langle m|\hat{L}_{m}|0\rangle=1\).
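In a numerical implementation, Equation (19) amounts to reading out a single block column of the Floquet-space matrices. A minimal sketch, assuming the same Fourier-block ordering as in the construction above, is:

```python
import numpy as np

def observable_harmonic(OF, rhoF, m, dim, nmax):
    """Return Tr(<m| O^F rho^F |0>), the m-th Fourier harmonic of <O(t)> in Eq. (19).

    OF and rhoF are matrices in the truncated Floquet space, with Fourier blocks
    ordered as n = -nmax, ..., +nmax (the same ordering used above).
    """
    prod = OF @ rhoF
    row, col = (m + nmax) * dim, nmax * dim     # <m| block row and |0> block column
    return np.trace(prod[row:row + dim, col:col + dim])

# <O(t)> = sum_m observable_harmonic(OF, rhoF, m, dim, nmax) * exp(1j * m * omega * t)
```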
Similarly, the time evolution operator in Schrodinger representation \(\hat{U}(t,t_{0})\) can be expressed in terms of its Floquet version, \(\hat{U}^{F}(t,t_{0})\)
\[\hat{U}(t,t_{0})=\sum_{k}\hat{U}^{(k)}(t,t_{0})e^{ik\omega t}=\sum_{k}\langle k |\hat{U}^{F}(t,t_{0})|0\rangle e^{ik\omega t}=\sum_{k}\langle k|e^{-i\hat{H}^ {F}(t-t_{0})}|0\rangle e^{ik\omega t}. \tag{20}\]
The above matrix form for \(\hat{U}(t,t_{0})\) is identical to the relation achieved from the expansion solution [24]. We can also go to the diagonalized (adiabatic) basis set of the \(\hat{H}^{F}\)
\[\hat{U}(t,t_{0})=\sum_{k}\langle k|YY^{\dagger}e^{-i\hat{H}^{F}(t-t_{0})}YY^ {\dagger}|0\rangle\;e^{ik\omega t}=\sum_{k}\langle k|Ye^{-i\hat{\Lambda}^{F}(t -t_{0})}Y^{\dagger}|0\rangle e^{ik\omega t}. \tag{21}\]
\(Y\) and \(\hat{\Lambda}^{F}\) are the eigenvectors and eigenvalues of the Floquet Hamiltonian such that \(Y^{\dagger}\hat{H}^{F}\,Y=\hat{\Lambda}^{F}\).
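Equations (20) and (21) can be checked numerically. The sketch below builds the truncated Floquet Hamiltonian for a driven two-level system, evaluates \(\hat{U}(t,t_{0})\) from the matrix exponential of the Floquet Hamiltonian, and compares the result with a brute-force time-ordered propagation; all parameters are assumptions for this example, and the agreement is limited only by the Fourier cutoff and the time step.

```python
import numpy as np
from scipy.linalg import expm

# Driven two-level system (assumed example): H(t) = H0 + 2*V*cos(omega*t),
# i.e. Fourier components H^(0) = H0 and H^(+1) = H^(-1) = V.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, V, omega, nmax, dim = 0.5 * sz, 0.1 * sx, 1.0, 20, 2

# Truncated Floquet Hamiltonian: block (n, m) = H^(n-m) + n*omega*delta_{nm}.
nb = 2 * nmax + 1
HF = np.zeros((nb * dim, nb * dim), dtype=complex)
for i, n in enumerate(range(-nmax, nmax + 1)):
    HF[i*dim:(i+1)*dim, i*dim:(i+1)*dim] = H0 + n * omega * np.eye(dim)
    if i + 1 < nb:  # H^(+1) and H^(-1) couple neighbouring Fourier blocks
        HF[(i+1)*dim:(i+2)*dim, i*dim:(i+1)*dim] = V
        HF[i*dim:(i+1)*dim, (i+1)*dim:(i+2)*dim] = V

def U_floquet(t, t0):
    """Eq. (20): U(t,t0) = sum_k <k| exp(-i H^F (t-t0)) |0> exp(i k omega t)."""
    prop = expm(-1j * HF * (t - t0))
    col = nmax * dim                      # the |0> Fourier block
    U = np.zeros((dim, dim), dtype=complex)
    for i, k in enumerate(range(-nmax, nmax + 1)):
        U += prop[i*dim:(i+1)*dim, col:col+dim] * np.exp(1j * k * omega * t)
    return U

def U_direct(t, t0, nsteps=20000):
    """Brute-force time-ordered propagation with small midpoint steps."""
    U, dt = np.eye(dim, dtype=complex), (t - t0) / nsteps
    for s in range(nsteps):
        ts = t0 + (s + 0.5) * dt
        U = expm(-1j * (H0 + 2 * V * np.cos(omega * ts)) * dt) @ U
    return U

print(np.abs(U_floquet(7.3, 0.0) - U_direct(7.3, 0.0)).max())  # small residual
```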
### Redfield QME
Quantum master equations (QMEs) are among the simplest ways to treat the dynamics of open systems [29]. Since the focus of this overview is on Floquet dynamics in open quantum systems, we will first cover the Floquet QME. A typical open quantum system consists of a small system (the System) and one or several rather large terminals (the Bath) with very dense, non-interacting states in thermodynamic equilibrium. Here, we denote the Hamiltonians of the System and the Bath as \(\hat{H}_{S}\) and \(\hat{H}_{B}\). In addition, \(\hat{H}_{BS}\) denotes the interaction of the System with the environment.
In the following, we explain two forms of the Floquet QME. In general, the QME enables us to study the time evolution of the System's density operator \(\rho_{S}(t)\) (the so-called reduced density operator) rather than the untraceable total density operator \(\rho_{tot}(t)\). From \(\rho_{S}(t)\), one can evaluate other physical observables such as the expectation value of the number operator. In what follows, we assume that only the System's Hamiltonian is time-periodic, \(H_{s}(t)=H_{s}(t+T)\), where \(T=2\pi/\omega\) and \(\omega\) are the driving period and the driving frequency, respectively. With this restriction, there are two approaches in which the QME can benefit from the mathematical Floquet theory. The first approach is the Redfield EOM for the reduced \(\rho_{S}(t)\), wherein the Floquet theorem simplifies the solution procedure of the EOM. In the second approach, we first transform the total density operator to the Floquet representation, \(\hat{\rho}_{tot}(t)\rightarrow\hat{\rho}_{tot}^{F}(t)\), and then the result is reduced to the System's Floquet density matrix, \(\rho_{tot}^{F}(t)\rightarrow\rho_{S}^{F}(t)\).
Using the second quantization form, a general expression for the three major components of the total Hamiltonian can be given by:
\[\hat{H}_{S}(t)=\sum_{\alpha,\beta}h_{\alpha\beta}(t)\,\hat{c}_{\alpha}^{\dagger}\hat{c}_{\beta},\qquad\hat{H}_{B}=\sum_{k\in L,R}\,\varepsilon_{k}\,\hat{e}_{k}^{\dagger}\hat{e}_{k},\qquad\hat{H}_{BS}=\sum_{k\in L,R,\alpha}V_{k\alpha}^{*}\hat{c}_{\alpha}^{\dagger}\hat{e}_{k}+V_{k\alpha}\hat{e}_{k}^{\dagger}\hat{c}_{\alpha}, \tag{22}\]
where \(\hat{c}_{\alpha}^{\dagger}\) is the creation operator for an electronic orbital in the system and \(\hat{e}_{k}^{\dagger}\) is the creation operator for electronic orbital \(k\) in the bath. \(h_{\alpha\beta}\) is a matrix element of the system Hamiltonian. We have set that the system is time-periodic, \(H_{s}(t)=H_{s}(t+T)\). For the system-bath couplings, we employ the wide-band approximation [30] such that the hybridization function \(\Gamma_{\alpha\beta}(\varepsilon)\) is independent of \(\varepsilon\)
\[\Gamma_{\alpha\beta}(\varepsilon)=2\pi\sum_{k}V_{k\alpha}V_{k\beta}^{*}\delta(\varepsilon-\varepsilon_{k})=\Gamma_{\alpha\beta}. \tag{23}\]
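As a small numerical illustration of the wide-band limit, a dense set of equally spaced lead levels with constant couplings yields an energy-independent hybridization. The discretization parameters in the sketch below are assumptions for this example only.

```python
import numpy as np

# Discretized single-orbital version of Eq. (23): equally spaced lead levels with
# constant couplings give an energy-independent Gamma (all numbers are assumptions).
nk = 20000
eps_k = np.linspace(-10.0, 10.0, nk)               # dense lead levels
W = eps_k[1] - eps_k[0]                            # level spacing
V = np.sqrt(0.05 * W / (2 * np.pi)) * np.ones(nk)  # chosen so that Gamma = 0.05

def gamma(eps, sigma=0.05):
    """Broadened delta function turns the discrete sum into a smooth hybridization."""
    delta = np.exp(-(eps - eps_k)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return 2 * np.pi * np.sum(np.abs(V)**2 * delta)

print(gamma(0.0), gamma(3.0))   # both close to 0.05, i.e. flat in the wide-band sense
```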
To derive the Redfield QME with periodic driving, we start with the LvN equation for the total density operator in the interaction picture,
\[\frac{d}{dt}\tilde{\rho}_{tot}(t)=-i\big{[}\tilde{H}_{BS}(t),\tilde{\rho}_{tot}(t)\big{]}. \tag{24}\]
Here the total density operator and interaction Hamiltonian are written in the interaction picture:
\[\begin{split}&\tilde{H}_{BS}(t)=e^{i\hat{H}_{B}t}\mathcal{T}e^{i\int_{0}^{t}\hat{H}_{S}(s)ds}\,\hat{H}_{BS}\,e^{-i\hat{H}_{B}t}\mathcal{T}e^{-i\int_{0}^{t}\hat{H}_{S}(s)ds},\\ &\tilde{\rho}_{tot}(t)=e^{i\hat{H}_{B}t}\mathcal{T}e^{i\int_{0}^{t}\hat{H}_{S}(s)ds}\,\hat{\rho}_{tot}(t)\,e^{-i\hat{H}_{B}t}\mathcal{T}e^{-i\int_{0}^{t}\hat{H}_{S}(s)ds}.\end{split} \tag{25}\]
where \(\mathcal{T}\) is the time-ordering operator. As is typically done in deriving the QME, we integrate the LvN equation over time and then substitute the result back into the LvN equation, such that we arrive at
\[\frac{d}{dt}\tilde{\rho}_{tot}(t)=-i[\tilde{H}_{BS}(t),\tilde{\rho}_{tot}(0)]- \int_{0}^{t}[\tilde{H}_{BS}(t),[\tilde{H}_{BS}(t^{\prime}),\tilde{\rho}_{tot}(t ^{\prime})]]dt^{\prime}. \tag{26}\]
The derivation proceeds by taking the _Born_ approximation, which states that the system and the bath are separable due to weak coupling, such that the total density matrix can be written as \(\tilde{\rho}_{tot}(t)=\tilde{\rho}_{S}(t)\otimes\tilde{\rho}_{B}\). We then trace over the bath on both sides of the above equation
\[\frac{d}{dt}\tilde{\rho}_{S}(t)=-\int_{0}^{t}Tr_{B}[\tilde{H}_{BS}(t),[\tilde{ H}_{BS}(t^{\prime}),\tilde{\rho}_{S}(t^{\prime})\otimes\tilde{\rho}_{B}]]dt^{\prime}. \tag{27}\]
Note that, it can be shown that \(Tr_{B}[\tilde{H}_{BS}(t),\tilde{\rho}_{tot}(0)]=0\), provided that \(H_{BS}(t)\) is a bi-linear function of the bath and system operators and the initial density operator is in equilibrium. The last assumption is the _Markovian_ approximation, in which one assumes that the bath dynamics are much faster than the system, such that one can ignore the memory effects of the bath dynamics:
\[\frac{d}{dt}\tilde{\rho}_{S}(t)=-\int_{0}^{\infty}Tr_{B}[\tilde{H}_{BS}(t),[ \tilde{H}_{BS}(t-\tau),\tilde{\rho}_{S}(t)\otimes\tilde{\rho}_{B}]]\;d\tau. \tag{28}\]
The above equation is the Redfield QME, which holds with or without time-dependent driving [31]. Below, we will simplify the QME when applying it to our model Hamiltonian with Floquet driving.
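To make the structure of this Markovian equation concrete, consider the simplest static limit of the model above: a single spinless level at energy \(\varepsilon\) coupled to one wide-band lead, for which the occupation relaxes toward the Fermi function at rate \(\Gamma\), \(d\langle n\rangle/dt=\Gamma[f(\varepsilon,\mu)-\langle n\rangle]\). This well-known limiting form is quoted here only as an illustration of the lowest-order QME; a minimal sketch of its propagation reads:

```python
import numpy as np

def fermi(e, mu, kT):
    return 1.0 / (np.exp((e - mu) / kT) + 1.0)

eps, mu, kT, Gamma = 0.2, 0.0, 0.05, 0.01       # assumed parameters (arbitrary units)
n, dt = 0.0, 0.1                                # level initially empty
for step in range(5000):
    n += dt * Gamma * (fermi(eps, mu, kT) - n)  # d<n>/dt = Gamma * (f - <n>)
# n now approaches fermi(eps, mu, kT) ~ 0.018
```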
#### 2.3.1 Redfield QME with Floquet driving: Hilbert space QME
We now go back to the Schrodinger picture, where the density operator in Schrodinger picture reads
\[\tilde{\rho}_{S}(t)=\tilde{U}_{S}(t)\tilde{\rho}_{S}(t)\tilde{U}_{S}^{\dagger }(t).\]
Here \(\tilde{U}_{S}(t)=\mathcal{F}e^{-i\int_{0}^{t}\tilde{H}_{S}(s)ds}\) is the time evolution operator for the system. Also, we define \(\tilde{U}_{S}(t,t-\tau)=\mathcal{F}e^{-i\int_{t-\tau}^{t}\tilde{H}_{S}(s)ds}\)and \(\tilde{H}_{SB}(t,\tau)=\tilde{U}_{B}(\tau)\tilde{U}_{S}(t,t-\tau)\tilde{H}_{ BS}\tilde{U}_{S}^{\dagger}(t,t-\tau)\tilde{U}_{B}^{\dagger}(\tau).\) With these definitions, the Redfield QME in the Schrodinger picture is given by
\[\frac{\partial\hat{\rho}_{S}(t)}{\partial t}=-i[\hat{H}_{S}(t),\hat{\rho}_{S}(t)]-\int_{0}^{\infty}Tr_{B}\big{[}\hat{H}_{BS},\big{[}\tilde{H}_{SB}(t,\tau),\hat{\rho}_{S}(t)\otimes\hat{\rho}_{B}\big{]}\big{]}\,d\tau. \tag{29}\]
Expanding the double commutator and tracing over the bath, terms of the following form appear:
\[\int_{0}^{\infty}\sum_{\alpha,\beta,k}V_{k\alpha}V_{k\beta}^{*}f(\varepsilon_{k},\mu)e^{i\varepsilon_{k}\tau}\,\hat{c}_{\alpha}\,\tilde{c}_{\beta}^{\dagger}(t,\tau)\,\hat{\rho}_{S}(t)\;d\tau. \tag{31}\]
In the above, we have used the fact that \(Tr_{B}\left(\hat{e}_{k}^{\dagger}\hat{e}_{k}\hat{\rho}_{B}(\mu)\right)=f(\varepsilon_{k},\mu)\). At this point, we use the results from Section 2.2 for the time evolution, such that we can simplify \(\tilde{c}_{\beta}^{\dagger}(t,\tau)\) as
\[\tilde{c}_{\beta}^{\dagger}(t,\tau)=\sum_{n,m}\langle n|e^{-i\hat{H}_{S}^{F}\tau}\,\hat{\mathbb{C}}_{\beta,00}^{\dagger}e^{i\hat{H}_{S}^{F}\tau}|m\rangle e^{i(n-m)\omega t}. \tag{32}\]
Here, we have defined \(\hat{\mathbb{C}}_{\beta,00}^{\dagger}=|0\rangle\hat{c}_{\beta}^{\dagger}\langle 0|\) in the Floquet M-like representation. To proceed, we then use the eigenbasis of the Floquet system Hamiltonian, \(Y^{\dagger}\,\hat{H}_{S}^{F}\,Y=\hat{\Lambda}^{F}\). In particular, we will employ the matrix element
\[\left(Y^{\dagger}e^{-i\hat{H}_{S}^{F}\tau}\hat{\mathbb{C}}_{\beta,00}^{\dagger}e^{i\hat{H}_{S}^{F}\tau}Y\right)_{NM}=\left(e^{-i\hat{\Lambda}^{F}\tau}Y^{\dagger}\hat{\mathbb{C}}_{\beta,00}^{\dagger}Ye^{i\hat{\Lambda}^{F}\tau}\right)_{NM}=\left(Y^{\dagger}\hat{\mathbb{C}}_{\beta,00}^{\dagger}Y\right)_{NM}e^{-i(\varepsilon_{N}-\varepsilon_{M})\tau}. \tag{33}\]
With these matrix elements, we can evaluate the integral
\[\begin{split}&\sum_{k}V_{k\alpha}V_{k\beta}^{*}f(\varepsilon_{k},\mu)\int_{0}^{\infty}e^{i\varepsilon_{k}\tau}\big{(}Y^{\dagger}e^{-i\hat{H}_{S}^{F}\tau}\hat{\mathbb{C}}_{\beta,00}^{\dagger}e^{i\hat{H}_{S}^{F}\tau}Y\big{)}_{NM}d\tau\\ &=\pi\sum_{k}V_{k\alpha}V_{k\beta}^{*}f(\varepsilon_{k},\mu)\big{(}Y^{\dagger}\hat{\mathbb{C}}_{\beta,00}^{\dagger}Y\big{)}_{NM}\delta(\varepsilon_{k}-\Omega_{NM}^{F})=\frac{\Gamma_{\alpha\beta}}{2}\big{(}\hat{\tilde{\mathbb{C}}}_{\beta,00}^{\dagger}\big{)}_{NM}\end{split} \tag{34}\]
Here, we have defined \(\big{(}\hat{\tilde{\mathbb{C}}}_{\beta,00}^{\dagger}\big{)}_{NM}=\left(Y^{\dagger}\hat{\mathbb{C}}_{\beta,00}^{\dagger}Y\right)_{NM}f(\Omega_{NM}^{F},\mu)\), where \(\Omega_{NM}^{F}=\varepsilon_{N}-\varepsilon_{M}\) is the difference of the Floquet quasi-energies.
\[\frac{\partial\hat{\rho}_{S}^{F}(t)}{\partial t}=-i[\hat{H}_{S}^{F},\hat{\rho}_{S}^{F}(t)]-\int_{0}^{\infty}d\tau\,Tr_{B}\left([\hat{H}_{SB}^{F},[\tilde{H}_{SB}^{F}(\tau),\hat{\rho}_{S}^{F}(t)\otimes\hat{\rho}_{B}]]\right). \tag{36}\]
Similar to the case of the non-Floquet QCLE, we proceed to perform the partial Wigner transformation (PWT) with respect to the nuclear coordinates on both sides of the above LvN equation. For an operator \(\hat{A}\), the PWT is given by
\[\hat{A}_{W}(\mathbf{R},\mathbf{P},t)\equiv\left(\hat{A}\big{(}\mathbf{\hat{R}},\mathbf{\hat{P}},t\big{)}\right)_{W}\equiv\left(2\pi\right)^{-3N}\int d\mathbf{Y}\,e^{-i\mathbf{P}\cdot\mathbf{Y}}\left\langle\mathbf{R}-\frac{\mathbf{Y}}{2}\left|\hat{A}\big{(}\mathbf{\hat{R}},\mathbf{\hat{P}},t\big{)}\right|\mathbf{R}+\frac{\mathbf{Y}}{2}\right\rangle. \tag{39}\]
Here, \(\mathbf{Y}\) is an auxiliary nuclear coordinate vector. The direct consequence of the PWT is that the output operator is now a function of the nuclear coordinate and momentum vectors \((\mathbf{R},\mathbf{P})\) rather than of the operators \((\mathbf{\hat{R}},\mathbf{\hat{P}})\). Notice that when performing the PWT of a product, we have the following relationship:
\[\left(\hat{A}\hat{B}\right)_{W}(\mathbf{R},\mathbf{P},t)=\left(\hat{A}\right)_{W}(\mathbf{R},\mathbf{P},t)\ e^{-i\overleftrightarrow{\Lambda}/2}\left(\hat{B}\right)_{W}(\mathbf{R},\mathbf{P},t). \tag{40}\]
Here, \(\overleftrightarrow{\Lambda}\) denotes the Poisson bracket operator:
\[\overleftrightarrow{\Lambda}=\sum_{\alpha}\frac{\overleftarrow{\partial}}{\partial P_{\alpha}}\frac{\overrightarrow{\partial}}{\partial R_{\alpha}}-\frac{\overleftarrow{\partial}}{\partial R_{\alpha}}\frac{\overrightarrow{\partial}}{\partial P_{\alpha}}. \tag{41}\]
\(e^{-i\overleftrightarrow{\Lambda}/2}\) is called the Wigner-Moyal operator. If we keep only the first order of its Taylor expansion, \(e^{-i\overleftrightarrow{\Lambda}/2}\approx 1-i\overleftrightarrow{\Lambda}/2\), we will then arrive at the Floquet QCLE:
\[\frac{\partial\hat{\rho}^{F}_{\,W}(t)}{\partial t}=-i\big{[}\hat{H}^{F}_{\,W}(\mathbf{R},\mathbf{P}),\hat{\rho}^{F}_{\,W}(t)\big{]}-\sum_{\alpha}\frac{P_{\alpha}}{M_{\alpha}}\frac{\partial\hat{\rho}^{F}_{\,W}(t)}{\partial R_{\alpha}}+\frac{1}{2}\sum_{\alpha}\left\{\frac{\partial\hat{H}^{F}_{\,W}(\mathbf{R},\mathbf{P})}{\partial R_{\alpha}},\frac{\partial\hat{\rho}^{F}_{\,W}(t)}{\partial P_{\alpha}}\right\}. \tag{42}\]
Here, \(\{\,\}\) denotes the anti-commutator. We remind the reader that in the above Floquet QCLE, the density and Hamiltonian operators are now in the Floquet space.
The FQCLE provides a platform for developing trajectory-based nonadiabatic approaches, such as Floquet surface hopping. Below, we show that we can further map the FQCLE into a Fokker-Planck equation when considering a manifold of electronic states. The resulting electronic friction represents the first-order correction to the BO approximation.
### Floquet electronic friction
Now, we show that we can map the FQCLE into a Fokker-Planck equation when the timescales of the electrons and of the driving are much faster than the nuclear motion. The derivation follows closely the case of non-Floquet driving [36, 37]. The nuclear density is given by \(\mathcal{A}(\mathbf{R},\mathbf{P},t)=\frac{1}{N}Tr_{e,F}\left(\hat{\rho}^{F}_{\,W}(\mathbf{R},\mathbf{P},t)\right)\), where we trace over all electronic DoFs as well as all Floquet replicas, \(N\) is the number of Floquet replicas, and \(Tr_{e,F}\) represents the trace over the Fourier space and the many-body electronic states. To first order beyond the BO approximation, we find that the nuclear motion follows the FP equation:
\[\frac{\partial\mathcal{A}(t)}{\partial t}=-\sum_{\alpha}\frac{P_{\alpha}}{M_{ \alpha}}\frac{\partial\mathcal{A}(t)}{\partial R^{\alpha}}-\sum_{\alpha}F_{ \alpha}\frac{\partial\mathcal{A}(t)}{\partial P_{\alpha}}+\sum_{\alpha,\beta} \gamma_{\alpha\beta}\frac{\partial}{\partial P_{\alpha}}\bigg{(}\frac{P_{ \beta}}{M_{\beta}}\mathcal{A}(t)\bigg{)}+\sum_{\alpha,\beta}\overline{D}_{ \alpha\beta}^{s}\frac{\partial^{2}\mathcal{A}(t)}{\partial P_{\alpha} \partial P_{\beta}}, \tag{43}\]
Here \(F_{\alpha}\) is the mean force, \(\gamma_{\alpha\beta}\) is the frictional tensor that represents the first order correction to the BO approximation. \(\overline{\mathcal{D}}_{\alpha\beta}^{s}\) is the correlation function of the random force. In particular, we can express the frictional force as
\[\gamma_{\alpha\beta}=-\int\limits_{0}^{\infty}dt\,Tr_{e,F}\bigg{(}\frac{\partial\hat{H}^{F}}{\partial R_{\alpha}}\,e^{-i\hat{H}^{F}t}\,\frac{\partial\hat{\rho}_{ss}^{F}}{\partial R_{\beta}}\,e^{i\hat{H}^{F}t}\bigg{)}, \tag{44}\]
Here \(\hat{\rho}_{ss}^{F}\) is the steady-state Floquet electronic density operator. Thus, we have arrived at a general form for electronic friction in which all contributions are in their Floquet representations.
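For intuition, the Fokker-Planck dynamics of Eq. (43) can also be propagated as Langevin trajectories driven by the mean force, the friction tensor and the random force. The following minimal one-dimensional Python sketch illustrates this correspondence; the functions `F`, `gamma` and `Dbar` are hypothetical placeholders to be supplied by the user, and the simple Euler-Maruyama discretization is an illustrative choice rather than the method used in this work.

```python
import numpy as np

def langevin_step(R, P, dt, M, F, gamma, Dbar, rng):
    """One Euler-Maruyama step of the 1D Langevin dynamics implied by the
    Fokker-Planck equation (43): mean force F(R), electronic friction
    gamma(R), and random-force correlation Dbar(R)."""
    xi = rng.normal() * np.sqrt(2.0 * Dbar(R) * dt)      # random-force impulse
    P_new = P + (F(R) - gamma(R) * P / M) * dt + xi      # drift + friction + noise
    R_new = R + P_new / M * dt                           # position update
    return R_new, P_new

# Example usage with toy placeholder functions (purely illustrative).
rng = np.random.default_rng(0)
R, P, M, dt = 0.0, 0.0, 1.0, 1e-2
for _ in range(10000):
    R, P = langevin_step(R, P, dt, M,
                         F=lambda r: -r,                 # harmonic mean force
                         gamma=lambda r: 0.1,            # constant friction
                         Dbar=lambda r: 0.05,            # constant noise strength
                         rng=rng)
```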
### Numerical Results: Electronic friction
When applying the results for electronic friction to the model Hamiltonian in Equation (22), we can express the Floquet electronic friction in terms of Floquet Green's functions
\[\gamma_{\alpha\beta}=-\frac{1}{N}\int\limits_{-\infty}^{\infty}\frac{d\epsilon }{2\pi}Tr_{m,F}\bigg{(}\frac{\partial h^{F}}{\partial R_{\alpha}}\frac{ \partial G^{F}}{\partial\epsilon}\frac{\partial h^{F}}{\partial R_{\beta}}G^{ F}{}^{<}(\epsilon)\bigg{)}+h.c. \tag{45}\]
Here \(Tr_{m,F}\) refers to the trace over the one-body and Floquet DoFs, and \(h^{F}\) is the matrix of the System's Hamiltonian in the Floquet space. \(G^{F}{}^{R}(\epsilon)\) and \(G^{F}{}^{<}(\epsilon)\) are the Floquet retarded Green's function and the Floquet lesser Green's function, respectively. For more details, see Ref. [35].
To evaluate the electronic friction given in terms of the Floquet Green's functions, we choose a two-level electronic system coupled to two nuclear DoFs. Our model Hamiltonian for such a system is given by:
\[h(\mathbf{R},t)=\begin{pmatrix}x+\Delta&Ay+B\cos(\omega t)\\ Ay+B\cos(\omega t)&-x-\Delta\end{pmatrix} \tag{46}\]
By taking the nuclear potential, \(U(\mathbf{R})\), to be harmonic oscillators in both the \(x\) and \(y\) dimensions, the diagonal elements of the total Hamiltonian represent two parabolas shifted in the \(x\) direction with a driving force of \(2\Delta\). The off-diagonal coupling has two terms: the first depends on the nuclear displacement in the \(y\) direction with strength \(A\), and the second is an external time-periodic driving which represents the dipole approximation for a monochromatic light-matter interaction, where \(B\) and \(\omega\) represent the intensity and frequency of the light. We have assumed that one level is connected to the left lead and the other to the right lead, such that \(\Sigma_{k}^{R}\) is a diagonal matrix with \(\Gamma_{11}=\Gamma_{22}=1\). In addition, we have employed the following parameters: \(A=1\), \(B=2\), \(\Delta=3\), \(\beta=2\), and \(\mu=0\). Consistent with previous works [38], in the non-Floquet limit (\(B=0\)), the antisymmetric part of the electronic friction, \(\gamma_{xy}^{A}=\left(\gamma_{xy}-\gamma_{yx}\right)/2\), vanishes, whereas the symmetric part, \(\gamma_{xy}^{S}=\left(\gamma_{xy}+\gamma_{yx}\right)/2\), is non-vanishing. In Figure 1 (a) and (b), we have plotted the different electronic friction tensors, calculated from Equation (45), for \(\omega=1\) and \(\omega=0.5\), respectively. As one can see from these figures, for \(B\neq 0\), \(\gamma_{xy}^{A}\) is non-vanishing and its strength increases as \(B\) increases.
The nonvanishing \(\gamma_{xy}^{A}\) results in a Lorentz-like force. Previously, we have shown that when including spin-orbit couplings, the friction tensor exhibits nonvanishing anti-symmetric parts [38]. Here, we show that light-matter interactions can also introduce a nonvanishing \(\gamma_{xy}^{A}\). Such a Lorentz-like force is missing in the BO approximation and may have a strong effect on nuclear dynamics [39]. In particular, we have shown that such a Lorentz-like force can result in spin selectivity [39].
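As a small illustration of the model setup, the sketch below constructs the driven two-level Hamiltonian of Equation (46) and splits a given \(2\times 2\) friction tensor into its symmetric and antisymmetric parts; computing the friction tensor itself requires the Floquet Green's functions of Equation (45) and is not attempted here. The default parameter values mirror those quoted above, but the code is only a schematic aid.

```python
import numpy as np

def h_model(x, y, t, A=1.0, B=2.0, Delta=3.0, omega=1.0):
    """Driven two-level electronic Hamiltonian of Eq. (46)."""
    off = A * y + B * np.cos(omega * t)          # displacement + light-matter coupling
    return np.array([[x + Delta, off],
                     [off,      -x - Delta]])

def friction_parts(gamma):
    """Symmetric and antisymmetric xy components of a 2x2 friction tensor."""
    gs = 0.5 * (gamma[0, 1] + gamma[1, 0])       # gamma_xy^S
    ga = 0.5 * (gamma[0, 1] - gamma[1, 0])       # gamma_xy^A (Lorentz-like part)
    return gs, ga
```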
## 4 Floquet Nonadiabatic Dynamics in Open Quantum System
### Quantum-Classical Liouville Equation - Classical Master Equation (QCLE-CME)
When considering nonadiabatic dynamics in open quantum systems with Floquet driving, we can embed the FQCLE into a master equation. The procedure starts from the Floquet QME (Equation (36)) for the system and then takes the partial Wigner transformation over the nuclear DoFs. Finally, we arrive at the FQCLE-CME:
\[\frac{\partial}{\partial t}\hat{\rho}_{S,W}^{F}(\mathbf{R},\mathbf{P},t)=\{\hat{H}_{S,W}^{F}(\mathbf{R},\mathbf{P}),\hat{\rho}_{S,W}^{F}(\mathbf{R},\mathbf{P},t)\}_{a}-\mathrm{i}\big{[}\hat{H}_{S,W}^{F}(\mathbf{R},\mathbf{P}),\hat{\rho}_{S,W}^{F}(\mathbf{R},\mathbf{P},t)\big{]}-\widehat{\mathcal{D}}_{W}^{F}\big{(}\hat{\rho}_{S,W}^{F}(\mathbf{R},\mathbf{P},t)\big{)} \tag{47}\]
Here, different from the FQCLE, we have an additional dissipator, \(\widehat{\mathcal{D}}_{W}^{F}(\hat{\rho}_{S,W}^{F}(\mathbf{R},\mathbf{P},t))\), given by
\[\widehat{\mathcal{D}}_{W}^{F}(\hat{\rho}_{S,W}^{F}(\mathbf{R},\mathbf{P},t))=-\int_{0}^{\infty}d\tau\,Tr_{B}\left([\hat{H}_{SB,W}^{F},[\tilde{H}_{SB,W}^{F}(\tau),\hat{\rho}_{S,W}^{F}(\mathbf{R},\mathbf{P},t)\otimes\hat{\rho}_{B}]]\right) \tag{48}\]
Notice that this dissipator is also partially Wigner transformed over the nuclear DoFs. We can further simplify this dissipator when considering the model Hamiltonian presented in Equation (22); the result is similar to Equation (37), except that the operators are now position dependent. The FQCLE-CME is a general approach to deal with Floquet nonadiabatic dynamics in open quantum systems. Below, we apply this approach to a simple model.
### Anderson-Holstein model with Floquet driving: a case study
We now illustrate our methods with the Anderson-Holstein model under Floquet driving. In this model, there is only one level in the System, which couples to an electronic bath as well as to a nuclear DoF. In addition, we apply a periodic driving to the electronic level, such that the System Hamiltonian reads
\[\hat{H}_{S}(t)=\big{(}E_{d}+A\sin(\Omega t)\big{)}\hat{c}^{\dagger}\hat{c}+g( \hat{a}^{\dagger}+\hat{a})\hat{c}^{\dagger}\hat{c}+\hbar\omega\left(\hat{a}^{ \dagger}\hat{a}+\frac{1}{2}\right)=\hat{H}_{mol}+A\sin(\Omega t)\hat{c}^{ \dagger}\hat{c}. \tag{50}\]
The System Hamiltonian \(\hat{H}_{S}(t)\) is separated into a time-independent part (\(\hat{H}_{mol}\)) and a time-dependent part (\(A\sin(\Omega t)\hat{c}^{\dagger}\hat{c}\)). The level with energy \(E_{d}\) is coupled to an oscillator of frequency \(\omega\), with a coupling strength tuned by \(g\). The amplitude \(A\) determines the strength of the time-periodic driving and \(\Omega\) is the driving frequency. In this special case, we can derive a classical master equation to describe the coupled electron-nuclear dynamics of the system [40].
\[\begin{split}&\frac{\partial\hat{P}_{0}(x,p)}{\partial t}=\frac{ \partial\hat{H}_{0}(x,p)}{\partial x}\frac{\partial\hat{P}_{0}(x,p)}{\partial p }-\frac{\partial\hat{H}_{0}(x,p)}{\partial p}\frac{\partial\hat{P}_{0}(x,p)}{ \partial x}-\gamma_{0\to 1}\hat{P}_{0}(x,p)+\gamma_{1\to 0}\hat{P}_{1}(x,p)\\ &\frac{\partial\hat{P}_{1}(x,p)}{\partial t}=\frac{\partial\hat{ H}_{1}(x,p)}{\partial x}\frac{\partial\hat{P}_{1}(x,p)}{\partial p}-\frac{ \partial\hat{H}_{1}(x,p)}{\partial p}\frac{\partial\hat{P}_{1}(x,p)}{\partial x }+\gamma_{0\to 1}\hat{P}_{0}(x,p)-\gamma_{1\to 0}\hat{P}_{1}(x,p)\end{split} \tag{51}\]
Here \(\hat{P}_{0}(x,p)\) and \(\hat{P}_{1}(x,p)\) are the phase-space densities with the level being empty or occupied, respectively. \(\gamma_{0\to 1}\) and \(\gamma_{1\to 0}\) are the hopping rates, given by:
\[\begin{split}&\gamma_{0\to 1}=\Gamma\tilde{f}(\Delta V)\\ &\gamma_{1\to 0}=\Gamma\left(1-\tilde{f}(\Delta V)\right)\\ &\Delta V=\hat{H}_{1}-\hat{H}_{0}=\sqrt{2}gx+E_{d}\end{split} \tag{52}\]
Here, \(\tilde{f}(\Delta V)\) refers to the Fermi function modified with Floquet replicas (the Bessel-modified Fermi function). In general, \(\tilde{f}(\Delta V)\) is time dependent [40]. However, in the results below, invoking the time-scale separation, we employ the following time-averaged Bessel-modified Fermi function:
\[\tilde{f}(\epsilon)=\sum_{n}\left[J_{n}\!\left(\frac{A}{\Omega}\right)\right]^{2}f(\epsilon-n\Omega) \tag{53}\]
where \(f\) is the Fermi function. The rate equations given in Equation (51) constitute our CME for the one-level case, where Floquet theory is applied only within the dissipator term. We employ the following surface hopping (SH) algorithm to solve the above F-CME approximately (a minimal sketch of this loop is given after the list). The algorithm is as follows:
1. Prepare the initial \(x\) and \(p\). Choose the active potential surface (say the state 0 surface).
2. Evolve \(x\) and \(p\) according to classical EOM for a time interval \(\Delta t\) on the active surface.
3. Calculate the hopping rate out of the active surface (\(\gamma_{0\to 1}\) or \(\gamma_{1\to 0}\)) based on \(\tilde{f}(\Delta V)\) and generate a random number \(\zeta\in[0,1]\). If \(\zeta<\gamma\,\Delta t\), the nuclei switch to the other surface without momentum rescaling. Otherwise, the nuclei stay on the same surface.
4. Repeat steps 2 and 3 for the desired number of time steps.
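The following Python sketch illustrates one realisation of this F-CME-SH loop for the driven Anderson-Holstein model, using the time-averaged Bessel-modified Fermi function of Equation (53). The parameter values, the time step and the simple symplectic-Euler propagation are illustrative choices only and are not the settings or the integrator used to produce Figure 2.

```python
import numpy as np
from scipy.special import jv

# Illustrative parameters (loosely following the caption of Figure 2)
hw, g, Gamma, kT = 0.003, 0.0075, 0.01, 0.01   # nuclear freq., coupling, hybridization, temperature
Ed = g**2 / hw
A, Omega = 0.01, 0.003                         # driving amplitude and frequency (assumed values)
dt, nsteps = 1.0, 200000

def fermi(e):
    return 1.0 / (1.0 + np.exp(np.clip(e / kT, -500, 500)))

def fermi_bessel(e, nmax=20):
    """Time-averaged Bessel-modified Fermi function, Eq. (53)."""
    n = np.arange(-nmax, nmax + 1)
    return float(np.sum(jv(n, A / Omega) ** 2 * fermi(e - n * Omega)))

def dHdx(state, x):
    """Gradient of the active diabatic surface H_state(x, p) = hw/2 (x^2+p^2) [+ sqrt(2) g x + Ed]."""
    return hw * x + (np.sqrt(2) * g if state == 1 else 0.0)

rng = np.random.default_rng(1)
x, p, state = rng.normal(), rng.normal(), 0    # step 1: initial conditions, active surface 0
for _ in range(nsteps):
    # step 2: classical propagation on the active surface (symplectic Euler)
    p -= dHdx(state, x) * dt
    x += hw * p * dt
    # step 3: stochastic hop with the Bessel-modified rates of Eq. (52)
    dV = np.sqrt(2) * g * x + Ed
    rate = Gamma * fermi_bessel(dV) if state == 0 else Gamma * (1.0 - fermi_bessel(dV))
    if rng.random() < rate * dt:
        state = 1 - state                      # hop without momentum rescaling
```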
Our Floquet surface hopping approach thus amounts to solving the Bessel-CME using SH (F-CME-SH). In Figure 2, we present the transient electronic and nuclear dynamics of the one-level case. In particular, we plot the nuclear kinetic energy, denoted by \(E_{k}\), and the expectation value of the number operator, denoted by \(N\), while the electrochemical potential is kept at zero. Two methods are compared against each other, namely the above F-CME-SH and the QCLE associated with the Hilbert-space QME (H-QME). Results are given for three different values of the driving amplitude. Note that when the driving amplitude is small (\(A=0.001\)), the nuclear kinetic energy can reach the equilibrium value of \(E_{k}=1/2kT\); however, when increasing the driving amplitude (\(A=0.01,0.02\)), the effective temperature differs from the equilibrium temperature. This indicates that the system cannot reach thermodynamic equilibrium under high-intensity driving. We have realized that employing the time-averaged Bessel-modified Fermi function in Figure 2 (F-CME-SH) allows the dynamics to reach the correct steady state at longer times, while the dynamics will miss the small oscillation features associated with the external driving (not visible in Figure 2).
## Conclusion
In this overview, we present a Floquet formalism to treat dynamics, with a focus on nonadiabatic dynamics in open quantum systems. We begin by introducing the Floquet LvN formulation, then two versions of the Floquet QME, and the Floquet QCLE. We also present an electronic friction approach, which is the first-order correction to the BO approximation, with all electronic dynamics and Floquet driving incorporated into the frictional force. Finally, we sketch out the Floquet QCLE-CME to treat nonadiabatic dynamics in open quantum systems in general. As an example, we consider a driven Anderson-Holstein model. We show that a Floquet surface hopping approach is applicable to treat the electronic and nuclear dynamics with periodic driving.
## Funding Information
This work is supported by National Science Foundation of China under grant number NSFC No. 22273075.
Figure 2: Transient dynamics: (a)-(c) the impurity electronic population as a function of time with different driving frequencies \(\Omega\) and driving amplitudes \(A\). (d)-(e) the nuclear relaxation as a function of time associated with (a)-(c). Here, we set the nuclear frequency as \(\hbar\omega=0.003\), electron-phonon coupling \(g=0.0075\), \(kT=0.01\), \(\Gamma=0.01\), and \(E_{d}=g^{2}/\hbar\omega\). Note that the effective temperature is different from the equilibrium temperature (\(E_{k}=1/2kT\)) when the driving amplitude is large (\(A=0.01,0.02\)). |
2310.10276 | A Low Complexity Block-oriented Functional Link Adaptive Filtering
Algorithm | The high computation complexity of nonlinear adaptive filtering algorithms
poses significant challenges at the hardware implementation level. In order to
tackle the computational complexity problem, this paper proposes a novel
block-oriented functional link adaptive filter (BO-FLAF) to model memoryless
nonlinear systems. Through theoretical complexity analysis, we show that the
proposed Hammerstein BO trigonometric FLAF (HBO-TFLAF) has 47% lesser
multiplications than the original TFLAF for a filter order of 1024. Moreover,
the HBO-TFLAF exhibits a faster convergence rate and achieved 3-5 dB lesser
steady-state mean square error (MSE) compared to the original TFLAF for a
memoryless nonlinear system identification task. | Pavankumar Ganjimala, Subrahmanyam Mula | 2023-10-16T11:05:18Z | http://arxiv.org/abs/2310.10276v1 | # A Low Complexity Block-oriented Functional Link Adaptive Filtering Algorithm
###### Abstract
The high computation complexity of nonlinear adaptive filtering algorithms poses significant challenges at the hardware implementation level. In order to tackle the computational complexity problem, this paper proposes a novel block-oriented functional link adaptive filter (BO-FLAF) to model memoryless nonlinear systems. Through theoretical complexity analysis, we show that the proposed Hammerstein BO trigonometric FLAF (HBO-TFLAF) has \(47\%\) lesser multiplications than the original TFLAF for a filter order of \(1024\). Moreover, the HBO-TFLAF exhibits a faster convergence rate and achieved \(3-5\) dB lesser steady-state mean square error (MSE) compared to the original TFLAF for a memoryless nonlinear system identification task.
Nonlinear adaptive filters, system identification, functional link adaptive filters, spline adaptive filters
## I Introduction
Conventionally, signal processing applications such as active noise control (ANC), echo cancellation, channel equalization etc. have relied on linear adaptive filters to model unknown systems due to their ease of design, implementation and lesser computational complexity [1]. However, in reality, a lot of practical problems involve nonlinear elements [2, 3, 4] which cannot be modelled by the conventional methods. One way to address this issue is to use nonlinear adaptive filters which incorporate nonlinearity in their model and thus improve the modelling accuracy.
Different structures and algorithms for nonlinear adaptive filters have been proposed in literature. Some of the important classes of algorithms are kernel adaptive filters (KAF) [5], functional link adaptive filters (FLAF) [6] and spline adaptive filters (SAF) [7]. Out of these the FLAF belongs to a class of filters known as linear-in-the-parameters (LIP) filters and has been widely used in online learning applications [8, 9]. Most of the recent works improving FLAF [10, 11, 12, 13] focus on improving the performance of the basic FLAF. There are only a few efforts which tackle the high computational complexity issue of nonlinear adaptive filters [14, 15, 16]. Compared to the FLAF, the SAF belongs to the block-oriented class of filters. Block-oriented filters are composed of a cascade of purely linear (L) or nonlinear (NL) modelling blocks and can be of Hammerstein type (NL-L) or Wiener type (L-NL). SAF has been proposed as a low-complexity solution for various applications [17, 18, 19]. The low complexity of SAF can be attributed to its block-oriented structure [20]. With that motivation, we propose a novel low complexity block-oriented FLAF structure in this paper. In the next section, we give a brief overview of the FLAF.
## II Functional link adaptive filter
The block diagram of an \(M\)-tap FLAF is shown in Fig. 1. As depicted in the figure, input samples \(x(n)\) are fed to a tapped delay line and each of the \(M\) outputs is expanded into \(Q\) samples through a functional expansion block (FEB), where \(Q\) is the number of functional links. The FEB generates the nonlinear terms which help model the nonlinear system. The functional links are a set of \(Q\) functions \(\Phi=[\varphi_{0}(\cdot),\varphi_{1}(\cdot),\ldots\varphi_{Q-1}(\cdot)]\). The output expansion of an input sample \(x(n-i)\) is given as \(\mathbf{\bar{g}}_{i}(n)\in\mathbb{R}^{Q}\), where
\[\mathbf{\bar{g}}_{i}(n)=\begin{bmatrix}\varphi_{0}(x(n-i))\\ \varphi_{1}(x(n-i))\\ \vdots\\ \varphi_{Q-1}(x(n-i))\end{bmatrix} \tag{1}\]
\(i=0,\ldots M-1\) and \(n\) is the time index. The entire input buffer after expansion results in the expansion buffer \(\mathbf{g}(n)\), given as
\[\mathbf{g}(n)=[\mathbf{\bar{g}}_{0}(n)^{T},\mathbf{\bar{g}}_{1}(n)^{T},\ldots \mathbf{\bar{g}}_{M-1}(n)^{T}]^{T} \tag{2}\]
There are various choices for FEB basis functions such as Chebyshev, Legendre and trigonometric polynomials. Among these, the trigonometric polynomial function is widely used in literature as it provides the best compact representation and is computationally inexpensive [6]. We consider the trigonometric polynomial in this paper, for which the function expansion of the \(i^{\text{th}}\) sample in the tapped delay line is given by
\[\varphi_{j}(x(n-i))=\begin{cases}x(n-i),&j=0\\ \sin(p\pi x(n-i)),&j=2p-1\\ \cos(p\pi x(n-i)),&j=2p\end{cases} \tag{3}\]
where \(p=1\ldots P\), and \(j=0\ldots Q-1\). \(P\) is the expansion order which denotes the amount of nonlinearity required and \(Q=2P+1\).
If we consider the weight vector \(\mathbf{w}(n)=[w_{0}(n),w_{1}(n),\ldots w_{MQ-1}(n)]\), then the output of the adaptive filter is given as \(y(n)=\mathbf{w}(n)^{T}\mathbf{g}(n)\) and the error \(e(n)=d(n)-y(n)\), where \(d(n)\) is the desired response. The weight update equation using the stochastic gradient rule is given as
\[\mathbf{w}(n+1)=\mathbf{w}(n)+\mu e(n)\mathbf{g}(n) \tag{4}\]
where \(\mu\) is the step size parameter.
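For concreteness, the following Python sketch implements the memoryless TFLAF recursion just described: each sample in the tapped delay line is expanded with the trigonometric functional links of Eq. (3) and the weights follow the LMS-style update of Eq. (4). The filter order, expansion order and step size are illustrative values, not the settings used later in the experiments.

```python
import numpy as np

def trig_expand(x, P):
    """Trigonometric functional links of Eq. (3) for one input sample."""
    feats = [x]
    for p in range(1, P + 1):
        feats += [np.sin(p * np.pi * x), np.cos(p * np.pi * x)]
    return feats                                   # length Q = 2P + 1

def tflaf(x, d, M=8, P=3, mu=1e-3):
    """Memoryless TFLAF identification; returns the a-priori error signal."""
    Q = 2 * P + 1
    w = np.zeros(M * Q)
    buf = np.zeros(M)                              # tapped delay line of input samples
    err = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        # expansion buffer g(n) of Eq. (2): expand every delayed sample
        g = np.concatenate([trig_expand(xi, P) for xi in buf])
        y = w @ g                                  # filter output
        e = d[n] - y
        err[n] = e
        w += mu * e * g                            # LMS-style update of Eq. (4)
    return err
```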
In this section, we described a memoryless FLAF, which models instantaneous nonlinearity. An FLAF with memory can be realized using additional functional expansions [6] which can model dynamic nonlinearity, i.e nonlinear functions which depend on the time instant.
## III Block-Oriented Functional Link Adaptive Filter
In this section, we propose two modifications to the FLAF to make it computationally more efficient. First, we describe an improvisation coined as the single \(\Phi\) FLAF. Then, we describe the novel block-oriented FLAF algorithm.
### _Single \(\Phi\) Flaf_
Instead of delaying the input sample \(x(n)\) in a tapped delay line and performing the \(M\)\(\Phi\) operations in parallel as shown in the original FLAF in Fig. 1, we can perform a single \(\Phi\) operation first on the input sample \(x(n)\) and then delay the expanded samples in \(\mathbf{g}_{0}(n)\) as shown in Fig. 2(a). This idea is already present in nonlinear ANC literature [14, 21] and we extend this to the FLAF and coin this as the single \(\Phi\) architecture. The \(M\)\(\Phi\) blocks are identical and time-invariant. Moreover, they just perform a point-wise mapping of the input sample. Hence, this rearrangement does not affect the generation of \(\mathbf{g}(n)\). In this topology, \(\mathbf{g}(n)=[\bar{\mathbf{g}}_{0}(n)^{T},\bar{\mathbf{g}}_{0}(n-1)^{T}, \ldots\bar{\mathbf{g}}_{0}(n-M+1)^{T}]^{T}\), where \(\bar{\mathbf{g}}_{0}(n-i)\) is the expanded vector \(\bar{\mathbf{g}}_{0}(n)\) delayed by \(i\) samples. In this topology, the number of \(\Phi\) modules reduces from \(M\) to just \(1\) and is independent of filter order \(M\), but the number of delay elements increases from \(M\) to \(MQ\). Generally, delay elements (or memory) are relatively less expensive compared to \(\Phi\) block which involves polynomial function evaluations.
### _Block-Oriented Functional Link Adaptive Filter_
Here we further reduce the computational complexity of the single \(\Phi\) FLAF by designing a block-oriented Hammerstein structure inspired from the SAF structure [22]. We split the parallel \(MQ\) tap filter in FLAF into two separate serial filters of order \(M\) and \(Q\) respectively and create a novel Hammerstein type filter called the Hammerstein block-oriented FLAF (HBO-FLAF) which is shown in Fig. 2(b). In the first stage the \(Q\) samples from the FEB are adaptively combined by the adaptive weights \(\mathbf{a}(n)=[a_{0}(n),a_{1}(n),\ldots a_{Q-1}(n)]^{T}\) to identify the nonlinear component of the system. The output of the first stage \(s(n)\) is given to a tapped delay line of length \(M\) and the outputs are then adaptively combined by the second filter with weights \(\mathbf{w}(n)=[w_{bias}(n),w_{0}(n),w_{1}(n),\ldots w_{M-1}(n)]^{T}\) to identify the linear component. Here \(w_{bias}\) is an optional adaptive bias quantity added to obtain improved filter performance.
The output of the nonlinear filter \(s(n)\) is given by
\[s(n)=\mathbf{a}(n)^{T}\mathbf{g}(n) \tag{5}\]
and \(\mathbf{g}(n)=\bar{\mathbf{g}}_{0}(n)=[\varphi_{0}(x(n)),\varphi_{1}(x(n)), \ldots\varphi_{Q-1}(x(n))]^{T}\)
The final output \(y(n)\) is given by
\[y(n)=\mathbf{w}(n)^{T}\mathbf{s}(n) \tag{6}\]
where, \(\mathbf{s}(n)=[1,s(n),s(n-1),\ldots s(n-M+1)]^{T}\) is a buffer consisting of the first stage outputs. The estimation error is defined as \(e(n)=d(n)-y(n)\), and \(d(n)\) is the desired signal.
The HBO-FLAF has two weight updates to be performed. The weight update equation is derived here using the standard stochastic gradient descent method [23]. The weight update for the linear filter weights \(\mathbf{w}(n+1)\) using the MSE cost function can be derived as follows
\[\mathbf{w}(n+1) =\mathbf{w}(n)-\frac{\partial(e(n)^{2})}{\partial\mathbf{w}(n)}\] \[=\mathbf{w}(n)-2e(n)\frac{\partial(d(n)-\mathbf{w}(n)^{T}\mathbf{s}(n))}{\partial\mathbf{w}(n)}\]
Simplifying the derivative and replacing the constant terms with a learning rate parameter \(\mu_{w}\)[7], we get the final weight update equation for the linear weights \(\mathbf{w}(n)\) as
\[\mathbf{w}(n+1)=\mathbf{w}(n)+\mu_{w}e(n)\mathbf{s}(n) \tag{7}\]
Similarly, the weight update for the nonlinear filter weights \(\mathbf{a}(n+1)\) can be derived using stochastic gradient descent as
\[\mathbf{a}(n+1)=\mathbf{a}(n)-2e(n)\frac{\partial(d(n)-\mathbf{w}(n)^{T}\mathbf{s}(n))}{\partial\mathbf{a}(n)} \tag{8}\]
Rewriting \(\mathbf{s}(n)\) in terms of \(\mathbf{a}(n)\)
\[\mathbf{s}(n)^{T}=[\mathbf{a}(n)^{T}\mathbf{g}(n),\mathbf{a}(n-1)^{T}\mathbf{ g}(n-1),\ldots\mathbf{a}(n-M+1)^{T}\mathbf{g}(n-M+1)]\]
For a small step size, we can assume that the weights \(\mathbf{a}(n)\) change very little over each iteration [23], i.e \(\mathbf{a}(n)\approx\mathbf{a}(n-1)\approx\ldots\mathbf{a}(n-M+1)\). Then \(\mathbf{s}(n)\) can be written as
\[\mathbf{s}(n)^{T} =[\mathbf{a}(n)^{T}\mathbf{g}(n),\ldots\mathbf{a}(n)^{T}\mathbf{ g}(n-M+1)]\] \[=\mathbf{a}(n)^{T}[\mathbf{g}(n),\mathbf{g}(n-1),\ldots\mathbf{ g}(n-M+1)]\] \[=\mathbf{a}(n)^{T}\mathbf{G}^{T} \tag{9}\]
Fig. 1: Original FLAF
where \(\mathbf{G}^{T}\) is a \(Q\times M\) matrix defined as
\[\mathbf{G}^{T}=[\mathbf{g}(n),\mathbf{g}(n-1),\ldots\mathbf{g}(n-M+1)] \tag{10}\]
Substituting (9) in (8) we get
\[\mathbf{a}(n+1)=\mathbf{a}(n)-2e(n)\frac{\partial(d(n)-\mathbf{w}(n)^{T}\mathbf{G}\cdot\mathbf{a}(n))}{\partial\mathbf{a}(n)}\]
Similar to (7), we simplify the equation for the nonlinear weights \(\mathbf{a}(n)\) using learning rate parameter \(\mu_{a}\) as
\[\mathbf{a}(n+1)=\mathbf{a}(n)+\mu_{a}e(n)\mathbf{w}(n)^{T}\mathbf{G} \tag{11}\]
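A compact Python sketch of the resulting two-stage recursion, Eqs. (5)-(7) and (11), is given below. The orders, step sizes and the exclusion of the bias weight from the nonlinear-stage update are illustrative implementation choices rather than prescriptions from the paper.

```python
import numpy as np

def hbo_tflaf(x, d, M=8, P=3, mu_w=6e-4, mu_a=6e-4):
    """Memoryless HBO-TFLAF identification; returns the a-priori error signal."""
    Q = 2 * P + 1
    a = np.zeros(Q); a[0] = 1.0                    # nonlinear stage starts as a linear mapping
    w = np.zeros(M + 1)                            # first entry is the optional bias weight
    G = np.zeros((M, Q))                           # rows: g(n), g(n-1), ..., g(n-M+1)
    s_hist = np.zeros(M)                           # [s(n), s(n-1), ..., s(n-M+1)]
    err = np.zeros(len(x))
    for n in range(len(x)):
        g = np.array([x[n]] + [f(p * np.pi * x[n]) for p in range(1, P + 1)
                               for f in (np.sin, np.cos)])   # Eq. (3) expansion of x(n)
        G = np.vstack([g, G[:-1]])                 # shift the expansion buffer, Eq. (10)
        s_hist = np.roll(s_hist, 1)
        s_hist[0] = a @ g                          # Eq. (5): nonlinear-stage output
        s_vec = np.concatenate(([1.0], s_hist))    # bias input followed by the s buffer
        y = w @ s_vec                              # Eq. (6)
        e = d[n] - y
        err[n] = e
        w += mu_w * e * s_vec                      # Eq. (7)
        a += mu_a * e * (w[1:] @ G)                # Eq. (11); bias weight excluded (assumption)
    return err
```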
In this section, we presented a memoryless version of the HBO-TFLAF. Similar to FLAF with memory, HBO-FLAF is capable of modelling nonlinear systems with memory by adding a delay line to \(\mathbf{g}(n)\). Additional adaptive weights are added to the delay line samples and their cross terms.
### _Theoretical computational complexity_
The computational complexity of an algorithm can be measured in terms of the number of arithmetic operations performed in an iteration. The computational complexity of the least mean squares (LMS), trigonometric FLAF (TFLAF), single \(\Phi\) TFLAF, HBO-TFLAF and Hammerstein SAF (HSAF) algorithms are shown in Table I. In this study, we only consider the memoryless version of the TFLAF algorithm. Here we define \(M\) as the length of the linear filter, \(Q_{t}\) and \(P_{t}\) as the \(Q\) and \(P\) parameters in TFLAF, HBO-TFLAF and \(Q_{h}=P_{h}+1\), where \(P_{h}\) is the order of nonlinear function in HSAF.
The TFLAF algorithm has \(Q_{t}\) times more multiplications and additions compared to the LMS algorithm. In addition, TFLAF also has \(M(Q_{t}-1)\) trigonometric function evaluations and \(MP_{t}\) multiplications in the FEB. The single \(\Phi\) structure reduces the number of trigonometric function evaluations to \(Q_{t}-1\) and correspondingly the \(MP_{t}\) multiplications in the FEB to \(P_{t}\) multiplications. The HBO-TFLAF further reduces the computation by splitting the \(MQ_{t}\) tap filter into two filters of order \(M\) and \(Q_{t}\) respectively (\(M+Q_{t}\) feed-forward taps). Although the HBO-TFLAF has only \(M+Q_{t}\) forward taps, the additional vector-matrix multiplication between \(\mathbf{w}(n)\) and \(\mathbf{G}\) in (11) contributes to \(MQ_{t}\) multiplications and \((M-1)Q_{t}\) additions. The computational complexity of HSAF is similar to the HBO-TFLAF and has \(Q_{h}\) instead of \(Q_{t}\) taps.
To obtain a rough estimate of the computational complexity, we consider an acoustic echo cancellation application and assume \(M=1024\) and \(Q_{t}=7\), \(Q_{h}=4\). Substituting the values, the number of multiplications for TFLAF, single \(\Phi\) TFLAF, HBO-TFLAF and HSAF are \(17409,14340,9235\) and \(6171\) respectively and the number of additions are \(14336,14336,9222\) and \(5143\) respectively. The number of multiplications and additions is in the order: TFLAF \(>\) Single \(\Phi\) TFLAF \(>\) HBO-TFLAF \(>\) HSAF. Although the HBO-TFLAF is similar in structure to HSAF, since \(Q_{t}>Q_{h}\), the HBO-TFLAF has higher computation. HBO-TFLAF will have lower computation than the HSAF when \(Q_{t}<Q_{h}\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Algorithm & Number of & Number of & Trig. \\ & multipliers & adders & functions \\ \hline \hline LMS & \(2M+1\) & \(2M\) & \(\mathbf{0}\) \\ \hline TFLAF & \(2MQ_{t}+MP_{t}+1\) & \(2MQ_{t}\) & \(M(Q_{t}-1)\) \\ \hline Single \(\Phi\) TFLAF & \(2MQ_{t}+P_{t}+1\) & \(2MQ_{t}\) & \(Q_{t}-1\) \\ \hline HBO-TFLAF & \(\begin{array}{c}2(M+Q_{t})+P_{t}\\ +MQ_{t}+1\end{array}\) & \(2M+Q_{t}+1\) & \(Q_{t}-1\) \\ \hline HSAF & \(\begin{array}{c}2(M+Q_{h})+P_{h}\\ +MQ_{h}\\ +(P_{h}+1)^{2}\end{array}\) & \(\begin{array}{c}2M+Q_{h}+\\ MP_{h}+Q_{h}+\\ 3+P_{h}(P_{h}+1)\end{array}\) & \(0\) \\ \hline \end{tabular}
\end{table} TABLE I: Theoretical computational complexity
Fig. 2: Proposed FLAF modifications
## IV MSE Performance Analysis
In order to assess the performance of the TFLAF, single \(\Phi\) TFLAF and HBO-TFLAF, we perform nonlinear system identification on three different systems in MATLAB. We also compare performance with the HSAF and the LMS algorithm which is a linear model. The performance metric chosen here is the mean square error (MSE) metric given by \(10\cdot\log_{10}(E[e(n)^{2}])\). The MSE learning curves are generated from averaging \(500\) independent Monte Carlo experiments followed by a \(20\) tap moving average filter to improve the visibility of the curves.
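The post-processing of the learning curves can be summarised by the following small Python helper (the simulations themselves were run in MATLAB); it assumes the per-run error signals are stacked row-wise in an array.

```python
import numpy as np

def mse_learning_curve(err_runs, smooth=20):
    """err_runs: (n_runs, n_samples) array of error signals e(n) from independent
    Monte Carlo experiments.  Returns the ensemble-averaged MSE in dB, smoothed
    with a moving-average filter."""
    mse_db = 10.0 * np.log10(np.mean(err_runs ** 2, axis=0))   # 10*log10(E[e(n)^2])
    kernel = np.ones(smooth) / smooth                          # 20-tap moving average
    return np.convolve(mse_db, kernel, mode="valid")
```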
Input to the systems is white Gaussian with a variance of \(0.25\) and noise variance is taken to be \(0.01\). The filter weights in LMS, RFF-KLMS, TFLAF and linear filter weights in HSAF and HBO-TFLAF are initialized to zeros. The nonlinear adaptive weight vector \(\mathbf{a}(n)\) in HBO-TFLAF is initialized as \(\delta(n)\), where the first weight, which corresponds to the \(x(n)\) output is one and the rest are zeros. This ensures that the HBO-TFLAF structure behaves as a linear filter initially. The bias weight (\(w_{bias}\)) is also used for the TFLAF and HBO-TFLAF. The HSAF nonlinear weight vector denoted by \(\mathbf{q}(n)\) is set as \([-3.0,-2.75,\ldots 2.75,3]\) (for \(\Delta x=0.25\)) resembling a linear mapping. In HSAF, for all the experiments \(Q_{h}=4\) and the matrix \(C\) is chosen as the CR-spline basis matrix (HSAF parameter definitions in [22]). The parameters chosen for the various filters in the three systems are shown in Table II. The inputs and the weight initialization are kept the same for all the experiments.
#### IV-A Memoryless systems
First, we consider the system identification of Hammerstein-type memoryless nonlinear systems. In system \(1\), an asymmetric loudspeaker distortion system [6] is considered. The nonlinear system is cascaded with a linear system which is a reverberation effect generated using the image source method (ISM) with a reverberation time of \(T_{60}=60\) ms and a sampling rate of \(8\) kHz and is truncated after \(512\) samples.
The MSE learning curves for system \(1\) are shown in Fig. 3(a). The nonlinear adaptive filters perform better compared to the LMS algorithm. The HBO-TFLAF, in spite of having lesser complexity than the TFLAF, performs better than TFLAF/Single \(\Phi\) TFLAF and obtains around \(3\) dB lower MSE. The performance of HBO-TFLAF and HSAF is almost similar, with HSAF obtaining around \(1\) dB lower MSE here. We can also see that the learning curves of single \(\Phi\) TFLAF and the TFLAF overlap and this proves that the single \(\Phi\) TFLAF structure is functionally identical to the TFLAF. In the subsequent plots, we omit single \(\Phi\) TFLAF learning curves.
Next, we perform the system identification of another memoryless system, system \(2\) which is a soft-clipping distortion system as described in [8], where we consider \(\zeta=0.35\). The linear system is kept the same as before. The MSE learning curves for system \(2\) are shown in Fig. 3(b). The performance is similar to system \(1\), and the HBO-TFLAF outperforms all the algorithms and obtains \(5\) dB lower MSE compared to the HSAF in this case.
From the results of the two memoryless systems, we conclude that the block-oriented algorithms HBO-TFLAF/HSAF are the best choice to model memoryless systems. Since TFLAF has more degrees of freedom, it tends to over-fit and thus has lesser performance than HBO-TFLAF/HSAF. The lesser number of taps in HBO-TFLAF/HSAF also allows it to converge to the solution faster. Although HSAF and HBO-TFLAF are similar in structure, we observe that HBO-TFLAF has a global weight adaptation behaviour as opposed to the local adaptation of HSAF. The HSAF would therefore need samples spread across the input range to learn the entire nonlinearity, whereas the HBO-TFLAF would not need such a broad range of training samples to learn the entire nonlinearity.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Parameter} & Memoryless & Memoryless & System \\ & system \(1\) & system \(2\) & with memory \\ \hline \hline \(M\) & \(512\) & \(512\) & \(8\) \\ \hline \(\mu_{LMS}\) & \(0.002\) & \(0.004\) & \(0.003\) \\ \hline \(\mu_{TFLAF}\) & \(0.0005\) & \(0.0004\) & \(0.0002\) \\ \hline \(\mu_{w-BOTFLAF}\) & \(0.0006\) & \(0.0006\) & \(0.0004\) \\ \hline \(\mu_{a-BOTFLAF}\) & \(0.0006\) & \(0.0011\) & \(0.0005\) \\ \hline \(Q_{t}\) & \(9\) & \(9\) & \(7\) \\ \hline \(\mu_{w-HSAF}\) & \(0.0015\) & \(0.0018\) & \(0.002\) \\ \hline \(\mu_{a-HSAF}\) & \(0.0015\) & \(0.0075\) & \(0.005\) \\ \hline \(\Delta_{x}\) & \(0.25\) & \(0.21\) & \(0.25\) \\ \hline \end{tabular}
\end{table} TABLE II: Experiment parameters
Fig. 3: MSE learning curves
From the experiments, we observe that the HBO-TFLAF models symmetric and regularly shaped nonlinearities better, whereas HSAF is better at modelling irregular nonlinearities. A detailed study comparing the HSAF and HBO-TFLAF is beyond the scope of this paper and is necessary to reach more solid conclusions. Additionally, we note that the HBO-TFLAF is slightly easier to set up and tune as it has only \(4\) hyperparameters as opposed to \(5\) in HSAF.
#### IV-B System with memory
Next, to understand the limitation of block-oriented systems, we test their MSE performance in identifying a nonlinear system with memory. We consider the following system whose output \(y(n)\), given an input \(x(n)\) is
\[y(n)=0.6\sin(\pi x(n))^{3}+0.2\cos(2\pi x(n-2))^{2}\\ -0.1\cos(4\pi x(n-4))+1.125 \tag{12}\]
This type of system depends nonlinearly on an input sample and its past samples (or has memory) but does not contain cross terms between the current and past samples. We point out here that the definition of a system with memory is slightly different in [6], which defines it as systems which have cross terms between samples in memory.
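For reference, this test system is straightforward to reproduce; the short Python sketch below generates its output for a given input sequence, assuming zero initial conditions for the unavailable past samples.

```python
import numpy as np

def memory_system(x):
    """Output of the nonlinear system with memory of Eq. (12)."""
    xp = np.concatenate([np.zeros(4), np.asarray(x)])   # zero-padded past samples (assumption)
    n = np.arange(len(x)) + 4
    return (0.6 * np.sin(np.pi * xp[n]) ** 3
            + 0.2 * np.cos(2 * np.pi * xp[n - 2]) ** 2
            - 0.1 * np.cos(4 * np.pi * xp[n - 4])
            + 1.125)
```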
The MSE learning curves for this system are shown in Fig. 3(c). The TFLAF has the lowest MSE for this system. TFLAF can adapt the weights corresponding to the functional expansion of each input sample in the memory buffer separately, whereas the block-oriented models combine the functional expansion before the memory buffer and therefore will not be able to model systems with memory. As mentioned earlier, it is possible to add a delay line and additional filter weights to improve HBO-TFLAF performance for memory systems. With added filter weights, the computational advantage of HBO-TFLAF comes down. The single \(\Phi\) TFLAF becomes more computationally efficient for larger orders of memory.
## V Conclusion and Future work
In this paper, a novel nonlinear adaptive filter structure coined HBO-FLAF is proposed. HBO-FLAF breaks down the original FLAF into two stages and performs the nonlinear and linear modelling separately. Through theoretical analysis and MATLAB simulations, we demonstrated that HBO-TFLAF not only has a lower computational complexity compared to TFLAF but also achieves faster convergence and a lower steady-state MSE compared to TFLAF for a memoryless nonlinear system identification task. Therefore, HBO-FLAF can be a potential candidate for real-time VLSI applications involving nonlinear system identification. We also introduced the single \(\Phi\) FLAF structure, which has lower computational complexity than TFLAF while maintaining identical performance. In the future, we plan to implement the proposed algorithms in hardware and study the power, performance and area trade-offs.
## Acknowledgement
Pavankumar Ganjimala's work was supported by the Prime Minister's Research Fellowship (PMRF), Ministry of Education (MoE), Government of India.
|
2304.09860 | NRTS: A Client-Server architecture for supporting data recording,
transmission and evaluation of multidisciplinary teams during the neonatal
resuscitation simulation scenario | In this technical report, we describe Neonatal Resuscitation Training
Simulator (NRTS), an Android mobile app designed to support medical experts to
input, transmit and record data during a High-Fidelity Simulation course for
neonatal resuscitation. This mobile app allows one to automatically send all
the recorded data from "Neonatal Intensive Care Unit" (NICU) of Casale
Monferrato Children's Hospital, (Italy) to a server located at the Department
of Science and Technological Innovation (DiSIT), University of Piemonte
Orientale (Italy). Finally, the medical instructor can view statistics on a
simulation exercise that may be used during the de-briefing phase for the
evaluation of multidisciplinary teams involved in the simulation scenarios. | Manuel Striani | 2023-04-12T07:36:40Z | http://arxiv.org/abs/2304.09860v1 | NRTS: A Client-Server architecture for supporting data recording, transmission and evaluation of multidisciplinary teams during the neonatal resuscitation simulation scenario
###### Abstract
In this technical report, we describe Neonatal Resuscitation Training Simulator (NRTS), an Android mobile app designed to support medical experts to input, transmit and record data during a High-Fidelity Simulation course for neonatal resuscitation. This mobile app allows one to automatically send all the recorded data from "Neonatal Intensive Care Unit" (NICU) of Casale Monferrato Children's Hospital, (Italy) to a server located at the Department of Science and Technological Innovation (DiSIT), University of Piemonte Orientale (Italy). Finally, the medical instructor can view statistics on a simulation exercise that may be used during the de-briefing phase for the evaluation of multidisciplinary teams involved in the simulation scenarios.
Cloud architecture, Delivery room, High Fidelity Simulation, Intensive Care Unit, Mobile app, Neonatal resuscitation, Simulation based training
## I Introduction
The delivery room is a complex and dynamic environment, and although most new-borns present a normal adaptation to birth, about 5-10% of new-borns need some form of help in the transition from intra- to extra-uterine life, 3-5% require mask ventilation, and less than 1% require advanced resuscitation measures such as intubation, chest compressions, ventilation and medication.
Therefore, in view of the relative infrequency of new-born babies requiring extensive resuscitative interventions and the complexity and dynamism of neonatal emergencies, the delivery room can become a high-risk place for patients and very stressful for the care staff, who must be fully aware of the various steps and be trained and prepared to skillfully perform the procedures and work effectively in a team. In 2004, the Joint Commission published a sentinel event alert on neonatal outcomes indicating ineffective communication within the neonatal care team as the leading cause of death or permanent disability.
Increasing efforts to reduce medical errors and risks associated with physicians' performance of high-risk procedures on patients have led to the increasing use of simulation in emergency medicine training. Before 2010, the Neonatal Resuscitation Training Programme (NRP) of the American Academy of Pediatrics (AAP) was focused on acquiring the technical knowledge and skills necessary for neonatal resuscitation.
In recent years, there has been a gradual transformation to simulation-based training aimed at acquiring and maintaining the technical skills for teamwork in the management of critical patients.
By combining teaching and risk analysis in a safe environment for healthcare workers, medical simulation can decrease errors caused by human factors and create a more effective and safer patient treatment. It can only be a strategic and innovative tool in the context of multidisciplinary lifelong learning if there is a basis for the implementation of standardized high-quality programs of verifiable effectiveness. Simulation is an ideal educational methodology for teaching cognitive, technical and behavioural skills and hopefully improving early life care. In fact, it allows training that goes from a simple and linear learning of the resuscitation algorithm to a more complex one, such as behavioural skills and teamwork through debriefing, an essential part of the simulation during which, immediately after the scenarios, an analysis of the procedures is performed in search of potential weaknesses and strategies for the improvement, prevention and containment of errors.
The continuous training of healthcare workers becomes essential in the context of emergency and urgent care, where professionals are called upon to work in critical and highly stressful situations, and where the possession of manual dexterity in the execution of complex manoeuvres, prompt decision-making and the ability to work in a team become vital for the success of interventions.
This technical report presents a Client-Server architecture for using a mobile app, the **N**eonatal **R**esuscitation **T**raining **S**imulator (**NRTS**), designed and developed to support the correctness of neonatal resuscitation during the simulation scenario and the subsequent evaluation of multidisciplinary
teams in the debriefing phase.
## II Neonatal High-Fidelity Simulation Center
The neonatal HFS (High-Fidelity Simulation) centre at "Santo Spirito" Children's Hospital of Casale Monferrato (Italy) consists of a scenario room (shown in Figure 13) with a DR (delivery room) or neonatal intensive care bed, a director's room and classrooms for theoretical lessons and de-briefings. The simulation room was modified specifically to have the appearance of a real DR or neonatal intensive care bed. Participants had all the necessary materials for attending to a new-born available, according to the latest American Heart Association (AHA) and Academy of Paediatrics (AAP) recommendations, including: an oxygen-air blender; a T-piece resuscitator (Neo-Tee\({}^{\textregistered}\) Infant T-Piece Resuscitator, Mercury Medical, Clearwater, Florida, USA); respiratory support devices for invasive (Leoni Plus, Heinen Lowenstein, Rheinland-Pfalz, Germany) and non-invasive (Instant Flow, CareFusion, Hoechberg, Germany) ventilation.
During scenario performance, it was possible to see the patient's vital signs and laboratory or instrumental tests on a specific monitor. Scenarios were performed by using new-born simulators (SimNewB and Premature Anne1). SimNewB is a highly realistic neonatal simulator with one size and weight of a new-born baby girl delivered at term with approximately 3.5 Kg body weight. Premature Anne is a highly realistic 25-week preterm infant simulator with an approximate weight of 0.6 Kg. A recording system with three high-definition cameras and an ambient microphone located in the resuscitation warmer was used.
Footnote 1: [https://laerdal.com/us/products/simulation-training/obstetrics-pedatrics/premature-anne/](https://laerdal.com/us/products/simulation-training/obstetrics-pedatrics/premature-anne/)
The HFS courses were performed over a time period of 1 month between June and July, organised into two separate sections including theoretical and video lessons, TS (technical skills) exercises, scenario performances, de-briefings and psychological tests. At the beginning of each section, forty-three practitioners (obstetricians, neonatologists, physicians, midwives, and paediatric nurses) were admitted to the training High Fidelity Simulation centre, grouped into multidisciplinary teams of 3-4 persons, and underwent simulator suite orientation (familiarization).
## III Client side: the NRTS mobile app
The client-side architecture offers the **N**eonatal **R**esuscitation **T**raining **S**imulator (**NRTS**), a mobile app which enables a medical expert to input, transmit and record data during the simulation phase of neonatal resuscitation. This mobile app automatically sends all the recorded data to a server in the form of process traces (i.e., the sequences of activities executed on the Premature Anne at the hospital during the simulation scenarios).
In particular, **NRTS** supports the medical instructor during the simulation scenarios, allowing them to record and automatically send all the process traces to a server physically located at the Department of Science and Technological Innovation (DiSIT), University of Piemonte Orientale.
Furthermore, the medical instructor may also enter other significant patient and simulation data like body temperature or other kinds of qualitative notes and comments on teams involved in the simulation session.
The mobile app **NRTS** has been developed and designed to run on smartphone devices with a minimum operating system of Android v.13 "Tiramisu".
The mobile application presents a first step (initial activity), shown in Figure 1, that offers the option of starting a new simulation session.
After selecting the option "Start a new simulation session", the medical instructor, as shown in Figure 2, is able to visualize the checklist of pre-resuscitation actions to be performed, according to clinical guidelines, and then to indicate whether or not these pre-requisites have been performed before starting to record the actual simulation scenario.

Figure 1: By using this first activity, the instructor can start a new simulation session for neonatal resuscitation. In addition, the initial activity contains a link to the web page that shows the instructor the results of the calculation of specific statistics, such as the average of the distances from the gold-standard trace (guideline) that were obtained by each group that participated in a simulation session.
It is possible to start recording the simulation session (in the form of a process trace). When the physician starts a new simulation scenario, the app executes a timer (shown in Figure 2) within an activity that contains a set of actions to be executed during the first minute of simulation.
The duration of each individual action in the process trace is calculated by touching the same button corresponding to the same action twice, to start and stop the timer.
Figure 3 shows a dialog window in which the team leader can insert the value of body temperature. The temperature ranges are decided by the domain expert, selecting temperatures from 35.5\({}^{\circ}\)C to 39.5\({}^{\circ}\)C and over 40\({}^{\circ}\)C.
After performing the first minute actions, the operator switches to the screen with the actions to be undertaken within the second minute (shown in Figure 6). From this activity, it is possible to return to the previous activity or to proceed with the actions to be performed in the subsequent phases of the simulation scenario. Figure 5 shows the activity with the actions to be performed within two minutes from the start of the simulation scenario.
Figure 7 shows the activity with the actions to be performed no later than three minutes after the start of the simulation scenario. If necessary, or if the team has finished the simulation scenario, the operator may move forward to the next activity by selecting "End". In this way, a process trace containing all information about the sequence of actions and their durations is sent to the server.
Figures 8 and 9 show that, before sending the process trace to the server, through the speech-to-text function, the medical instructor in charge of the simulation may enter additional notes and comments, containing information that will later be used during the de-briefing phase.
By clicking on "Send Data", all the data collected during the simulation scenario are saved into JSON objects and sent to the server via an HTTP POST. After receiving the data (process trace), the server calculates the semantic distance [1] from the gold-standard trace (provided by the medical expert) and sends the result to the mobile application. Figure 10 shows the distance value calculated by the server for the process trace recorded during the current simulation scenario compared to the gold-standard trace (guideline, based on the new-born resuscitation algorithm).
In order to view statistics for all groups that participated in the simulation session, the medical instructor is able to insert the session ID (provided by the **NRTS** mobile app) in the text box available on the server-side web page shown in Figure 11. The activity shown in Figure 10 offers the possibility of automatically copying the session ID and pasting it to obtain the results of all the groups that are involved in the simulation scenarios.

Figure 2: Second activity: check-list of the pre-resuscitation phase

Figure 3: Activity containing actions to be performed within the first minute of the simulation session
Moreover, the medical instructor is allowed to analyse the groups using statistics (shown in Figures 11 and 12) during the de-briefing phase, in order to evaluate problems and errors that arose in the simulation sessions in which the personnel were involved, grouped into multidisciplinary teams of 3-4 persons (obstetrician, neonatologist, midwife, paediatric nurse) after undergoing simulator suite orientation (familiarization).
## IV Server side
The backend consists of three parts: a web server (provided by Apache HTTP or Microsoft Windows Server), a Java module, and a database (provided by the NoSQL DBMS MongoDB2). The server component of the architecture (shown in Figure 13) receives the JSON-LD data sent by the app and stores them in the NoSQL database MongoDB; a minimal sketch of such a receiving endpoint is given after the following list. In our current implementation, the server has the following properties:
Footnote 2: [https://www.mongodb.com/](https://www.mongodb.com/)
* **Ubuntu edition**: 20.04 LTS Focal Fossa
* **Processor**: Virtual Machine 2VCPU
* **Installed memory (RAM)**: 4 GB
* **Storage memory**: 40 GB
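As an illustration of the server's role, the following Python sketch shows how a trace-receiving endpoint backed by MongoDB could look. It is only a schematic stand-in: the production backend is a Java module, the route names, JSON field names and gold-standard action list below are hypothetical, and the naive distance function merely stands in for the Semantic Trace Edit Distance computed by TExC.

```python
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["nrts"]        # local MongoDB (placeholder URI)

# Hypothetical gold-standard action names; the real guideline follows the AHA/AAP algorithm.
GOLD = ["dry_and_stimulate", "position_airway", "ventilation", "spo2_monitoring"]

def naive_distance(actions):
    """Crude stand-in for the Semantic Trace Edit Distance used by TExC:
    fraction of gold-standard actions missing from the recorded trace."""
    names = {a.get("name") for a in actions}
    return sum(1 for g in GOLD if g not in names) / len(GOLD)

@app.route("/traces", methods=["POST"])
def receive_trace():
    trace = request.get_json()                               # actions, durations, notes, session id
    trace["distance"] = naive_distance(trace.get("actions", []))
    db.traces.insert_one(trace)                               # persist the trace and its score
    return jsonify({"sessionId": trace.get("sessionId"), "distance": trace["distance"]})

@app.route("/statistics/<session_id>")
def statistics(session_id):
    dists = [t["distance"] for t in db.traces.find({"sessionId": session_id})]
    return jsonify({"meanDistance": sum(dists) / len(dists) if dists else None})
```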
## V The workflow architecture
Figure 13 shows the HFS simulation centre. The simulation room is composed of the medical instructor and the healthcare personnel, grouped into multidisciplinary teams of 3-4 persons (obstetrician, neonatologist, midwife, paediatric nurse).
In the simulation room, there is a neonatal cradle with a doll representing the premature new-born infant, on which a series of manoeuvres must be performed by the team. These manoeuvres are included in a pre-defined set of actions provided by the domain-expert neonatologist, each of which is characterised by a start and end timestamp. In this way, each action has a specific duration. Other actions, such as fever measurement, are characterised by a range [35.5\({}^{\circ}\)C - 39.5\({}^{\circ}\)C] with step 0.5 provided by the medical expert. Moreover, the information on the timing and duration of each action was formalized following the neonatal resuscitation guideline [2, 3, 4], which describes the steps to be followed during the process of assessing and resuscitating a new-born baby.
1. During the simulation scenarios, the medical instructor uses the **NRTS** mobile app, through a minimalist interface, to store process traces (sequences of actions performed by the teams involved) and parameters recorded during an action (such as the body temperature value \(T\) and saturation levels \(SpO_{2}\), shown in Figure 4).
2. The process trace recorded by the **NRTS** mobile app is sent to the server. The **TExC** module (shown in Figure 13) calculates the distance, expressed as a value in the range [0,1], between the current process trace and the gold-standard trace (a minimal client-side sketch of this exchange is given after this list).

Figure 7: Activity containing actions to be performed within three minutes of the simulation session.

Figure 8: Via this activity, the medical instructor can send to the server additional notes or terminate the current simulation scenario

Figure 9: Activity for Speech-To-Text notes
3. Finally, the server stores the process traces in a NoSQL database using the DBMS MongoDB and sends the score back to the **NRTS** mobile app, which makes it available to the medical instructor by displaying it in the final activity (shown in Figure 10).
4. The Client-Server architecture thus allows the storage of the simulation process traces in the DBMS MongoDB and their comparison with the reference guideline for neonatal resuscitation, in order to establish the goodness and correctness of the simulation sessions and support the final evaluation of the groups in the de-briefing phase.
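The exchange in step 2 can be sketched, in a language-agnostic way, with the short Python snippet below; the server URL and JSON field names are hypothetical placeholders (the actual app is an Android client), and the snippet only illustrates the shape of the payload and of the reply.

```python
import time
import requests

SERVER = "http://example-disit-server:5000/traces"      # placeholder URL

def send_trace(session_id, actions, notes=""):
    """Post one simulation process trace as JSON and return the server's score."""
    payload = {
        "sessionId": session_id,
        "actions": [  # each action carries its name, start/end timestamps and optional value
            {"name": a["name"], "start": a["start"], "end": a["end"], "value": a.get("value")}
            for a in actions
        ],
        "notes": notes,
        "sentAt": time.time(),
    }
    reply = requests.post(SERVER, json=payload, timeout=10)
    reply.raise_for_status()
    return reply.json()["distance"]                      # semantic distance computed server-side
```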
## VI Evaluation of multi-disciplinary teams
The **NRTS** mobile app makes it possible to record all the individual simulation sessions performed by the teams of healthcare professionals participating in the training phase of the course. From the simulation sessions, it is possible to derive a collection of process traces, sequencing in each trace the actions performed by each individual team of participants and enriching them with temporal information (i.e., start and end timestamps).
In order to evaluate the learning outputs, it will then be possible to compare the traces generated by a team against a "Gold" trace, which represents the correct behaviour to be adopted in the simulated situation, and thus derive an ordering of the traces generated by the various teams, with respect to their distance from the optimal trace.
The sequences of actions constituting a single trace have both temporal attributes (i.e., start/end timestamps and, consequently, duration) and atemporal attributes (such as the name and the measured value: body temperature \(T\), oxygen \(O\) or saturation \(SpO_{2}\)).
In a medical emergency domain, temporal information is highly relevant and plays a central role. It may be important to penalize the fact that the same action may have different durations in different process traces. It must also be borne in mind that, for medico-legal purposes, abnormal durations of the same action, as well as delays, must be justified. In order to obtain an accurate calculation of the distance between two process traces, it is important to take into account all types of information that can be retrieved from the process traces themselves. Therefore the simulation scenarios were evaluated both through a metric of comparison between traces and with regard to times, i.e. durations were calculated as the average of the \(k\) traces over the \(n\) groups that participated. The comparison of the traces was performed using the **TExC** (**T**race **Ex**tractor and **C**omparison) module, which already exists as a Java library developed at the University of Piemonte Orientale - DiSIT, Department of Science and Technological Innovation - Computer Science institute. To calculate the distance between two process traces, we used the Semantic Trace Edit Distance [5][6].
Figure 10: Final activity showing the distance value between the process traces recorded during the current simulation scenario and the gold-standard trace (stored on the server). The distance of 0.7612 indicates a distance of 76% from the guideline (gold-standard trace).
Figure 11: By using the web page, the medical instructor can insert the session ID (provided by the **NRTS** mobile app) and visualize some statistics about the different groups involved in the simulation scenario.
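To make the idea concrete, the sketch below computes a simplified normalized distance in [0,1] between two traces represented as lists of (action, duration) pairs, combining action ordering with a duration penalty. It is only an illustration of the principle: it is **not** the Semantic Trace Edit Distance used by **TExC** [5][6], and the action names and the duration cap are invented for the example.

```python
# Illustrative sketch only (not the TExC implementation): a simplified normalized
# distance between two process traces given as lists of (action, duration_seconds).
def trace_distance(trace, gold, max_duration_gap=120.0):
    n, m = len(trace), len(gold)
    # dp[i][j] = minimal edit cost between trace[:i] and gold[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = float(i)
    for j in range(1, m + 1):
        dp[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            a_name, a_dur = trace[i - 1]
            g_name, g_dur = gold[j - 1]
            if a_name == g_name:
                # matched action: pay only a penalty proportional to the duration gap
                subst = min(abs(a_dur - g_dur) / max_duration_gap, 1.0)
            else:
                subst = 1.0
            dp[i][j] = min(dp[i - 1][j] + 1.0,          # missing action in gold alignment
                           dp[i][j - 1] + 1.0,          # extra / skipped action
                           dp[i - 1][j - 1] + subst)    # match or substitution
    return dp[n][m] / max(n, m, 1)                      # normalize to [0, 1]

gold = [("dry_and_stimulate", 30), ("ventilate", 60), ("check_heart_rate", 10)]
trace = [("dry_and_stimulate", 45), ("check_heart_rate", 12)]
print(round(trace_distance(trace, gold), 4))
```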
## VII How to download and install NRTS
Downloading apps from the official Google Play Store is normally a simple process, but errors like "Not available for your device" or "Incompatible with your device" can interrupt it. These messages mean that the app cannot be installed automatically, but they do not mean that it cannot be used on the device.
Finding and downloading the APK versions of those specific apps can help to install the applications manually. The most secure and reliable way to download and install apps from Google Play is to use a device running Android 13 ("Tiramisu", developed by Google and released to the public in August 2022) or a later version. The **NRTS** mobile app is available on Google Play.
To visualize the statistics, our server is reachable at the following address
[https://tinyurl.com/NRTStatistics](https://tinyurl.com/NRTStatistics)
## Acknowledgment
The authors would like to thank Mariachiara Martina Strozzi, medical doctor (MD) and director of the HFS centre at the "Santo Spirito" Children's Hospital of Casale Monferrato (Italy), and Roberta De Benedictis, master student of Computer Science at the Computer Science Institute of DiSIT - University of Piemonte Orientale (Italy), for their help and support during the design, development and testing of the **NRTS** mobile architecture.
This research is original and has received financial support from the University of Piemonte Orientale (DiSIT-UPO).
## Contacts
Should you have any questions about the **N**eonatal **R**esuscitation **T**raining **S**imulator (**NRTS**), do not hesitate to contact us. For general questions you can contact dott. Manuel Striani ([email protected]), Prof. Stefania Montani ([email protected]) or Prof. Massimo Canonico ([email protected]).
|
2303.02992 | Normalization flow | We propose a new approach to the theory of normal forms for Hamiltonian
systems near a non-resonant elliptic singular point. We consider the space of
all Hamiltonian functions with such an equilibrium position at the origin and
construct a differential equation in this space. Solutions of this equation
move Hamiltonian functions towards their normal forms. Shifts along the flow of
this equation correspond to canonical coordinate changes. So, we have a
continuous normalization procedure. The formal aspect of the theory presents no
difficulties. The analytic aspect and the problems of convergence of series are
as usual non-trivial. | Dmitry Treschev | 2023-03-06T09:39:46Z | http://arxiv.org/abs/2303.02992v2 | # Normalization flow
###### Abstract
We propose a new approach to the theory of normal forms for Hamiltonian systems near a non-resonant elliptic singular point. We consider the space of all Hamiltonian functions with such an equilibrium position at the origin and construct a differential equation in this space. Solutions of this equation move Hamiltonian functions towards their normal forms. Shifts along the flow of this equation correspond to canonical coordinate changes. So, we have a continuous normalization procedure. The formal aspect of the theory presents no difficulties. The analytic aspect and the problems of convergence of series are as usual non-trivial.
## 1 Introduction
Following Birkhoff [1], we consider the problem of reducing a Hamiltonian system in a neighborhood of an elliptic fixed point to the normal form. More precisely, we deal with the system
\[\dot{z}=i\partial_{\overline{z}}\widehat{H},\quad\dot{\overline{z}}=-i \partial_{z}\widehat{H},\qquad\widehat{H}=\widehat{H}(z,\overline{z}). \tag{1.1}\]
Here \(z=(z_{1},\ldots,z_{n})\) and \(\overline{z}=(\overline{z}_{1},\ldots,\overline{z}_{n})\) are independent coordinates on \(\mathbb{C}^{2n}\), the Hamiltonian function \(\widehat{H}\) has the form1
Footnote 1: Note that the variable \(\overline{z}_{j}\) is not necessarily complex conjugate of \(z_{j}\). Following Birkhoff, we use \(z\) and \(\overline{z}\) as _independent_ complex variables. Analogously the notation \(\overline{k}\) has nothing to do with complex conjugacy.
\[\widehat{H}=\sum_{k,\overline{k}\in\mathbb{Z}_{+}^{n}}\widehat{H}_{k,\overline{k}}z^{k}\overline{z}^{\overline{k}}=H_{2}+\widehat{H}_{\circ},\qquad\widehat{H}_{\circ}=O_{3}(z,\overline{z}),\quad H_{2}(z,\overline{z})=\sum_{j=1}^{n}\omega_{j}z_{j}\overline{z}_{j}, \tag{1.2}\]
\[z^{k}=z_{1}^{k_{1}}\cdots z_{n}^{k_{n}},\quad\overline{z}^{\overline{k}}= \overline{z}_{1}^{\overline{k}_{1}}\cdots\overline{z}_{n}^{\overline{k}_{n}}, \quad\mathbb{Z}_{+}^{n}=\{0,1,2,\ldots\}.\]
Here \(O_{m}(z,\overline{z})\) is a short form for \(O(|z|^{m}+|\overline{z}|^{m})\).
Reality condition for \(\widehat{H}\) is as follows:
\[\overline{\widehat{H}}_{k,\overline{k}}=\widehat{H}_{\overline{k},k}\quad \text{for any }k,\overline{k}\in\mathbb{Z}_{+}^{n}. \tag{1.3}\]
In general we do not assume that \(\widehat{H}\) is real in this sense.
We assume that the frequency vector \(\omega=(\omega_{1},\ldots,\omega_{n})\) is nonresonant:
\[\langle q,\omega\rangle\neq 0\quad\text{for any }q\in\mathbb{Z}^{n}\setminus\{0\},\]
where \(\langle\cdot,\cdot\rangle\) is the standard inner product on \(\mathbb{R}^{n}\). It is known that in this case there exists a formal canonical near-identity change of the variables
\[(z,\overline{z})\mapsto(Z,\overline{Z})=(z,\overline{z})+O_{2}(z,\overline{z}), \quad d\overline{z}\wedge dz=d\overline{Z}\wedge dZ \tag{1.4}\]
such that the new Hamiltonian function takes the normal form:
\[H(z,\overline{z})=N(Z_{1}\overline{Z}_{1},\ldots,Z_{n}\overline{Z}_{n}).\]
The change of variables (1.4) is formal because \(Z=Z(z,\overline{z})\), \(\overline{Z}=\overline{Z}(z,\overline{z})\), and \(N\) are in general formal power series. The problem of convergence/divergence of the normalizing transformation under the assumption of analyticity of \(\widehat{H}\) is central in the theory. A separate (harder) problem is convergence/divergence of the normal form. If the normalization converges then the system is locally completely integrable. Various versions of the inverse statement are proved in [16, 6, 3].
Another corollary from convergence of the normalization is Lyapunov stability of the equilibrium position. The papers [4, 7] contain examples of real-analytic Hamiltonians (1.2) such that the origin is Lyapunov unstable in the system (1.1).
Convergence of the normal form does not imply convergence of the normalization. But it has interesting dynamical consequences: the measure of the set covered by KAM-tori turns out to be noticeably bigger than in the case when the normal form diverges [8].
Probably one should expect that convergence is an exceptional phenomenon in any reasonable sense. At the moment this exceptionality is known in terms of Baire category [12] and \(\Gamma\)-capacity [10, 8]. Explicit examples of systems with divergent normal form can be found in [5, 17, 4].
The case of the "trivial" normal form
\[N(Z_{1}\overline{Z}_{1},\ldots,Z_{n}\overline{Z}_{n})=H_{2}(Z,\overline{Z}) \tag{1.5}\]
is special. According to [2, 11], see also [14], (1.5) combined with Diophantine conditions, imposed on \(\omega\), imply convergence of the normalization.
The change of variables (1.4) is traditionally constructed as a composition of an infinite sequence of coordinate changes which normalize the Hamiltonian function up to a remainder of a higher and higher degree [1]. In further works (see for example [13]) the change of coordinates (1.4) is represented as a formal series in \(z\) and \(\overline{z}\). Because of the symplecticity condition the corresponding formulas look cumbersome and implicit, but such an approach gives a possibility to control polynomial structure of the normalization in any finite degree w.r.t. the phase variables.
Let \(\mathcal{F}\) be the space of all power series in the variables \(z\) and \(\overline{z}\). Sometimes we refer to elements of \(\mathcal{F}\) as functions although they are only formal power series. In this paper we study a flow on the space2\(\mathcal{F}\). We call it the normalization flow \(\phi^{\delta}\), \(\delta\in\mathbb{R}\). Any shift
Footnote 2: More precisely on the subspace \(\mathcal{F}_{\circ}\subset\mathcal{F}\) of series which start from terms of power \(3\).
\[H_{2}+\widehat{H}_{\circ}\mapsto H_{2}+\phi^{\delta}(\widehat{H}_{\circ}), \qquad\widehat{H}_{\circ}\in\mathcal{F}\]
is a transformation of the Hamiltonian function \(H_{2}+\widehat{H}_{\circ}\) according to a certain (depending on \(\delta\) and on the initial Hamiltonian \(H_{2}+\widehat{H}_{\circ}\)) canonical change of variables.
The flow \(\phi^{\delta}\) is determined by a certain ODE in \({\cal F}\):
\[\partial_{\delta}H_{\circ}=-\{\xi H_{\circ},H_{2}+H_{\circ}\},\qquad H_{\circ}|_{ \delta=0}=\widehat{H}_{\circ}. \tag{1.6}\]
Here \(\{\,,\}\) is the Poisson bracket and \(\xi\) is a linear operator on \({\cal F}\). In fact, (1.6) is an initial value problem (IVP) for a differential equation presented in the form of a Lax L-A pair. Usually such systems are considered in the theory of integrable systems.
Let \({\cal N}\subset{\cal F}\) be the subspace which consists of series which depend on \(z,\overline{z}\) only through the combinations \(z_{j}\overline{z}_{j}\). The space \({\cal N}\) is an invariant manifold with respect to \(\phi^{\delta}\). Any point of \({\cal N}\) is fixed. Any orbit of the flow indefinitely approaches \({\cal N}\) with respect to a certain topology on \({\cal F}\).
In Section 2.3 we represent the system (1.6) in the form of an infinite ODE system for the coefficients \(H_{k,\overline{k}}\). A bit more convenient equivalent form of it is the system (2.14). Because of a special "nilpotent" structure of the system (2.14), the existence and uniqueness of a solution for the corresponding IVP for any initial condition turns out to be a simple fact (Section 3.1).
We are particularly interested in the restriction of \(\phi^{\delta}\) to the subspace \({\cal A}\subset{\cal F}\) of analytic Hamiltonian functions. In Section 3.2 we prove that for any \(\widehat{H}_{\circ}\in{\cal A}\) the solution \(H_{\circ}\) also lies in \({\cal A}\) for any \(\delta>0\). However the polydisk of analyticity generically shrinks when \(\delta\) grows. According to our rough lower estimate its radius is of order \(1/\delta\).
In Section 4 we discuss some special properties of the system (1.6). For example, the set
\[\{H_{\circ}\in{\cal F}:H_{k,\overline{k}}=0\mbox{ if }\langle\omega,\overline{k} -k\rangle\leq M_{1}\mbox{ or }\langle\omega,\overline{k}-k\rangle\geq M_{2}\}\]
is invariant for the flow \(\phi^{\delta}\).
In Section 5 after some technical work we present the system (2.14) in a more convenient form. In this form unknown functions in the differential equations are the functions \({\cal H}^{q}=\sum_{\overline{k}-k=q}{\cal H}_{k,\overline{k}}z^{k}\overline{ z}^{\overline{k}}\), \(q\in{\mathbb{Z}}^{n}\). Any function \({\cal H}^{q}\) equals \(z^{k_{q}}\overline{z}^{\overline{k}_{q}}N^{q}\). Here \(z^{k_{q}}\overline{z}^{\overline{k}_{q}}\) is a certain monomial, determined by \(q\), and \(N^{q}\in{\cal N}\). We rewrite the system in terms of the functions \(N^{q}\) only. This version of the system (1.6) is the main tool in Sections 6.2 and 6.3 to obtain some explicit solutions and in Section 7 to prove a global in \(\delta\geq 0\) existence theorem for a certain initial condition \(\widehat{H}_{\circ}\).
For any \(N\in{\cal N}\) let \({\cal W}_{N}\) be the corresponding stable manifold. Then \({\cal F}\) is foliated by such manifolds: \({\cal F}=\cup_{N\in{\cal N}}{\cal W}_{N}\). Unfortunately the invariant manifold \({\cal N}\) is not normally hyperbolic. Because of small divisors the orbits \(\phi^{\delta}(\widehat{H}_{\circ})\) do not tend to their normal forms uniformly exponentially. However it seems interesting to consider the restriction of \(\phi^{\delta}\) to the manifolds \({\cal W}_{N}\). Section 6 contains an attempt to imagine how such a restriction might look like.
This is a motivation to introduce the so-called asymptotic flow \(\Psi^{\delta}\). It is determined by an equation (asymptotic system) analogous to (1.6), where in comparison with (1.6) some terms are dropped. Due to this the "normal component" of the Hamiltonian \(\Psi^{\delta}(\widehat{H}_{\circ})\) remains constant. In Section 6.1 we prove that the asymptotic flow is conjugated to the normalization flow (presented in slightly another form \(\Phi^{\delta}\)) by a smooth map \({\cal F}\to{\cal F}\).
In Section 6.2 we obtain an explicit solution of the asymptotic system. At this point one might expect that, at least for the asymptotic flow, analytic initial conditions generate an orbit on which the analyticity domain does not shrink to zero. Unfortunately, this
is not true. In Section 6.3 we consider an example when the normalization flow (in the form \(\Phi^{\delta}\)) and the asymptotic flow have the same orbit. This happens when all Taylor coefficients of the initial condition \(\widehat{H}_{\circ}\) with \(\langle\omega,\overline{k}-k\rangle<0\) vanish. According to results of Section 4 this property remains valid on the whole orbit. Note that such an example is physically meaningless because this choice of \(\widehat{H}_{\circ}\) is incompatible with the reality condition. However in the context of our approach it is important because we prove (Proposition 6.3) that typically3 the analyticity polydisk for the solution \(H_{\circ}\) shrinks indefinitely as \(\delta\to\infty\).
Footnote 3: Even in the case when the normal form (which is known in this example in advance) is a quadratic polynomial in \(\kappa_{j}=z_{j}\overline{z}_{j}\).
Section 7 presents a somewhat more positive result. We present an example of an analytic (if necessary, real-analytic) Hamiltonian \(\widehat{H}_{\circ}\) for which both its normal form \(\phi^{\infty}(\widehat{H}_{\circ})\) and the transformation to the normal form are analytic. Such a function \(\widehat{H}_{\circ}\) satisfies the following condition:
\[\widehat{H}_{k,\overline{k}}\neq 0\quad\text{only for}\quad\overline{k}-k\in\{0,q,-q\},\]
where \(q\in\mathbb{Z}^{n}\) is a nonzero vector. Hence, "majority" of coefficients \(\widehat{H}_{k,\overline{k}}\) in this case vanish, although expansion of \(\widehat{H}_{\circ}\) still contains an infinite number of nonzero terms.
Analytic convergence to normal form in this situation is not very surprising. Indeed, any quadratic form
\[Q=\sum a_{j}z_{j}\overline{z}_{j},\qquad\sum a_{j}q_{j}=0\]
commutes with \(\widehat{H}_{\circ}\): \(\{Q,\widehat{H}_{\circ}\}=0\). Therefore the traditional normalization procedure converges by the Ito theorem [6]. However we do not know if normalization by the flow \(\phi^{\delta}\) is equivalent from the viewpoint of analytic convergence to the traditional normalization. On the other hand it was interesting for us to prove a global in time \(\delta\) existence of an analytic solution of IVP (1.6) for some nontrivial initial condition \(\widehat{H}_{\circ}\). Technically the problem is reduced to a certain nonlocal in time theorem of Cauchy-Kovalevskaya type for the PDE system (8.5).
In this paper dealing with problems of local in "space" but global in "time" analytic convergence of solutions for various initial value problems we use the so-called Majorant principle presented in Section 8.3. It slightly differs from majorant argument traditionally used in relation with the Cauchy-Kovalevskaya theorem. The main difference is that we refuse to regard time as a complex variable and therefore no Taylor expansions in time appear. This version of the majorant method is better adapted to the problem of the construction of nonlocal in time analytic solutions.
Finally note that we discuss a "continuous" (unlike step-by-step) approach to the theory of normal forms under the following restrictions:
* the systems are Hamiltonian,
* the fixed point is elliptic,
* the frequencies are nonresonant.
Surely our methods work if any of these conditions (or any combination of these conditions) is dropped.
**Acknowledgements**. The author thanks Sergey Bolotin, Sergey Suetin and Oleg Zubelevich for valuable discussions.
## 2 Basic construction
For any \(q=(q_{1},\ldots,q_{n})\in\mathbb{Z}^{n}\) let \(|q|=|q_{1}|+\cdots+|q_{n}|\) be its \(l^{1}\)-norm. For any \(k,\overline{k}\in\mathbb{Z}_{+}^{n}\) we put
\[{\bf k}=(k,\overline{k})\in\mathbb{Z}_{+}^{2n},\quad{\bf k}^{\prime}= \overline{k}-k\in\mathbb{Z}^{n},\quad{\bf k}^{*}=(\overline{k},k),\quad|{\bf k }|=|k|+|\overline{k}|,\]
\[{\bf z}=(z,\overline{z}),\quad{\bf z}^{\bf k}=z^{k}\overline{z}^{\overline{k}},\quad z,\overline{z}\in\mathbb{C}^{n}.\]
Then \({\bf k}^{*}-{\bf k}=({\bf k}^{\prime},-{\bf k}^{\prime})\). In particular \({\bf k}^{\prime}=0\) if and only if \({\bf k}={\bf k}^{*}\).
We will use the notation
\[\mathbb{Z}_{\circ}^{2n}=\{{\bf k}\in\mathbb{Z}_{+}^{2n}:|{\bf k}|\geq 3\}.\]
### Spaces
Let \({\cal F}\) be the vector space of all series
\[H=\sum_{{\bf k}\in\mathbb{Z}_{+}^{2n}}H_{\bf k}{\bf z}^{\bf k},\qquad H_{\bf k }\in\mathbb{C}. \tag{2.1}\]
The series (2.1) are assumed to be formal i.e., there is no restriction on the values of the coefficients \(H_{\bf k}\). So, \({\cal F}\) coincides with the ring \(\mathbb{C}[[z_{1},\ldots,z_{n},\overline{z}_{1},\ldots,\overline{z}_{n}]]\).
Below we use on \({\cal F}\) the product topology i.e., a sequence \(H^{(1)},H^{(2)},\ldots\in{\cal F}\) is said to be convergent if for any \({\bf k}\in\mathbb{Z}_{+}^{2n}\) the sequence of coefficients \(H_{\bf k}^{(1)},H_{\bf k}^{(2)},\ldots\) converges.
For any \(H\) satisfying (2.1) we define \(p_{\bf k}(H)=H_{\bf k}\). So \(p_{\bf k}:{\cal F}\to\mathbb{C}\) is a canonical projection corresponding to \({\bf k}\in\mathbb{Z}_{+}^{2n}\). Suppose \(F\in{\cal F}\) depends on a parameter \(\delta\in I\), where \(I\subset\mathbb{R}\) is an interval. In other words, we consider a map
\[f:I\to{\cal F},\quad I\ni\delta\mapsto F(\cdot,\delta).\]
We say that \(F\) is smooth in \(\delta\) if all the maps \(p_{\bf k}\circ f\) are smooth.
Let \({\cal F}_{r}\subset{\cal F}\) be the space of "real" series:
\[{\cal F}_{r}=\{H\in{\cal F}:\overline{H}_{\bf k}=H_{\bf k^{*}}\ \ \mbox{for any}\ {\bf k}\in\mathbb{Z}_{+}^{n}\}.\]
We define \({\cal A}\subset{\cal F}\) as the space of analytic series:
\[{\cal A}=\{H\in{\cal F}:\ \mbox{there exist}\ c,a\ \mbox{such that}\ |H_{\bf k}|\leq ce^{a|{\bf k}|}\ \mbox{for any}\ {\bf k}\in\mathbb{Z}_{+}^{2n}\}.\]
The product topology may be restricted from \({\cal F}\) to \({\cal A}\), but, being a scale of Banach spaces, \({\cal A}\) may be endowed with a more natural topology. We have \({\cal A}=\cup_{0<\rho}{\cal A}^{\rho}\), where \({\cal A}^{\rho}\) is a Banach space with the norm \(\|\cdot\|_{\rho}\),
\[\|H\|_{\rho}=\sup_{{\bf z}\in D_{\rho}}|H({\bf z})|,\quad D_{\rho}=\{{\bf z}\in \mathbb{C}^{2n}:|z_{j}|<\rho,\ |\overline{z}_{j}|<\rho,\ j=1,\ldots,n\}. \tag{2.2}\]
Then we have: \({\cal A}^{\rho^{\prime}}\subset{\cal A}^{\rho}\), \(\|\cdot\|_{\rho}\leq\|\cdot\|_{\rho^{\prime}}\) for any \(0<\rho<\rho^{\prime}\).
**Lemma 2.1**: _If \(\|H\|_{\rho}\leq c\) then_
\[|H_{\bf k}|\leq c\rho^{-|{\bf k}|}. \tag{2.3}\]
The proof follows from the Cauchy formula: for any positive \(\rho_{0}<\rho\) and any \(H\in{\cal A}^{\rho}\)
\[H_{\bf k}=\frac{1}{(2\pi i)^{2n}}\oint dz_{1}\ldots\oint dz_{n}\oint d\overline {z}_{1}\ldots\oint\frac{H({\bf z})\,d\overline{z}_{n}}{{\bf z}^{{\bf k}+{\bf 1}}},\]
where \({\bf 1}=(1,\ldots,1)\in{\mathbb{Z}}_{+}^{2n}\) and the integration is performed along the circles
\[|z_{1}|=\rho_{0},\ldots,\quad|z_{n}|=\rho_{0},\quad|\overline{z}_{1}|=\rho_{0},\ldots,\quad|\overline{z}_{n}|=\rho_{0}.\]
This implies \(|H_{\bf k}|\leq c\rho_{0}^{-|{\bf k}|}\). Since \(\rho_{0}<\rho\) is arbitrary, we obtain (2.3).
**Corollary 2.1**: _Topology determined on \({\cal A}^{\rho}\) by the norm \(\|\cdot\|_{\rho}\) is stronger than the product topology induced from \({\cal F}\) i.e., if a sequence \(\{H^{(j)}\}\) converges in \(({\cal A}^{\rho},\|\cdot\|_{\rho})\) then it converges in \({\cal F}\)._
Let \({\cal N}\subset{\cal F}\) be the space of "normal forms":
\[{\cal N}=\{H\in{\cal F}:H_{\bf k}\neq 0\ \mbox{ implies }{\bf k}^{\prime}=0\}.\]
We define the subspace (the subring) \({\cal F}_{\diamond}\subset{\cal F}\) of the series
\[H_{\diamond}=\sum_{{\bf k}\in{\mathbb{Z}}_{\diamond}^{2n}}H_{\bf k}{\bf z}^{{ \bf k}},\qquad H_{\bf k}\in{\mathbb{C}}. \tag{2.4}\]
By definition we also have: \({\cal N}_{\diamond}={\cal N}\cap{\cal F}_{\diamond}\).
### Continuous averaging
Now we construct a flow \(\phi^{\delta}\), \(\delta\in{\mathbb{R}}\) in \({\cal F}_{\diamond}\) which asymptotically (as \(\delta\to+\infty\)) reduces any Hamiltonian \(H_{2}+\widehat{H}_{\diamond}\), \(\widehat{H}_{\diamond}\in{\cal F}_{\diamond}\) to the normal form \(H_{2}+N_{\diamond}\), \(N_{\diamond}=\phi^{+\infty}(\widehat{H}_{\diamond})\in{\cal N}_{\diamond}\). Construction of the normalization flow is based on the method of continuous averaging [15].
First we explain the general idea of the continuous averaging. Consider the change of variables in the form of a shift
\[{\bf z}=(z,\overline{z})\mapsto{\bf z}_{\delta}=(z_{\delta},\overline{z}_{ \delta}),\qquad(Z,\overline{Z})=(z_{\Delta},\overline{z}_{\Delta}) \tag{2.5}\]
along solutions of the Hamiltonian system4 with Hamiltonian \(F=F({\bf z},\delta)=O_{3}({\bf z})\) and independent variable \(\delta\):
Footnote 4: The so-called Lie method.
\[z^{\prime}=i\partial_{\overline{z}}F,\quad\overline{z}^{\prime}=-i\partial_{z }F,\qquad(\cdot)^{\prime}=d/d\delta. \tag{2.6}\]
Suppose the function \(H_{2}+\widehat{H}_{\diamond}\) expressed in the variables \({\bf z}_{\delta}\) takes the form \(H_{2}+H_{\diamond}\):
\[H_{2}({\bf z})+\widehat{H}_{\diamond}({\bf z})=H_{2}({\bf z}_{\delta})+H_{ \diamond}({\bf z}_{\delta},\delta). \tag{2.7}\]
Let \(\{\,,\,\}\) be the Poisson bracket on \({\cal F}\):
\[\{F,G\}=i\sum_{j=1}^{n}\big{(}\partial_{\overline{z}_{j}}F\partial_{z_{j}}G- \partial_{z_{j}}F\partial_{\overline{z}_{j}}G\big{)}.\]
Differentiating equation (2.7) in \(\delta\), we obtain:
\[\partial_{\delta}H_{\circ}=-\{F,H_{2}+H_{\circ}\},\qquad H_{\circ}|_{\delta=0} =\widehat{H}_{\circ}.\]
The main idea of the continuous averaging is to take \(F\) in the form \(\xi H_{\circ}\), where \(\xi\) is a certain operator5, depending on the problem we deal with. Then we obtain an initial value problem in \({\cal F}_{\circ}\):
Footnote 5: This idea is similar to the Moser’s idea of homotopy method [9].
\[\partial_{\delta}H_{\circ}=-\{\xi H_{\circ},H_{2}+H_{\circ}\},\qquad H_{\circ }|_{\delta=0}=\widehat{H}_{\circ}. \tag{2.8}\]
### The operator \(\xi\)
We put
\[\sigma_{q}=\mbox{sign}\,(\langle\omega,q\rangle),\quad\omega_{q}=|\langle \omega,q\rangle|,\qquad q\in\mathbb{Z}^{n}.\]
For any \(H_{\circ}\) satisfying (2.4) we define
\[\xi H_{\circ}=-i\sum_{|{\bf k}|\geq 3}\sigma_{{\bf k}^{\prime}}H_{{\bf k}}{ \bf z}^{{\bf k}}=i(H^{-}-H^{+}),\qquad H^{\pm}=\sum_{\pm\sigma_{{\bf k}^{ \prime}}>0}H_{{\bf k}}{\bf z}^{{\bf k}}. \tag{2.9}\]
We also put
\[H^{0}=\sum_{{\bf k}^{\prime}=0,\,|{\bf k}|\geq 4}H_{{\bf k}}{\bf z}^{{\bf k}} \in{\cal N}_{\circ}\]
so that \(H_{\circ}=H^{0}+H^{-}+H^{+}\).
We obtain a more detailed form of the differential equation (2.8):
\[\partial_{\delta}H_{\circ}=v_{0}(H_{\circ})+v_{1}(H_{\circ})+v_{2 }(H_{\circ}), \tag{2.10}\] \[v_{0}=-i\{H^{-}-H^{+},H_{2}\},\quad v_{1}=-i\{H^{-}-H^{+},H^{0} \},\quad v_{2}=-2i\{H^{-},H^{+}\}.\]
Informal explanation for the choice (2.9) of the operator \(\xi\) is as follows. Removing non-linear terms \(v_{1}+v_{2}\) in (2.10), we obtain the equation \(\partial_{\delta}H_{\circ}=v_{0}(H_{\circ})\) or, in more detail
\[\partial_{\delta}H_{{\bf k}}=-\omega_{{\bf k}^{\prime}}H_{{\bf k}},\qquad H_{ {\bf k}}|_{\delta=0}=\widehat{H}_{{\bf k}} \tag{2.11}\]
which can be easily solved:
\[H_{{\bf k}}=e^{-\omega_{{\bf k}^{\prime}}\delta}\widehat{H}_{{\bf k}}.\]
Thus when \(\delta\to+\infty\), solution \(H_{\circ}\) of the truncated problem (2.11) tends to \(H^{0}\in{\cal N}_{\circ}\).
Let \(e_{j}=(0,\ldots,1,\ldots,0)\) be the \(j\)-th unit vector in \({\mathbb{Z}}_{+}^{n}\) and let \({\bf e}_{j}=(e_{j},e_{j})\in{\mathbb{Z}}_{+}^{2n}\). Equation (2.10) written for each Taylor coefficient \(H_{\bf k}\) has the form
\[\partial_{\delta}H_{\bf k} = -\omega_{{\bf k}^{\prime}}H_{\bf k}+v_{1,{\bf k}}(H_{\circ})+v_{2, {\bf k}}(H_{\circ}),\qquad H_{\bf k}|_{\delta=0}=\widehat{H}_{\bf k}, \tag{2.12}\] \[v_{1,{\bf k}} = -\sum_{j=1}^{n}\sum_{|l|\geq 2}\sigma_{{\bf k}^{\prime}}(\overline {k}_{j}-k_{j})l_{j}H_{{\bf k}+{\bf e}_{j}-(l,l)}H_{l,l},\] (2.13) \[v_{2,{\bf k}} = 2\sum_{j=1}^{n}\sum_{\sigma_{{\bf l}^{\prime}}<0<\sigma_{{\bf m }^{\prime}},1+{\bf m}-{\bf k}={\bf e}_{j}}(\overline{l}_{j}m_{j}-l_{j} \overline{m}_{j})H_{\bf l}H_{\bf m}.\]
By using the change of variables
\[H_{\bf k}={\cal H}_{\bf k}e^{-\omega_{{\bf k}^{\prime}}\delta},\]
we reduce the equations (2.12) to the form
\[\partial_{\delta}{\cal H}_{\bf k} = v_{1,{\bf k}}({\cal H}_{\circ})+{\bf v}_{2,{\bf k}}({\cal H}_{\circ}),\qquad{\cal H}_{\bf k}|_{\delta=0}=\widehat{H}_{\bf k}, \tag{2.14}\] \[{\bf v}_{2,{\bf k}} = 2\sum_{j=1}^{n}\sum_{\sigma_{{\bf l}^{\prime}}<0<\sigma_{{\bf m}^{\prime}},\,{\bf l}+{\bf m}-{\bf k}={\bf e}_{j}}(\overline{l}_{j}m_{j}-l_{j}\overline{m}_{j}){\cal H}_{\bf l}{\cal H}_{\bf m}e^{-\omega_{{\bf l}^{\prime},{\bf m}^{\prime}}\delta}, \tag{2.15}\] \[\omega_{{\bf l}^{\prime},{\bf m}^{\prime}} = \omega_{{\bf l}^{\prime}}+\omega_{{\bf m}^{\prime}}-\omega_{{\bf l}^{\prime}+{\bf m}^{\prime}}\]
**Remark 2.1**: _1. The functions \(v_{1,{\bf k}}\) and \(v_{2,{\bf k}}\) are quadratic polynomials in the variables \(H_{\bf k}\). The functions \({\bf v}_{2,{\bf k}}\) are quadratic polynomials in \({\cal H}_{\bf k}\) with coefficients, depending on \(\delta\)._
_2. The polynomial \(v_{1,{\bf k}}\) vanishes for any \({\bf k}={\bf k}^{*}\)._
_3. The polynomials \(v_{1,{\bf k}},v_{2,{\bf k}}\) and \({\bf v}_{2,{\bf k}}\) do not depend on the variables \(H_{\bf s},{\cal H}_{\bf s}\), for any \({\bf s}\in{\mathbb{Z}}_{\circ}^{2n}\) such that \(|{\bf s}|>|{\bf k}|-2\)._
_4. The polynomials \(v_{2,{\bf k}}\) and \({\bf v}_{2,{\bf k}}\) do not depend on the variables \(H_{\bf s},{\cal H}_{\bf s}\) for any \({\bf s}\in{\mathbb{Z}}_{\circ}^{2n}\) such that \(0\leq\sigma_{{\bf k}}\langle\omega,{\bf s}^{\prime}\rangle\leq\omega_{{\bf k}^ {\prime}}\)._
_5. For any \(\sigma_{{\bf l}^{\prime}}<0<\sigma_{{\bf m}^{\prime}}\)_
\[\omega_{{\bf l}^{\prime},{\bf m}^{\prime}}=\left\{\begin{array}{ll}2\omega_ {{\bf l}^{\prime}}&\mbox{if}\quad\sigma_{{\bf k}^{\prime}}>0,\\ 2\omega_{{\bf m}^{\prime}}&\mbox{if}\quad\sigma_{{\bf k}^{\prime}}<0.\end{array}\right.\]
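As a purely illustrative supplement (not from the paper), the following minimal Python sketch integrates the flow (2.8) for \(n=1\), \(\omega=1\), truncated at total degree \(\leq D\), with explicit Euler steps. Because of the triangular dependence noted in Remark 2.1, the truncation is exact for the retained coefficients (up to the time-stepping error), and one observes the coefficients with \(k\neq\overline{k}\) decay while the normal ones settle towards the normal form.

```python
# Minimal numerical sketch (assumption: n = 1, omega = 1) of the flow (2.8),
# truncated at total degree <= D.  H is a dict {(k, kbar): coefficient}.
omega = 1.0
D = 6

def poisson(F, G):
    """{F, G} = i * (dF/dzbar dG/dz - dF/dz dG/dzbar) on coefficient dicts."""
    out = {}
    for (k1, b1), f in F.items():
        for (k2, b2), g in G.items():
            if k1 + k2 + b1 + b2 - 2 > D:      # exact truncation (Remark 2.1)
                continue
            key = (k1 + k2 - 1, b1 + b2 - 1)
            if b1 > 0 and k2 > 0:
                out[key] = out.get(key, 0) + 1j * b1 * k2 * f * g
            if k1 > 0 and b2 > 0:
                out[key] = out.get(key, 0) - 1j * k1 * b2 * f * g
    return out

def xi(H):
    """(2.9): multiply each monomial by -i*sign(omega*(kbar-k)); kill kbar = k."""
    return {(k, b): -1j * (1 if omega * (b - k) > 0 else -1) * c
            for (k, b), c in H.items() if b != k}

# a sample "real" cubic perturbation (coefficients satisfy H_k^bar = H_{k*})
H = {(3, 0): 0.1, (0, 3): 0.1, (2, 1): 0.05, (1, 2): 0.05}

dt, steps = 0.01, 2000                         # integrate up to delta = 20
for _ in range(steps):
    full = dict(H)
    full[(1, 1)] = full.get((1, 1), 0) + omega  # H_2 + H_o
    rhs = poisson(xi(H), full)                  # {xi H_o, H_2 + H_o}
    H = {k: H.get(k, 0) - dt * rhs.get(k, 0) for k in set(H) | set(rhs)}

# coefficients with k != kbar are essentially zero; the normal ones survive
for (k, b), c in sorted(H.items()):
    print((k, b), abs(c))
```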
## 3 Normalization flow exists
### Formal aspect
Let \(I\subset{\mathbb{R}}\) be an interval containing the point \(0\). We say that the curve \(\gamma:I\to{\cal F}_{\circ}\) is a solution of the system (2.8) on the interval \(I\) if \(\gamma(0)=\widehat{H}_{\circ}\) and the coefficients \(H_{\bf k}(\delta)={\rm pr}_{\bf k}\,\gamma(\delta)\) satisfy (2.12).
If the solution \(\gamma:I\to{\cal F}_{\circ}\) exists and is unique then for any \(\delta\in I\) we put \(\gamma(\delta)=\phi^{\delta}(\widehat{H}_{\circ})\).
**Definition 3.1**: _The ordinary differential equation_
\[F^{\prime}=\Phi(F),\qquad F=\sum_{|{\bf k}|\geq 3}F_{\bf k}{\bf z}^{\bf k}\]
on \({\cal F}\) has a nilpotent form if for any \({\bf k}\in{\mathbb{Z}}_{+}^{2n}\) we have: \(F^{\prime}_{\bf k}=\Phi_{\bf k}\), where \(\Phi_{\bf k}\) is a function of the variables \(F_{\bf m}\), \(|{\bf m}|<|{\bf k}|\)._
The system (2.14) has a nilpotent form. In particular, \(\partial_{\delta}{\cal H}_{\bf k}=0\) for \(|{\bf k}|=3\).
**Theorem 1**: _For any \(\widehat{H}_{\circ}\in{\cal F}_{\circ}\) and any \(\delta\in{\mathbb{R}}\) the element \(H_{\circ}(\cdot,\delta)=\phi^{\delta}(\widehat{H}_{\circ})\) is well-defined. For any \({\bf k}\in{\mathbb{Z}}_{\circ}^{2n}\) the function \({\cal H}_{\bf k}(\delta)=H_{\bf k}(\delta)e^{\omega_{\bf k^{\prime}}\delta}\) equals_
\[{\cal H}_{\bf k}(\delta)=\widehat{H}_{\bf k}+P_{\bf k}(\widehat{H}_{\circ}, \delta), \tag{3.1}\]
_where \(P_{\bf k}\) is a polynomial in \(\widehat{H}_{\bf m}\), \(|{\bf m}|<|{\bf k}|\) with coefficients in the form of (finite) linear combinations of terms \(\delta^{s}e^{-\nu\delta}\), \(s\in{\mathbb{Z}}_{+}\). Here \(\nu\geq 0\) and moreover, for \({\bf k}^{\prime}=0\) the equation \(\nu=0\) implies \(s=0\)._
_Proof of Theorem 1._ Suppose equations (3.1) hold for all vectors \({\bf k}\in{\mathbb{Z}}_{\circ}^{2n}\) with \(|{\bf k}|<K\). Take any \({\bf k}\in{\mathbb{Z}}_{\circ}^{2n}\) with \(|{\bf k}|=K\). By (2.14)
\[{\cal H}_{\bf k}=\widehat{H}_{\bf k}+I_{1}+I_{2},\qquad I_{1}=\int_{0}^{\delta }v_{1,{\bf k}}({\cal H}_{\circ}(\lambda))\,d\lambda,\quad I_{2}=\int_{0}^{ \delta}{\bf v}_{2,{\bf k}}({\cal H}_{\circ}(\lambda),\lambda)\,d\lambda.\]
By using (2.13), (2.15), and the induction assumption, we obtain (3.1).
**Corollary 3.1**: _The limit \(\lim_{\delta\to+\infty}\phi^{\delta}(\widehat{H}_{\circ})\) exists and lies in \({\cal N}_{\circ}\)._
Indeed, by Theorem 1 for any \({\bf k}\in{\mathbb{Z}}_{\circ}^{2n}\) there exists the limit
\[\lim_{\delta\to+\infty}H_{\bf k}(\delta)=H_{\bf k}(+\infty).\]
The convergence is exponential and \(H_{\bf k}(+\infty)\) vanishes for any \({\bf k}\neq{\bf k}^{*}\). This is equivalent to the existence of \(\lim_{\delta\to+\infty}\phi^{\delta}(\widehat{H}_{\circ})=\phi^{+\infty}( \widehat{H}_{\circ})\) in the product topology on \({\cal F}_{\circ}\) together with the statement \(\phi^{+\infty}(\widehat{H}_{\circ})\in{\cal N}_{\circ}\).
**Theorem 2**: _Suppose \(\widehat{H}_{\circ}\in{\cal F}_{r}\cap{\cal F}_{\circ}\). Then \(\phi^{\delta}(\widehat{H}_{\circ})\in{\cal F}_{r}\cap{\cal F}_{\circ}\) for any \(\delta\in{\mathbb{R}}\)._
_Proof._ The required identities \(\overline{H}_{\bf k}(\delta)=H_{{\bf k}^{*}}(\delta)\), \(\delta\in{\mathbb{R}}\) can be proved by induction in \(|{\bf k}|\) by using the equation (2.14).
### Analytic aspect
**Theorem 3**: _Suppose \(\widehat{H}_{\circ}\in{\cal A}^{\rho}\cap{\cal F}_{\circ}\). Then \(\phi^{\delta}(\widehat{H}_{\circ})\in{\cal A}^{g(\rho,\delta)}\cap{\cal F}_{\circ}\) for any \(\delta\in{\mathbb{R}}\), where for some constants \(A,B>0\)_
\[g(\rho,\delta)\geq\frac{A}{1+B\delta}.\]
_Proof_. We use the majorant method. We remind definitions and basic facts concerning majorants in Section 8.2. For some positive \(a,b\)
\[\widehat{H}_{\circ}\ll f(\zeta)=\sum_{{\bf k}\in{\mathbb{Z}}_{\circ}^{2n}} \widehat{\bf H}_{\bf k}{\bf z}^{\bf k},\qquad\zeta=\sum_{j=1}^{n}(z_{j}+ \overline{z}_{j}),\]
where the function \(f(\zeta)=O_{3}(\zeta)\), a majorant for the initial condition, will be chosen a bit later.
Consider together with (2.14) the following majorant system
\[\partial_{\delta}{\bf H}_{\bf k} = {\bf V}_{\bf k},\qquad{\bf H}_{\bf k}|_{\delta=0}=\widehat{\bf H }_{\bf k}, \tag{3.2}\] \[{\bf V}_{\bf k} = 2\sum_{j=1}^{n}\sum_{{\bf l}+{\bf m}-{\bf k}={\bf e}_{j}}( \overline{l}_{j}m_{j}+l_{j}\overline{m}_{j}){\bf H}_{\bf l}{\bf H}_{\bf m}. \tag{3.3}\]
To obtain equation (3.3), we have replaced in \(v_{1,{\bf k}}\) and \({\bf v}_{2,{\bf k}}\) minuses by pluses, dropped the exponential multipliers, and added some new positive (for positive \({\bf H}_{\bf l}\) and \({\bf H}_{\bf m}\)) terms. The main property of the system (3.2)-(3.3) is such that if for any \({\bf k}\in{\mathbb{Z}}_{\circ}^{2n}\) we have \(|\widehat{H}_{\bf k}|\leq\widehat{\bf H}_{\bf k}\) then for any \(\delta\geq 0\) the inequality \(|H_{\bf k}|\leq{\bf H}_{\bf k}\) holds i.e., conditions (a) and (b) from Definition 8.2 hold.
The system (3.2)-(3.3) has a nilpotent form. Therefore by Theorem 8 we may use Majorant principle: if \({\bf H}\) is a solution of (3.2)-(3.3) then (2.14) has a solution \({\cal H}\) and \({\cal H}\ll{\bf H}\) for any \(\delta\geq 0\).
The system (3.2)-(3.3) may be written in a shorter form:
\[\partial_{\delta}{\bf H}=4\sum_{j=1}^{n}\partial_{z_{j}}{\bf H}\,\partial_{ \overline{z}_{j}}{\bf H},\qquad{\bf H}|_{\delta=0}=f(\zeta). \tag{3.4}\]
Since the initial condition \({\bf H}|_{\delta=0}\) depends on \(z\) and \(\overline{z}\) only through the variable \(\zeta\), we may look for a solution of (3.4) in the form \({\bf H}(z,\overline{z},\delta)=F(\zeta,\delta)\). The function \(F\) satisfies the equation
\[\partial_{\delta}F=4n(\partial_{\zeta}F)^{2},\qquad F|_{\delta=0}=f(\zeta).\]
The function \(G=\partial_{\zeta}F\) satisfies the inviscid Burgers' equation
\[\partial_{\delta}G=8nG\partial_{\zeta}G,\qquad G|_{\delta=0}=\partial_{\zeta} f(\zeta)=O_{2}(\zeta). \tag{3.5}\]
By using the method of characteristics we obtain that the function \(G=G(\zeta,\delta)\) which solves (3.5) satisfies the equation
\[G=f^{\prime}(\zeta+8n\delta G). \tag{3.6}\]
**Lemma 3.1**: _Suppose the function \(f^{\prime}\) is analytic at zero. Then for any \(\delta\geq 0\) the function \(G\), solving (3.6), is also analytic at zero. The corresponding radius of analyticity is of order \(1/\delta\)._
_Proof._ By Lemma 8.2 we can take \(f^{\prime}(\zeta)=a\zeta^{2}/(b-\zeta)\). Then putting \(\tau=8n\delta\), we obtain from (3.6)
\[G=\frac{a(\zeta+\tau G)^{2}}{b-\zeta-\tau G}.\]
This is a quadratic equation w.r.t. \(G\). The solution is
\[G=\frac{2a\zeta^{2}}{b-\zeta-2a\tau\zeta+\sqrt{(b-\zeta-2a\tau\zeta)^{2}-4a \tau\zeta^{2}(1+a\tau)}}.\]
It is analytic for
\[|\zeta|<\frac{b}{1+2a\tau+2\sqrt{a\tau(1+a\tau)}}.\]
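As a quick sanity check of the algebra above, the short snippet below (illustrative, not part of the paper) verifies numerically that the closed-form root \(G(\zeta,\tau)\) satisfies the implicit equation \(G=a(\zeta+\tau G)^{2}/(b-\zeta-\tau G)\) on a small grid of points inside the analyticity region, with the sample values \(a=b=1\).

```python
# Numerical sanity check (illustrative): the explicit root G(zeta, tau) solves
# G = a*(zeta + tau*G)**2 / (b - zeta - tau*G), i.e. G = f'(zeta + tau*G).
from math import sqrt, isclose

a, b = 1.0, 1.0
for tau in (0.0, 0.1, 1.0, 5.0):
    for zeta in (0.005, 0.02, 0.04):          # chosen inside the analyticity radius
        d = b - zeta - 2 * a * tau * zeta
        G = 2 * a * zeta**2 / (d + sqrt(d * d - 4 * a * tau * zeta**2 * (1 + a * tau)))
        assert isclose(G, a * (zeta + tau * G)**2 / (b - zeta - tau * G), rel_tol=1e-9)
print("implicit equation satisfied on the sample grid")
```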
**Corollary 3.2**: _Suppose \(\widehat{H}_{\diamond}\in{\cal A}\cap{\cal F}_{r}\). Then \(\phi^{\delta}(\widehat{H}_{\diamond})\in{\cal A}\cap{\cal F}_{r}\) for any \(\delta\in\mathbb{R}\)._
It is important to note that for a typical \(\widehat{H}_{\diamond}\in{\cal A}\) one should expect that \(\phi^{+\infty}(\widehat{H}_{\diamond})\in{\cal N}_{\diamond}\) does not belong to \({\cal A}\), [8].
## 4 Simple properties of \(\phi^{\delta}\)
### Strips and balls
We define the strips \(S_{M_{1},M_{2}}\) and balls \(B_{M}\):
\[S_{M_{1},M_{2}} = \{{\bf k}\in\mathbb{Z}_{\diamond}^{2n}:M_{1}\leq\langle\omega,{ \bf k}^{\prime}\rangle\leq M_{2}\}, \tag{4.1}\] \[B_{M} = \{{\bf k}\in\mathbb{Z}_{\diamond}^{2n}:|{\bf k}|\leq M\},\qquad M \geq 0.\]
Sometimes in this notation we use \(M_{2}=+\infty\). Then we take in (4.1) \(\langle\omega,{\bf k}^{\prime}\rangle<+\infty\). Analogously for \(M_{1}=-\infty\).
We have the following subspaces in \({\cal F}\):
\[{\cal F}_{S_{M_{1},M_{2}}} = \{H_{\diamond}\in{\cal F}_{\diamond}:H_{\bf k}=0\mbox{ for any }{\bf k}\in\mathbb{Z}_{\diamond}^{2n}\setminus S_{M_{1},M_{2}}\},\] \[{\cal F}_{B_{M}} = \{H_{\diamond}\in{\cal F}_{\diamond}:H_{\bf k}=0\mbox{ for any }{\bf k}\in\mathbb{Z}_{\diamond}^{2n}\setminus B_{M}\}.\]
**Theorem 4**: _For any \(M_{1}\leq M_{2}\) and \(M\geq 0\) the sets \({\cal F}_{S_{M_{1},M_{2}}}\) and \({\cal F}_{B_{M}}\) are \(\phi^{\delta}\)-invariant._
_Proof._ Take any \({\bf k}\in\mathbb{Z}_{\diamond}^{2n}\setminus S_{M_{1},M_{2}}\). For definiteness we assume that \(\langle\omega,{\bf k}^{\prime}\rangle>M_{2}\). Consider the corresponding equation (2.14). In the term \(v_{1,{\bf k}}\) (see (2.13)) we have:
\[{\bf k}+{\bf e}_{j}-(l,l)\in\mathbb{Z}_{\diamond}^{2n}\setminus S_{M_{1},M_{2}}.\]
The conditions of summation in \({\bf v}_{2,{\bf k}}\)
\[\sigma_{{\bf l}^{\prime}}<0<\sigma_{{\bf m}^{\prime}},\quad{\bf l}+{\bf m}-{\bf k}={\bf e}_{j}\]
imply \({\bf k}^{\prime}={\bf m}^{\prime}+{\bf l}^{\prime}\). Therefore
\[\langle\omega,{\bf m}^{\prime}\rangle=\langle\omega,{\bf k}^{\prime}\rangle-\langle\omega,{\bf l}^{\prime}\rangle>\langle\omega,{\bf k}^{\prime}\rangle>M_{2}.\]
This implies that \({\bf m}\in{\mathbb{Z}}_{\circ}^{2n}\setminus S_{M_{1},M_{2}}\).
Hence (by using induction in \(|{\bf k}|\)), we obtain \(v_{1,{\bf k}}={\bf v}_{2,{\bf k}}=0\).
In the case of the set \({\cal F}_{B_{M}}\) the argument is analogous: for any \({\bf k}\in B_{M}\) we see that any term in \(v_{1,{\bf k}}\) and any term in \({\bf v}_{2,{\bf k}}\) contains a multiplier \(H_{\bf m}\), \({\bf m}\in B_{M}\).
### Discrete symmetries
Consider the following two antisymplectic involutions:
\[I^{\pm}:({\mathbb{C}}^{2n},0)\to({\mathbb{C}}^{2n},0),\qquad{\bf z}=(z,\overline{z})\mapsto I^{\pm}({\bf z})=\pm(\overline{z},z).\]
For any involution \(I^{\sigma}\), \(\sigma\in\{+,-\}\) and any \(H\in{\cal F}\) we have:
\[H^{0}\circ I^{\sigma}=(H\circ I^{\sigma})^{0},\quad H^{+}\circ I^{\sigma}=(H \circ I^{\sigma})^{-},\quad\mbox{and}\quad H^{-}\circ I^{\sigma}=(H\circ I^{ \sigma})^{+}.\]
Equivalently, \((\xi H)\circ I^{\sigma}=-\xi(H\circ I^{\sigma})\).
Note that in the "real" coordinates \((x,y)\), \(\sqrt{2}\,z=y+ix\), \(\sqrt{2}\,\overline{z}=y-ix\) the map \(I^{+}\) changes sign at \(x\) while \(I^{-}\) changes sign at \(y\).
The function \(H\in{\cal F}\) is said to be \(I^{\sigma}\)-invariant if \(H\circ I^{\sigma}=H\).
In particular, \(H\) is \(I^{+}\)-invariant if it is even w.r.t. \(x\) and \(I^{-}\)-invariant if it is even w.r.t. \(y\). Any function \(F\in{\cal N}\) is both \(I^{+}\)-invariant and \(I^{-}\)-invariant. For example, this holds for the function \(H_{2}\).
Suppose the Hamiltonian function \(\widehat{H}\in{\cal F}\) is \(I^{\sigma}\)-invariant and defined in a neighborhood of the origin. Then the corresponding system (1.1) is said to be \(I^{\sigma}\)-reversible. This means that if \({\bf z}(t)\) is a solution of (1.1) then \(I^{\sigma}({\bf z}(-t))\) is also a solution of (1.1).
**Theorem 5**: _Let \(I=I^{\sigma}\), \(\sigma\in\{+,-\}\). Suppose \(\widehat{H}\) is \(I\)-invariant. Then \(H\), the solution of the averaging system (2.8), is \(I\)-invariant for any \(\delta>0\)._
_Proof_. Take composition of (2.8) with \(I\). We obtain in the left-hand side
\[(\partial_{\delta}H_{\circ})\circ I=\partial_{\delta}(H_{\circ}\circ I),\]
and in the right-hand side
\[-\{\xi H_{\circ},H_{2}+H_{\circ}\}\circ I=\{(\xi H_{\circ})\circ I,H_{2} \circ I+H_{\circ}\circ I\}=\{-\xi(H_{\circ}\circ I),H_{2}+H_{\circ}\circ I\}.\]
This means that if \(H_{\circ}\) is a solution of (2.8) then \(H_{\circ}\circ I\) is also a solution of (2.8). Since the initial condition \(\widehat{H}_{\circ}\) is assumed to be \(I\)-invariant and solution of (2.8) is unique, Theorem 5 follows.
**Corollary 4.1**: _The normalization flow \(\phi^{\delta}\) preserves \(I^{\sigma}\)-reversibility._
## 5 Advanced form for \(v_{1}\) and \({\bf v}_{2}\)
### More notation
We put
\[Q_{q}=\{{\bf k}\in{\mathbb{Z}}_{\diamond}^{2n}:{\bf k}^{\prime}=q\}.\]
In any set \(Q_{q}\) there is a unique "minimal" element \({\bf k}_{q}\). It can be defined equivalently by either of the two conditions
**(1)**\({\bf k}_{q}\in Q_{q}\), \(|{\bf k}_{q}|=\min_{{\bf k}\in Q_{q}}|{\bf k}|\),
**(2)**\({\bf k}_{q}=(k_{q},\overline{k}_{q})\in Q_{q}\), \(\min\{k_{q_{j}},\overline{k}_{q_{j}}\}=0\) for any \(j=1,\ldots,n\).
For any \(q\in{\mathbb{Z}}^{n}\) we have \({\bf k}_{q}={\bf k}_{-q}^{*}\) and
\[Q_{q}=\{{\bf k}\in{\mathbb{Z}}_{\diamond}^{2n}:{\bf k}={\bf k}_{q}+(l,l),\;l \in{\mathbb{Z}}_{+}^{n}\}.\]
We define the subspaces
\[{\cal F}^{q}=\{F\in{\cal F}_{\diamond}:F=\sum_{{\bf k}\in Q_{q}}F_{{\bf k}}{ \bf z}^{{\bf k}},\;F_{{\bf k}}\in{\mathbb{C}}\}.\]
Then \({\cal F}^{0}={\cal N}_{\diamond}\) and \({\cal F}_{\diamond}=\oplus_{q\in{\mathbb{Z}}^{n}}{\cal F}^{q}\).
For any \(F^{q}\in{\cal F}^{q}\) and \(F^{p}\in{\cal F}^{p}\) we have: \(F^{q}F^{p},\{F^{q},F^{p}\}\in{\cal F}^{q+p}\) and
\[F^{q}={\bf z}^{{\bf k}_{q}}N,\qquad N\in{\cal N}. \tag{5.1}\]
For any \(q,p\in{\mathbb{Z}}^{n}\) we define \(q\lhd p\in{\mathbb{Z}}^{n}\) by
\[(q\lhd p)_{j}=\left\{\begin{array}{r@{\quad\quad}l}0\quad\mbox{if}\quad q_{j }p_{j}\geq 0\quad\mbox{or}\quad|p_{j}|<|q_{j}|,\\ q_{j}\quad\mbox{if}\quad q_{j}p_{j}<0\quad\mbox{and}\quad|q_{j}|<|p_{j}|,\\ q_{j}/2\quad\mbox{if}\quad q_{j}=-p_{j}.\end{array}\right.\]
For any \(s\in{\mathbb{Z}}^{n}/2\) we put
\[[s]=(|s_{1}|,\ldots,|s_{n}|)\in{\mathbb{Z}}_{+}^{n}/2.\]
**Lemma 5.1**: _For any \(q,p\in{\mathbb{Z}}^{n}\)_
\[{\bf k}_{q}+{\bf k}_{p}={\bf k}_{q+p}+(l,l),\qquad l=[q\lhd p]+[p\lhd q].\]
_Proof_. It is sufficient to note that
\[([q\lhd p]+[p\lhd q])_{j}=\left\{\begin{array}{r@{\quad\quad}l}0\quad\quad \mbox{if}\quad q_{j}p_{j}\geq 0,\\ \min\{|q_{j}|,|p_{j}|\}\quad\mbox{if}\quad q_{j}p_{j}<0.\end{array}\right.\]
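The identity of Lemma 5.1 can also be checked by brute force; the following small Python snippet (illustrative, not part of the paper) verifies it on random integer vectors, using the componentwise description (2) of \({\bf k}_{q}\).

```python
# Random check (illustrative) of Lemma 5.1:  k_q + k_p = k_{q+p} + (l, l)
# with l = [q <| p] + [p <| q], where k_q = (max(-q,0), max(q,0)) componentwise.
import random

def k_min(q):
    return tuple(max(-x, 0) for x in q), tuple(max(x, 0) for x in q)

def tri(q, p):                          # the operation (q <| p)_j
    out = []
    for qj, pj in zip(q, p):
        if qj == -pj:
            out.append(qj / 2)
        elif qj * pj < 0 and abs(qj) < abs(pj):
            out.append(qj)
        else:
            out.append(0)
    return out

n = 3
for _ in range(1000):
    q = [random.randint(-4, 4) for _ in range(n)]
    p = [random.randint(-4, 4) for _ in range(n)]
    l = [abs(a) + abs(b) for a, b in zip(tri(q, p), tri(p, q))]
    kq, kqb = k_min(q)
    kp, kpb = k_min(p)
    ks, ksb = k_min([a + b for a, b in zip(q, p)])
    assert all(x + y == z + w for x, y, z, w in zip(kq, kp, ks, l))
    assert all(x + y == z + w for x, y, z, w in zip(kqb, kpb, ksb, l))
print("Lemma 5.1 verified on random samples")
```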
### Poisson brackets
We put
\[\partial=(\partial_{1},\ldots,\partial_{n}),\quad\partial_{j}=\partial_{\kappa_{j} },\quad\kappa_{j}=z_{j}\overline{z}_{j}.\]
The operators \(\partial_{j}\) act on functions (formal series), lying in \({\cal N}\).
**Lemma 5.2**: _For any \(q,p\in{\mathbb{Z}}^{n}\) and any \(N\in{\cal N}\)_
\[{\bf z}^{{\bf k}_{q}+{\bf k}_{p}} = {\bf z}^{{\bf k}_{q+p}}\kappa^{[q\lhd p]+[p\lhd q]}, \tag{5.2}\] \[\{{\bf z}^{{\bf k}_{q}},N\} = i{\bf z}^{{\bf k}_{q}}\langle q,\partial\rangle N, \tag{5.3}\] \[\{{\bf z}^{{\bf k}_{q}},{\bf z}^{{\bf k}_{p}}\} = i{\bf z}^{{\bf k}_{q+p}}\langle q-q\lhd p-p+p\lhd q,\partial\rangle\kappa^{[q\lhd p]+[p\lhd q]}. \tag{5.4}\]
We prove Lemma 5.2 in Section 8.1
**Lemma 5.3**: _For any \(p,q\in{\mathbb{Z}}^{n}\) and \(N^{p},N^{q}\in{\cal N}\)_
\[\{{\bf z}^{{\bf k}_{q}}N^{q},{\bf z}^{{\bf k}_{p}}N^{p}\}=i{\bf z}^{{\bf k}_{q+p}}\Big{(}N^{q}\kappa^{[q\lhd p]}\langle q,\partial\rangle N^{p}\kappa^{[p\lhd q]}-N^{p}\kappa^{[p\lhd q]}\langle p,\partial\rangle N^{q}\kappa^{[q\lhd p]}\Big{)}.\]
Proof follows from equations (5.2)-(5.4). Indeed, \(\{{\bf z}^{{\bf k}_{q}}N^{q},{\bf z}^{{\bf k}_{p}}N^{p}\}=B_{1}+B_{2}+B_{3}\),
\[B_{1}={\bf z}^{{\bf k}_{p}}N^{q}\{{\bf z}^{{\bf k}_{q}},N^{p}\},\quad B_{2}=-{ \bf z}^{{\bf k}_{q}}N^{p}\{{\bf z}^{{\bf k}_{p}},N^{q}\},\quad B_{3}=N^{q}N^{p} \{{\bf z}^{{\bf k}_{q}},{\bf z}^{{\bf k}_{p}}\}.\]
By (5.2) and (5.3) we have:
\[B_{1}=i{\bf z}^{{\bf k}_{q}+{\bf k}_{p}}N^{q}\langle q,\partial\rangle N^{p}=i{\bf z}^{{\bf k}_{q+p}}\kappa^{[q\lhd p]+[p\lhd q]}N^{q}\langle q,\partial\rangle N^{p}.\]
Analogously \(B_{2}=-i{\bf z}^{{\bf k}_{q+p}}\kappa^{[q\lhd p]+[p\lhd q]}N^{p}\langle p,\partial\rangle N^{q}\). By (5.2) and (5.4)
\[B_{3}=i{\bf z}^{{\bf k}_{q+p}}N^{q}N^{p}\Big{(}\kappa^{[q\lhd p]}\langle q,\partial\rangle\kappa^{[p\lhd q]}-\kappa^{[p\lhd q]}\langle p,\partial\rangle\kappa^{[q\lhd p]}\Big{)}.\]
Lemma 5.3 follows from these computations.
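For a concrete instance, the short symbolic check below (illustrative, not part of the paper) verifies Lemma 5.3 for \(n=1\), \(q=2\), \(p=-1\), where \([q\lhd p]=0\) and \([p\lhd q]=1\), with sample choices of \(N^{q}\) and \(N^{p}\).

```python
# Symbolic spot-check (illustrative) of Lemma 5.3 for n = 1, q = 2, p = -1.
import sympy as sp

z, zb = sp.symbols('z zbar')
kappa = z * zb
q, p = 2, -1
Nq = lambda k: k**2                 # sample N^q(kappa)
Np = lambda k: 3 * k                # sample N^p(kappa)

F = zb**q * Nq(kappa)               # z^{k_q} N^q  (q > 0, so z^{k_q} = zbar^q)
G = z**(-p) * Np(kappa)             # z^{k_p} N^p  (p < 0, so z^{k_p} = z^{-p})
lhs = sp.I * (sp.diff(F, zb) * sp.diff(G, z) - sp.diff(F, z) * sp.diff(G, zb))

k = sp.symbols('k', positive=True)
bracket = Nq(k) * (q * sp.diff(Np(k) * k, k)) - (Np(k) * k) * (p * sp.diff(Nq(k), k))
rhs = sp.I * zb**(q + p) * bracket.subs(k, kappa)   # z^{k_{q+p}} = zbar^{q+p}
print(sp.simplify(lhs - rhs))                       # -> 0
```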
### Normalizing system revisited
The equation
\[{\bf v}_{2}({\cal H}_{\diamond},\delta)=-2i\sum_{\sigma_{r}<0<\sigma_{s}}\{{ \cal H}^{r}e^{-\omega_{r}\delta},{\cal H}^{s}e^{-\omega_{s}\delta}\}e^{\omega_ {r+s}\delta}\]
implies that system (2.14) can be presented in the form
\[\partial_{\delta}{\cal H}^{q}=i\sigma_{q}\{{\cal H}^{q},{\cal H}^{0}\}-2i\sum_ {\sigma_{r}<0<\sigma_{s},r+s=q}\{{\cal H}^{r},{\cal H}^{s}\}e^{-\omega_{r,s} \delta},\qquad{\cal H}^{q}|_{\delta=0}=\widehat{{\cal H}}^{q}. \tag{5.5}\]
Note that \(\widehat{{\cal H}}^{q}=O_{|q|}({\bf z})\) if \(|q|\geq 3\), \(\widehat{{\cal H}}^{q}=O_{3}({\bf z})\) if \(0<|q|<3\), and \(\widehat{{\cal H}}^{0}=O_{4}({\bf z})\).
For any \(q\in{\mathbb{Z}}^{n}\) we put
\[{\cal H}^{q}={\bf z}^{{\bf k}_{q}}N^{q},\quad\widehat{{\cal H}}^{q}={\bf z}^{ {\bf k}_{q}}\widehat{N}^{q},\qquad N^{q},\widehat{N}^{q}\in{\cal N}.\]
By Lemma 5.3
\[\{{\cal H}^{r},{\cal H}^{s}\}=i{\bf z}^{{\bf k}_{r+s}}\Big{(}N^{r}\kappa^{[r\lhd s]}\langle r,\partial\rangle N^{s}\kappa^{[s\lhd r]}-N^{s}\kappa^{[s\lhd r]}\langle s,\partial\rangle N^{r}\kappa^{[r\lhd s]}\Big{)}.\]
Therefore system (5.5) takes the form
\[\partial_{\delta}N^{q} = w_{1}^{q}+{\bf w}_{2}^{q}, \tag{5.6}\] \[w_{1}^{q} = -\sigma_{q}N^{q}\langle q,\partial\rangle N^{0},\] \[{\bf w}_{2}^{q} = 2\sum_{\sigma_{r}<0<\sigma_{s},\,r+s=q}\Big{(}N^{r}\kappa^{[r\lhd s]}\langle r,\partial\rangle N^{s}\kappa^{[s\lhd r]}-N^{s}\kappa^{[s\lhd r]}\langle s,\partial\rangle N^{r}\kappa^{[r\lhd s]}\Big{)}e^{-\omega_{r,s}\delta}.\]
## 6 Asymptotic flow
We take \(G=\sum_{{\bf k}\in{\mathbb{Z}}_{\rm n}^{2n}}G_{{\bf k}}(\delta){\bf z}^{{\bf k}}\) and \(\widehat{G}=\sum_{{\bf k}\in{\mathbb{Z}}_{\rm n}^{2n}}\widehat{G}_{{\bf k}}{ \bf z}^{{\bf k}}\). The system
\[\partial_{\delta}G_{{\bf k}}=v_{1,{\bf k}}(G),\qquad G_{{\bf k}}|_{\delta=0}= \widehat{G}_{{\bf k}} \tag{6.1}\]
is said to be the asymptotic system. It is obtained from (2.14) if we drop the term \({\bf v}_{2,{\bf k}}\). It is motivated by our desire to study the restriction of the flow \(\phi^{\delta}\) (or the flow \(\Phi^{\delta}\) of the system (2.14)) to the stable manifolds \({\cal W}_{N}\) of points \(N\in{\cal N}_{\diamond}\).
### Existence theorem
Let \(G=G(\delta)\) be a solution of the asymptotic system (6.1). For any \({\bf k}={\bf k}^{*}\) we have \(v_{1,{\bf k}}=0\). Therefore the coefficients \(G_{{\bf k}}\) with \({\bf k}={\bf k}^{*}\) remain constant. Hence we may regard the system (6.1) as linear. In Section 6.2 we obtain an explicit solution for it.
**Proposition 6.1**: _Any solution of the asymptotic system has the form_
\[G_{{\bf k}}=\widehat{G}_{{\bf k}}+p_{{\bf k}}(\widehat{G},\delta), \tag{6.2}\]
_where \(p_{{\bf k}}\) are polynomials in \(\widehat{G}_{{\bf m}}\), \(|{\bf m}|<|{\bf k}|\). The dependence of \(p_{{\bf k}}\) on \(\delta\) is also polynomial and \(\deg_{\delta}p_{{\bf k}}\leq\mathop{\rm lev}({\bf k})\)._
_Proof_. Induction in \(|{\bf k}|\).
The flow \(\Psi^{\delta}\) of the system (6.1) will be called the asymptotic flow.
**Theorem 6**: _There exists a smooth map \(\Lambda:{\cal F}_{\diamond}\to{\cal F}_{\diamond}\) which conjugates the flow \(\Phi^{\delta}\) of system (2.14) with \(\Psi^{\delta}\):_
\[\Lambda\circ\Phi^{\delta}=\Psi^{\delta}\circ\Lambda. \tag{6.3}\]
_Proof_. Let \({\cal H}_{{\bf k}}(\delta)\) and \(G_{{\bf k}}(\delta)\) be solutions of (2.14) and (6.1) with the initial conditions
\[{\cal H}_{{\bf k}}(0)=\widehat{{\cal H}}_{{\bf k}}\quad\mbox{and}\quad G_{{ \bf k}}(0)=\widehat{G}_{{\bf k}}.\]
We determine the map
\[\Lambda:{\cal F}_{\circ}\to{\cal F}_{\circ},\qquad\widehat{\cal H}_{\circ}=\sum{ \cal H}_{\bf k}{\bf z}^{\bf k}\mapsto\Lambda(\widehat{\cal H})=\widehat{G}_{ \circ}=\sum G_{\bf k}{\bf z}^{\bf k}. \tag{6.4}\]
by the equation
\[{\cal H}_{\bf k}(\delta)=G_{\bf k}(\delta)+O(e^{-\mu_{\bf k}\delta}),\qquad{\bf k }\in{\mathbb{Z}}_{\circ}^{2n} \tag{6.5}\]
as \(\delta\to+\infty\), where \(\mu_{\bf k}\) are positive.
**Lemma 6.1**: _There exists a unique \(\Lambda\) satisfying (6.4)-(6.5). This map has the form_
\[\widehat{G}_{\bf k}=\widehat{H}_{\bf k}+{\cal P}_{\bf k}(\widehat{H}_{\circ}, \delta), \tag{6.6}\]
_where \({\cal P}_{\bf k}\) are polynomials in \(\widehat{H}_{\bf m}\), \(|{\bf m}|<|{\bf k}|\) with polynomial in \(\delta\) coefficients._
Theorem 6 obviously follows from Lemma 6.1.
_Proof of Lemma 6.1._ We use induction in \(|{\bf k}|\). If \(|{\bf k}|=3\), we have:
\[{\cal H}_{\bf k}(\delta)=\widehat{\cal H}_{\bf k}\quad\mbox{and}\quad G_{\bf k }(\delta)=\widehat{G}_{\bf k}.\]
In this case we take \(\widehat{G}_{\bf k}=\widehat{\cal H}_{\bf k}\).
Suppose we have determined \(\Lambda\) for all vectors from \({\mathbb{Z}}_{\circ}^{2n}\) with norm less than \(|{\bf k}|\). By (3.1) and (6.2)
\[{\cal H}_{\bf k}(\delta)=\widehat{H}_{\bf k}+P_{\bf k}(\widehat{H}_{\circ}, \delta),\quad G_{\bf k}(\delta)=\widehat{G}_{\bf k}+p_{\bf k}(\widehat{G}_{ \circ},\delta),\]
where \(P_{\bf k}\) and \(p_{\bf k}\) are polynomials in \(\widehat{H}_{\bf m}\) and \(\widehat{G}_{\bf m}\), \(|{\bf m}|<|{\bf k}|\). Coefficients of the polynomials \(P_{\bf k}\) are linear combinations of expressions \(\delta^{s}e^{-\nu\delta}\) while coefficients of \(p_{\bf k}\) are polynomials in \(\delta\).
Let \(\widetilde{P}_{\bf k}\) be the polynomial obtained from \(P_{\bf k}\) if all terms \(\delta^{s}e^{-\nu\delta}\) with positive \(\nu\) are replaced by zeros. Then equation (6.5) takes the form
\[\widehat{G}_{\bf k}=\widehat{H}_{\bf k}+\widetilde{P}_{\bf k}(\widehat{H}_{ \circ},\delta)-p_{\bf k}(\widehat{G}_{\circ},\delta).\]
By the induction assumption \(\widehat{G}_{\bf m}\), \(|{\bf m}|<|{\bf k}|\) have been already expressed as polynomials in \(\widehat{H}_{\bf l}\), \(|{\bf l}|<|{\bf m}|\). Hence we obtain (6.6).
### Explicit solution
To obtain explicit solution for the asymptotic system (6.1) we present the system in another form. We put
\[G=\sum_{q\in{\mathbb{Z}}^{n}}G^{q},\qquad G^{q}={\bf z}^{{\bf k}_{q}}N^{q}, \quad N^{q}\in{\cal N}.\]
Then (6.1) is equivalent to the system
\[\partial_{\delta}N^{q}=-\sigma_{q}N^{q}\langle q,\partial\rangle N^{0},\qquad N ^{q}|_{\delta=0}=\widehat{N}^{q}. \tag{6.7}\]
Equations (6.7) may be solved separately. First we note that \(N^{0}=\widehat{N}^{0}\) is independent of \(\delta\). Hence
\[N^{q}=e^{-\sigma_{q}\delta\langle q,\partial\rangle N^{0}}\widehat{N}^{q}, \qquad q\in{\mathbb{Z}}^{n}.\]
Equivalently,
\[G^{q}=e^{-\sigma_{q}\delta\langle q,\partial\rangle N^{0}}\widehat{G}^{q}, \qquad q\in{\mathbb{Z}}^{n}. \tag{6.8}\]
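Since \(N^{0}\) is independent of \(\delta\), the exponent in (6.8) is an ordinary function of \(\kappa\), so the equations (6.7) are solved pointwise. The following short symbolic check (illustrative, not from the paper) confirms this for \(n=2\) with a sample \(N^{0}\) and \(\widehat{N}^{q}\); the vector \(q\) is assumed to satisfy \(\langle\omega,q\rangle>0\), so \(\sigma_{q}=1\).

```python
# Symbolic check (illustrative) that (6.8) solves (6.7) for n = 2.
import sympy as sp

k1, k2, delta = sp.symbols('kappa1 kappa2 delta', positive=True)
q = (2, -1)                             # assumed to satisfy <omega, q> > 0
sigma_q = 1
N0 = k1**2 + 3*k1*k2 + k2**2            # a sample delta-independent N^0
Nq_hat = k1 * k2                        # a sample initial condition for N^q

g = q[0] * sp.diff(N0, k1) + q[1] * sp.diff(N0, k2)    # <q, d> N^0
Nq = sp.exp(-sigma_q * delta * g) * Nq_hat             # formula (6.8)
residual = sp.diff(Nq, delta) + sigma_q * Nq * g       # equation (6.7)
print(sp.simplify(residual))                           # -> 0
```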
### Example
Consider the system (2.12) with initial condition \(\widehat{H}\) such that for any \(\widehat{H}_{\bf k}\neq 0\) we have \({\bf k}\in S_{0,+\infty}\). This condition is incompatible with reality of the Hamiltonian \(\widehat{H}\). However we believe that this example is instructive. At least, it rids us from some naive expectations.
By Theorem 4, \({\cal H}_{\bf m}(\delta)=0\) for any \({\bf m}\in{\mathbb{Z}}_{\circ}^{2n}\setminus S_{0,+\infty}\) and any \(\delta\geq 0\). Therefore \({\bf v}_{2,{\bf k}}=0\) and system (2.14) coincides with system (6.1).
We have: \(H^{0}\equiv\widehat{H}^{0}\) and by (6.8) for any \(q\in{\mathbb{Z}}^{n}\) solution of (2.12) reads
\[H^{q}=e^{-(\omega_{q}+h_{q})\delta}\widehat{H}^{q},\qquad h_{q}=\sigma_{q} \langle q,\partial\rangle\widehat{H}^{0}. \tag{6.9}\]
Let \(S\) be the set of all \(q\in{\mathbb{Z}}^{n}\) such that \(\widehat{H}^{q}\neq 0\). We put
\[c_{S}:=\inf_{q\in S}\omega_{q}/|q|.\]
**Proposition 6.2**: _Suppose \(S\subset\{q\in{\mathbb{Z}}^{n}:\langle\omega,q\rangle\geq 0\}\), \(\widehat{H}_{\circ}\in{\cal A}\cap{\cal F}_{\circ}\), and \(c_{S}>0\)._
_Then \(\widehat{H}_{\circ}\) is transformed to the normal form \(\widehat{H}^{0}\) by an analytic canonical transformation._
_Proof_. For some \(\rho_{0}>0\) we have: \(\widehat{H}_{\circ}\in{\cal A}^{\rho_{0}}\). Since \(\widehat{H}^{0}=O_{2}(\kappa)\), \(\kappa=(\kappa_{1},\ldots,\kappa_{n})\), \(\kappa_{j}=z_{j}\overline{z}_{j}\), we have:
\[\sup_{{\bf z}\in D_{\rho}}\ \sup_{e\in{\mathbb{R}}^{n},\,\|e\|=1}\big{|} \langle e,\partial\rangle\widehat{H}^{0}({\bf z})\big{|}\leq c\rho^{2},\qquad 0 <\rho\leq\rho_{0},\]
where \(D_{\rho}\) is the polydisk (2.2) and \(c\) is a positive constant. Then
\[\sup_{{\bf z}\in D_{\rho}}|h_{q}({\bf z})|\leq c|q|\rho^{2}.\]
If \(\rho^{2}<c_{S}/c\) then
\[\inf_{{\bf z}\in D_{\rho}}{\rm Re}(\omega_{q}+h_{q}({\bf z}))\geq(c_{S}-c\rho^ {2})|q|>0. \tag{6.10}\]
Hence by (6.9) the functions \(H^{q}\) are defined in \(D_{\rho}\) for any \(\delta>0\) and tend to zero as \(\delta\to+\infty\).
Inequality (6.10) means that there are no small divisors. Hence, change of variables \({\bf z}\mapsto{\bf z}_{+\infty}\) (see (2.5)), generated by the system (2.6) with \(F=\xi H_{\circ}\), is well-defined in \(D_{\rho}\) for some positive \(\rho\).
The condition \(c_{S}>0\) (no small divisors) is very restrictive. If \(c_{S}=0\), then typically the change \({\bf z}\mapsto{\bf z}_{+\infty}\) is not analytic and exists only in \({\cal F}\).
**Proposition 6.3**: _Suppose_
\[S\subset\{q\in{\mathbb{Z}}^{n}:\langle\omega,q\rangle\geq 0\},\quad\widehat{H}_{ \circ}\in{\cal A}\cap{\cal F}_{\circ},\quad\widehat{H}^{0}=\frac{1}{2} \langle A\kappa,\kappa\rangle+O_{3}(\kappa),\]
_where \(A\) is a self-adjoint operator. Let \(c\) be a positive constant. We assume that for any \(\varepsilon>0\) there exists \(q\in S\) such that \(\omega_{q}\leq\varepsilon|q|\) and \(|Aq|>c|q|\)._
_Then for any \(\rho>0\) the norm \(\|H_{\circ}(\cdot,\delta)\|_{\rho}\) is unbounded as a function of \(\delta\in[0,+\infty)\)._
_Proof_. Take any \(\rho>0\). Let \(D_{\rho}\) be the domain (2.2) and
\[B_{\rho}=\{\kappa\in\mathbb{C}^{n}:|\kappa_{1}|<\rho^{2},\ldots,|\kappa_{n}|<\rho ^{2}\}\]
its image with respect to the map \(\mathbf{z}\mapsto\kappa=(z_{1}\overline{z}_{1},\ldots,z_{n}\overline{z}_{n})\).
We have: \(h_{q}=\sigma_{q}\langle Aq,\kappa\rangle+qO_{2}(\kappa)\). If \(\rho\) is sufficiently small then for some \(q\in S\) and an open set \(U_{\rho}\subset B_{\rho}\)
\[-h_{q}(\kappa)>\frac{1}{2}c|q|\rho^{2}>2\omega_{q},\qquad\kappa\in U_{\rho}.\]
The function \(\widehat{H}^{q}\) does not vanish identically. Hence \(\sup_{\kappa\in U_{\rho}}|\widehat{H}^{q}|>0\) and
\[H^{q}(\cdot,\delta)=e^{-(\omega_{q}+h_{q})\delta}\widehat{H}^{q}\to\infty \quad\mbox{for any $\kappa\in U_{\rho}$}.\]
This means that for some \(\mu>0\) we have: \(\|H^{q}(\cdot,\delta)\|_{\rho-\mu}\to\infty\) as \(\delta\to\infty\). By Lemma 8.5, \(\|H_{\diamond}(\cdot,\delta)\|_{\rho}\to\infty\) as \(\delta\to+\infty\).
## 7 Converging normalization
Take any \(q\in\mathbb{Z}^{n}\setminus\{0\}\), \(\sigma_{q}>0\). Suppose
\[\widehat{H}_{\diamond}=\widehat{H}^{q}+\widehat{H}^{0}+\widehat{H}^{-q},\qquad \widehat{H}^{\pm q}=\mathbf{z}^{\mathbf{k}_{\pm q}}\widehat{N}^{\pm q}\in \mathcal{F}^{\pm q},\quad\widehat{H}^{0}=\widehat{N}^{0}\in\mathcal{F}^{0},\]
where \(\widehat{N}^{\pm q},\widehat{N}^{0}\in\mathcal{F}^{0}\). Then system (5.6) contains only three nontrivial equations
\[\partial_{\delta}N^{\pm q}=-N^{\pm q}\langle q,\partial\rangle N^{0},\quad \partial_{\delta}N^{0}=-2\langle q,\partial\rangle(\kappa^{[q]}N^{-q}N^{q})e^ {-2\omega_{q}\delta}. \tag{7.1}\]
Recall that \(\widehat{H}_{\diamond}=O_{3}(\mathbf{z})\). Therefore \(\widehat{H}^{\pm q}=O_{3}(\mathbf{z})\) and \(\widehat{H}^{0}=O_{2}(\kappa)\).
**Theorem 7**: _Suppose \(\widehat{N}^{\pm q},\widehat{N}^{0}\in\mathcal{N}\cap\mathcal{A}\), \(\mathbf{z}^{\mathbf{k}_{\pm q}}\widehat{N}^{\pm q}=O_{3}(\mathbf{z})\), and \(\widehat{N}^{0}=O_{2}(\kappa)\). Then there exists \(\widehat{\rho}>0\) such that system (7.1) has a solution \(N^{\pm q},N^{0}\in\mathcal{A}^{\widehat{\rho}}\) for all \(\delta>0\). Moreover,_
\[\lim_{\delta\to+\infty}e^{-\omega_{q}\delta}N^{\pm q}=0,\quad\lim_{\delta\to+ \infty}N^{0}\in\mathcal{N}\cap\mathcal{A}^{\widehat{\rho}},\]
_where the limits are taken w.r.t. the norm \(\|\cdot\|_{\widehat{\rho}}\), see (2.2)._
_Proof_. We put \(M=\kappa^{[q]}N^{-q}N^{q}\). Then
\[\partial_{\delta}M=-2M\langle q,\partial\rangle N^{0},\quad \partial_{\delta}N^{0}=-2\langle q,\partial\rangle Me^{-2\omega_{q}\delta}, \tag{7.2}\] \[M|_{\delta=0}=O_{3}(\kappa),\quad N^{0}|_{\delta=0}=O_{2}(\kappa). \tag{7.3}\]
The functions (7.3) are analytic for \(\kappa\in\mathcal{D}_{\rho_{\kappa}}\),
\[\mathcal{D}_{\rho_{\kappa}}=\{|\kappa_{1}|\leq\rho_{\kappa},\ldots,|\kappa_{n} |\leq\rho_{\kappa}\},\qquad\rho_{\kappa}>0.\]
Following the Majorant principle (Section 8.3), we associate with the IVP (7.2)-(7.3) a majorant IVP:
\[\partial_{\delta}\mathbf{M}=2\mathbf{M}\langle[q],\partial\rangle \mathbf{H}^{0},\quad\partial_{\delta}\mathbf{H}^{0}=2\langle[q],\partial \rangle\mathbf{M}e^{-2\omega_{q}\delta}, \tag{7.4}\] \[\mathbf{M}|_{\delta=0}\gg M|_{\delta=0},\quad\mathbf{H}^{0}|_{ \delta=0}\gg N^{0}|_{\delta=0}. \tag{7.5}\]
According to Lemma 8.2 we may choose \({\bf M}|_{\delta=0}\) and \({\bf H}^{0}|_{\delta=0}\) to be functions of \(x=\kappa_{1}+\ldots+\kappa_{n}\) such that
\[{\bf M}|_{\delta=0}=O(x^{3})\quad\mbox{and}\quad{\bf H}^{0}|_{\delta=0}=O(x^{2}). \tag{7.6}\]
Then, by Theorem 8, if the majorant system has a solution for all \(\delta\geq 0\), then the IVP (7.2)-(7.3) also has a solution for all \(\delta\geq 0\) and
\[M(\kappa,\delta)\ll{\bf M}(x,\delta),\quad N^{0}(\kappa,\delta)\ll{\bf H }^{0}(x,\delta).\]
Since the initial conditions (7.5) depend only on \(x\), we may look for a solution of (7.4)-(7.5) in the form \({\bf M}={\bf M}(x,\delta)\), \({\bf H}^{0}={\bf H}^{0}(x,\delta)\). We use the fact that for any function \(F=F(x)\in{\cal F}\) we have: \(\partial_{\kappa_{j}}F=\partial_{x}F\). Therefore \(\langle[q],\partial\rangle F=|q|\partial_{x}F\).
We put
\[\tau=2|q|\delta,\quad\nu=\omega_{q}/|q|,\quad{\bf M}(x,\delta)=u(x,\tau),\quad {\bf H}^{0}(x,\delta)=v(x,\tau).\]
Then the IVP (7.4)-(7.5) takes the form
\[\partial_{\tau}u=u\partial_{x}v,\quad\partial_{\tau}v=e^{-\nu\tau}\partial_{x} u,\qquad u|_{\tau=0}=O(x^{3}),\quad v|_{\tau=0}=O(x^{2}). \tag{7.7}\]
Applying Theorem 9 we complete the proof.
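Before turning to the technical part, we note that system (7.7) is easy to probe numerically. The sketch below is a purely illustrative aside and plays no role in the proofs: the truncation degree \(K\), the value of \(\nu\), the step size and the initial coefficients are arbitrary choices. It integrates a degree-\(K\) truncation of the Taylor-coefficient form of (7.7) (written out as (8.25) below) by an explicit Euler scheme, so that one can watch the coefficients of \(v\) settle down while \(e^{-\nu\tau/2}\max_{k}|u_{k}|\) stays bounded, in accordance with Theorem 9 below.

```python
import math

# Illustrative truncation of (7.7)/(8.5) in coefficient form (cf. (8.25)):
#   (u_k)' = sum_{a+b=k+1} b*u_a*v_b,   (v_k)' = e^{-nu*tau}*(k+1)*u_{k+1}.
# K, nu, dt, T and the initial data are arbitrary illustrative choices.
K, nu, dt, T = 12, 1.0, 1e-3, 20.0

u = [0.0] * (K + 2)            # u = O(x^3); index K+1 kept at 0 (truncation)
v = [0.0] * (K + 2)            # v = O(x^2)
for k in range(3, K + 1):
    u[k] = 0.1 ** (k - 2)
for k in range(2, K + 1):
    v[k] = 0.1 ** (k - 1)

def rhs(u, v, tau):
    du = [0.0] * (K + 2)
    dv = [0.0] * (K + 2)
    eps = math.exp(-nu * tau)
    for k in range(2, K + 1):
        # convolution with a >= 3 and b = k+1-a >= 2
        du[k] = sum((k + 1 - a) * u[a] * v[k + 1 - a] for a in range(3, k))
        dv[k] = eps * (k + 1) * u[k + 1]
    return du, dv

tau = 0.0
while tau < T:                 # explicit Euler: crude, but enough for a picture
    du, dv = rhs(u, v, tau)
    u = [x + dt * y for x, y in zip(u, du)]
    v = [x + dt * y for x, y in zip(v, dv)]
    tau += dt

print("v_2, v_3 at tau = %.0f:" % tau, round(v[2], 4), round(v[3], 4))
print("max_k |u_k| * e^{-nu*tau/2}:", max(map(abs, u)) * math.exp(-nu * tau / 2))
```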
## 8 Technical part
### Proof of Lemma 5.2
Equation (5.2) directly follows from Lemma 5.1. To prove (5.3), it is sufficient to take \(N=\kappa^{l}\), where \(l\in{\mathbb{Z}}_{+}^{n}\) is arbitrary. For \({\bf k}_{q}=(k_{q},\overline{k}_{q})\), \(\overline{k}_{q}-k_{q}=q\), we have:
\[\{{\bf z}^{{\bf k}_{q}},\kappa^{l}\}=i\sum_{j=1}^{n}\big{(}(\overline{k}_{q})_ {j}l_{j}-(k_{q})_{j}l_{j}\big{)}{\bf z}^{{\bf k}_{q}}\kappa^{l-e_{j}}=i{\bf z} ^{{\bf k}_{q}}\langle q,\partial\rangle\kappa^{l}.\]
To prove (5.4), it is sufficient to consider the case \(n=1\). In this case \(q,p\in{\mathbb{Z}}\) are scalar quantities.
(a) If \(qp\geq 0\) we have \(\{{\bf z}^{{\bf k}_{q}},{\bf z}^{{\bf k}_{p}}\}=0\) and \(q\lhd p=p\lhd q=0\). Hence (5.4) holds.
(b) Suppose \(p<0<-p<q\). Then
\[{\bf z}^{{\bf k}_{q}}=\overline{z}^{q},\quad{\bf z}^{{\bf k}_{p}}=z^{-p}, \quad{\bf z}^{{\bf k}_{q+p}}=\overline{z}^{q+p},\quad q\lhd p=0,\quad p\lhd q=p.\]
Therefore
\[\{{\bf z}^{{\bf k}_{q}},{\bf z}^{{\bf k}_{p}}\}=-iqp\,z^{-p-1}\overline{z}^{q- 1}=i\big{(}q-q\lhd p-p+p\lhd q\big{)}\,{\bf z}^{{\bf k}_{q+p}}\partial\kappa^ {-p}.\]
This implies (5.4) in case (b).
(c) Suppose \(p<0<q<-p\). Then
\[{\bf z}^{{\bf k}_{q}}=\overline{z}^{q},\quad{\bf z}^{{\bf k}_{p}}=z^{-p}, \quad{\bf z}^{{\bf k}_{q+p}}=\overline{z}^{-q-p},\quad q\lhd p=q,\quad p\lhd q=0.\]
Therefore
\[\{{\bf z}^{{\bf k}_{q}},{\bf z}^{{\bf k}_{p}}\}=-iqp\,z^{-p-1}\overline{z}^{q-1}=i \big{(}q-q\lhd p-p+p\lhd q\big{)}\,{\bf z}^{{\bf k}_{q+p}}\partial\kappa^{q}.\]
This implies (5.4) in case (c).
(d) If \(p<0<-p=q\) then
\[{\bf z}^{{\bf k}_{q}}=\overline{z}^{q},\quad{\bf z}^{{\bf k}_{p}}=z^{-p},\quad{ \bf z}^{{\bf k}_{q+p}}=1,\quad q\lhd p=q/2,\quad p\lhd q=p/2.\]
Therefore
\[\{{\bf z}^{{\bf k}_{q}},{\bf z}^{{\bf k}_{p}}\}=-iqp\,z^{-p-1}\overline{z}^{q- 1}=i\big{(}q-q\lhd p-p+p\lhd q\big{)}\,{\bf z}^{{\bf k}_{q+p}}\partial\kappa^{q}.\]
This implies (5.4) in case (d).
(e) The case \(q<0<p\) is analogous to (b)-(d).
### Majorants
For any \(F,{\bf F}\in{\cal F}\) we say that \(F\ll{\bf F}\) iff for their Taylor coefficients we have the inequalities \(|F_{\bf k}|\leq{\bf F}_{\bf k}\), \({\bf k}\in{\mathbb{Z}}_{+}^{2n}\).
**Lemma 8.1**: _Suppose \(F\ll{\bf F}\) and \(\hat{F}\ll\hat{\bf F}\). Then_
**1**_. \(F+\hat{F}\ll{\bf F}+\hat{\bf F}\), \(F\hat{F}\ll{\bf F}\hat{\bf F}\), \(\partial_{z_{s}}F\ll\partial_{z_{s}}{\bf F}\), and \(\partial_{\overline{z}_{s}}F\ll\partial_{\overline{z}_{s}}{\bf F}\), \(s=1,\ldots,n\)._
**2**_. If \(F\) and \({\bf F}\) depend on the parameter \(\delta\in[\delta_{1},\delta_{2}]\) then_
\[\int_{\delta_{1}}^{\delta_{2}}F\,d\delta\ll\int_{\delta_{1}}^{\delta_{2}}{\bf F }\,d\delta.\]
We omit the obvious proof.
**Lemma 8.2**: _Suppose \(F\in{\cal A}^{\rho}\), \(F=O_{s}({\bf z})\), and \(\|F\|_{\rho}=a\rho^{s}\). Then_
\[F\ll\frac{a\rho\zeta^{s}}{\rho-\zeta},\qquad\zeta=z_{1}+\ldots+z_{n}+ \overline{z}_{1}+\ldots+\overline{z}_{n}. \tag{8.1}\]
_Proof_. By Lemma 2.1 we have:
\[F=\sum_{|{\bf k}|\geq s}F_{\bf k}{\bf z}^{\bf k},\qquad|F_{\bf k}|\leq a\rho^{ s-|{\bf k}|}.\]
Note also that \(\sum_{|{\bf k}|=j}{\bf z}^{\bf k}\ll\zeta^{j}\) for any \(j\in{\mathbb{Z}}_{+}\). Then
\[F\ll\sum_{|{\bf k}|\geq s}a\rho^{s-|{\bf k}|}{\bf z}^{\bf k}\ll a\rho^{s}\sum_ {j=s}^{\infty}\frac{\zeta^{j}}{\rho^{j}}=\frac{a\rho\zeta^{s}}{\rho-\zeta}.\]
For any power series \(F\) the corresponding majorant \({\bf F}\) is obviously non-unique. Sometimes we will need majorant estimates of another form.
**Lemma 8.3**: _Suppose \(F=\sum_{m=s}^{\infty}F_{m}x^{m}=O(x^{s})\), \(x\in\mathbb{C}\), is analytic at \(0\in\mathbb{C}\) and_
\[\|F\|_{R}=c_{F}R^{s}.\]
_Then for any \(\beta\in\mathbb{Z}_{+}\)_
\[|F_{m}|\leq\frac{c_{F}s^{\beta}}{m^{\beta}}\rho^{s-m},\qquad\rho=e^{-\beta/s}R.\]
_Proof_. By using Lemma 2.1 with \(z_{1}=x\) and \(F\) independent of \(z_{2},\ldots z_{n},\overline{z}_{1},\ldots\overline{z}_{n}\), we obtain the estimate
\[|F_{m}|\leq c_{F}R^{s-m}.\]
It remains to use the inequality \(R^{s-m}\leq s^{\beta}m^{-\beta}\rho^{s-m}\) for \(m\geq s\), which, since \(\rho=e^{-\beta/s}R\), is equivalent to \((m/s)^{\beta}\leq e^{\beta(m/s-1)}\) and follows from the elementary bound \(t\leq e^{t-1}\).
### Majorant principle
We use majorant method to obtain estimates for solutions of initial value problems (IVP) in \(\mathcal{F}\).
As an example consider the IVP
\[\partial_{\delta}F=\Phi(F,\delta),\qquad F|_{\delta=0}=\widehat{F}. \tag{8.2}\]
Here \(F\in\mathcal{F}\) depends on the parameter \(\delta\) and \(\Phi\) is a map from \(\mathcal{F}\times\mathbb{R}_{+}\) to \(\mathcal{F}\).
**Definition 8.1**: _IVP (8.2) is said to be power regular if for any \(\widehat{F}\in\mathcal{F}\) equation (8.2) has a unique solution \(F=F(\mathbf{z},\delta)\in\mathcal{F}\) for all \(\delta>0\)_
If (8.2) is a PDE then the main tool to construct a solution is the Cauchy-Kovalevskaya theorem. The system (2.10) is not a PDE because the right-hand side includes the operators \(H_{\diamond}\mapsto H^{\sigma}\), \(\sigma\in\{0,+,-\}\). However, in this case power regularity of the solution easily follows from the nilpotent structure of the system (2.14).
We associate with (8.2) the so-called majorant system
\[\partial_{\delta}\mathbf{F}=\Psi(\mathbf{F},\delta),\qquad\mathbf{F}|_{\delta =0}=\widehat{\mathbf{F}}. \tag{8.3}\]
We put \(\Phi_{\mathbf{k}}=p_{\mathbf{k}}\circ\Phi\) and \(\Psi_{\mathbf{k}}=p_{\mathbf{k}}\circ\Psi\).
**Definition 8.2**: _IVP (8.3) is said to be a majorant IVP for (8.2) if the following two properties hold:_
_(a) \(\widehat{F}\ll\widehat{\mathbf{F}}\)._
_(b) For any \(F\ll\mathbf{F}\) and \(\delta\geq 0\) we have: \(\Phi_{\mathbf{k}}(F,\delta)\ll\Psi_{\mathbf{k}}(\mathbf{F},\delta)\) for any \(\mathbf{k}\in\mathbb{Z}_{+}^{2n}\)._
**Majorant principle**. _Suppose the IVP (8.2) is power regular. Suppose also that there exists a solution \(\mathbf{F}=\mathbf{F}(\cdot,\delta)\in\mathcal{A}\) of (8.3) on the interval \(\delta\in[0,\delta_{0}]\). Then (8.2) has a unique analytic solution \(F\) on \([0,\delta_{0}]\) and \(F(\cdot,\delta)\ll\mathbf{F}(\cdot,\delta)\)._
**Remarks**. 1. Definitions 8.1 and 8.2 as well as the Majorant principle obviously extend to systems of equations, where \(F,\widehat{F}\in\mathcal{F}^{m}\) and \(\Phi:\mathcal{F}^{m}\times\mathbb{R}_{+}\to\mathcal{F}^{m}\).
2. One may replace the first equation (8.3) by the inequality \(\partial_{\delta}{\bf F}\gg\Psi({\bf F},\delta)\).
The majorant principle presented here differs from the majorant argument used since Cauchy's time. Traditionally the evolution variable (in our case \(\delta\)) is regarded as complex as well and Taylor expansions in it are used. In our approach this variable is a real parameter in both the exact solution and the majorant. Due to this we are able to obtain majorant estimates for solutions of (8.2) on large (even infinite) intervals of \(\delta\).
**Theorem 8**: _Suppose both systems (8.2) and (8.3) have nilpotent structure. Then Majorant principle holds true._
We expect that the Majorant principle is valid in much wider generality, but in this paper we are only interested in the case of systems having nilpotent form.
_Proof of Theorem 8_. Let \({\bf k}^{0}\) be an index with minimal possible degree \(|{\bf k}^{0}|\). For example, in the system (2.14) \(|{\bf k}^{0}|=3\). The nilpotent form of (8.2) implies that
\[0=\partial_{\delta}F_{{\bf k}^{0}}\ll\partial_{\delta}{\bf F}_{{\bf k}^{0}}.\]
Hence \(F_{{\bf k}^{0}}(\delta)\ll{\bf F}_{{\bf k}^{0}}(\delta)\) for \(\delta\geq 0\).
We proceed by induction in \(|{\bf k}|\). Suppose \(F_{{\bf k}}(\delta)\ll{\bf F}_{{\bf k}}(\delta)\), \(\delta\geq 0\) provided \(|{\bf k}|<K\). For any \({\bf k}\) such that \(|{\bf k}|=K\) we have by induction assumption and item (b) of Definition 8.2:
\[\partial_{\delta}({\bf F}_{{\bf k}}-F_{{\bf k}})=\Psi_{{\bf k}}({\bf F}( \cdot,\delta),\delta)-\Phi_{{\bf k}}(F(\cdot,\delta),\delta)\gg 0.\]
Therefore
\[{\bf F}_{{\bf k}}(\delta)=\widehat{{\bf F}}_{{\bf k}}+\int_{0}^{ \delta}\Psi_{{\bf k}}({\bf F}(\cdot,\lambda),\lambda)\,d\lambda\gg F_{{\bf k}} (\delta)=\widehat{F}_{{\bf k}}+\int_{0}^{\delta}\Phi_{{\bf k}}(F( \cdot,\lambda),\lambda)\,d\lambda.\]
Here we used that arguments of \(\Psi_{{\bf k}}\) and \(\Phi_{{\bf k}}\) are known by the induction assumption.
This majorant inequality makes sense as long as the left-hand side is defined, i.e., for any \(\delta\in[0,\delta_{0}]\).
### Two estimates
We put
\[S_{\gamma}(N)=\sum_{n_{1},n_{2}\geq 1,\,n_{1}+n_{2}=N}\frac{1}{n_{1}^{\gamma}n_ {2}^{\gamma}}. \tag{8.4}\]
**Lemma 8.4**: _Suppose \(\gamma\geq 2\). There exists a constant \(\chi_{\gamma}>0\) such that for any \(N\geq 2\)_
\[S_{\gamma}(N)\leq\chi_{\gamma}N^{-\gamma}.\]
_Proof_. We have (using \(n_{2}\geq N/2\) in every term of the sum):
\[S_{\gamma}(N)\leq 2\sum_{1\leq n_{1}\leq n_{2},\,n_{1}+n_{2}=N}\frac{1}{n_{1}^{ \gamma}n_{2}^{\gamma}}\leq 2\Big{(}\frac{2}{N}\Big{)}^{\gamma}\sum_{n_{1}=1}^{ \infty}\frac{1}{n_{1}^{\gamma}},\]
so one may take \(\chi_{\gamma}=2^{\gamma+1}\sum_{n\geq 1}n^{-\gamma}\).
**Lemma 8.5**: _Let \(\Lambda\subset\mathbb{Z}_{+}^{2n}\) be a nonempty set, \(0<\mu<\rho\), and \(F=\sum_{{\bf k}\in\mathbb{Z}_{+}^{2n}}F_{\bf k}{\bf z}^{\bf k}\in\mathcal{A}^{\rho}\). Consider the function \(f=\sum_{{\bf k}\in\Lambda}F_{\bf k}{\bf z}^{\bf k}\). Then_
\[\|f\|_{\rho-\mu}\leq(\rho/\mu)^{2n}\|F\|_{\rho}.\]
_Proof._ By Lemma 2.1
\[\|f\|_{\rho-\mu}\leq\sum_{{\bf k}\in\Lambda}|F_{\bf k}|(\rho-\mu)^{|{\bf k}|} \leq\sum_{{\bf k}\in\mathbb{Z}_{+}^{2n}}\|F\|_{\rho}\frac{(\rho-\mu)^{|{\bf k} |}}{\rho^{|{\bf k}|}}\leq\Big{(}\frac{\rho}{\mu}\Big{)}^{2n}\|F\|_{\rho}.\]
### Solution of a PDE system
Consider the PDE system (see (7.7))
\[{\bf u}_{\tau}={\bf u}{\bf v}_{x},\quad{\bf v}_{\tau}=e^{-\nu\tau}{\bf u}_{x}, \qquad{\bf u}|_{\tau=0}=O(x^{3}),\quad{\bf v}|_{\tau=0}=O(x^{2}), \tag{8.5}\]
where the initial conditions \({\bf u}|_{\tau=0}\) and \({\bf v}|_{\tau=0}\) are analytic in a neighborhood of the origin. By Lemma 8.3 coefficients in the expansions
\[{\bf u}(x,0)=\sum_{k=3}^{\infty}{\bf u}_{k}(0)x^{k},\quad{\bf v}(x,0)=\sum_{k =2}^{\infty}{\bf v}_{k}(0)x^{k}\]
satisfy the inequalities
\[|{\bf u}_{k}(0)|\leq C_{u}k^{-3}\rho_{0}^{3-k},\quad|{\bf v}_{k}(0)|\leq C_{v} k^{-4}\rho_{0}^{2-k},\]
where \(\rho_{0}>0\) may be taken arbitrarily small.
**Theorem 9**: _Suppose \(\rho_{0}>0\) is sufficiently small. Then the solution of the IVP (8.5) exists in the disk \(\Delta_{r}=\{x\in\mathbb{C}:|x|<r\}\) for any \(\tau\geq 0\), where \(r>0\) is independent of \(\tau\)._
_Moreover, there exist \(Q_{u},Q_{v}>0\) such that for any \(\tau\geq 0\)_
\[{\bf u}=O(x^{3}),\quad{\bf v}=O(x^{2}),\qquad\|{\bf u}\|_{r}\leq Q_{u}r^{3}e^ {\nu\tau/2},\quad\|{\bf v}\|_{r}\leq r^{2}Q_{v}.\]
The proof is based on the following inductive lemma. Consider the PDE system
\[u_{t}=uv_{x},\quad v_{t}=\varepsilon u_{x},\qquad u(x,t)=\sum_{k=3}^{\infty}u _{k}(t)x^{k},\quad v(x,t)=\sum_{k=2}^{\infty}v_{k}(t)x^{k}. \tag{8.6}\]
with initial conditions
\[0\leq u_{k}(0)\leq c_{u}k^{-3}\rho^{3-k},\quad 0\leq v_{k}(0)\leq c_{v}k^{-4} \rho^{2-k} \tag{8.7}\]
**Lemma 8.6**: _Let \(u,v\) be a solution of (8.6) with initial conditions, satisfying (8.7) and let \(T\) be a positive constant. Then for any \(t\in[0,T]\)_
\[u_{k}(t)\leq U_{k}(t),\quad U_{k}(t) = \frac{c_{u}}{k^{3}}\rho^{3-k}(1+\varphi e^{(\lambda+ak)t}),\qquad k =3,4,\ldots, \tag{8.8}\] \[v_{k}(t)\leq V_{k}(t),\quad V_{k}(t) = \frac{c_{v}}{k^{4}}\rho^{2-k}+\frac{c_{u}\psi\varphi}{(k+1)^{3}} \rho^{2-k}e^{a(k+1)t},\qquad k=2,3,\ldots, \tag{8.9}\]
_where \(\varphi,\psi\), and \(\lambda\) do not depend on \(t\) and satisfy_
\[\varphi\lambda \geq \chi\rho c_{v}(1+\varphi), \tag{8.10}\] \[a \geq \chi\rho c_{u}\psi(1+\varphi)e^{2aT},\] (8.11) \[a\psi\varphi \geq \varepsilon(1+\varphi e^{\lambda T}). \tag{8.12}\]
_The constant \(\chi=\max\{\chi_{2},\chi_{3}\}\), see Lemma 8.4._
We prove Lemma 8.6 in Section 8.6.
**Corollary 8.1**: _Let \(\Lambda\) be a positive constant. Then the functions \(u_{k}\) and \(v_{k}\) from Lemma 8.6 satisfy the estimates_
\[0\leq u_{k}(T)\leq\widehat{c}_{u}k^{-3}\widehat{\rho}^{3-k}, \quad 0\leq v_{k}(T)\leq\widehat{c}_{v}k^{-4}\widehat{\rho}^{2-k}, \tag{8.13}\] \[\widehat{c}_{u}=c_{u}e^{3(aT+\Lambda)}(1+\varphi e^{\lambda T}), \quad\widehat{c}_{v}=c_{v}+c_{u}e^{3aT+2\Lambda}\psi\varphi/\Lambda,\quad \widehat{\rho}=\rho e^{-aT-\Lambda}. \tag{8.14}\]
Corollary 8.1 will be proven in Section 8.6.
We choose \(\tau_{m}=m\), \(m=0,1,2,\ldots\) Let \(({\bf u}(x,\tau),{\bf v}(x,\tau))\) be a solution of (8.5) and \((u(x,t),v(x,t))\) a solution of (8.6)\({}_{\varepsilon=e^{-\nu\tau_{m}}}\) with initial conditions satisfying
\[|{\bf u}_{k+1}(\tau_{m})|\leq u_{k+1}(0),\quad|{\bf v}_{k}(\tau_{m})|\leq v_{ k}(0),\qquad k=2,3,\ldots\]
Then on the interval \(\tau\in[m,m+1]\)
\[|{\bf u}_{k+1}(\tau)|\leq u_{k+1}(\tau-\tau_{m}),\quad|{\bf v}_{k}(\tau)|\leq v _{k}(\tau-\tau_{m}),\qquad k=2,3,\ldots\]
Hence we can obtain an estimate for \(({\bf u}(x,\tau),{\bf v}(x,\tau))\) applying Lemma 8.6 and Corollary 8.1 inductively. We will obtain sequences of positive constants
\[\varepsilon_{m}=e^{-\nu m},\ \ T_{m}=1,\ \ c_{u}(m),\ \ c_{v}(m),\ \ \rho_{m},\ \ \Lambda_{m},\ \ a_{m},\ \ \lambda_{m},\ \ \psi_{m},\ \ \varphi_{m}, \tag{8.15}\]
where6
Footnote 6: Here we do not write the index \(m\) in \(u_{k}\) and \(v_{k}\) for brevity.
\[|{\bf u}_{k}(\tau_{m})|\leq u_{k}(0)=c_{u}(m)k^{-3}\rho_{m}^{3-k}, \quad|{\bf v}_{k}(\tau_{m})|\leq v_{k}(0)=c_{v}(m)k^{-4}\rho_{m}^{2-k},\qquad m \in\mathbb{Z}_{+}.\]
The sequences (8.15) have to satisfy (8.10)-(8.12) and (8.14). Our aim is to prove that the sequences \(c_{v}(m)\) and \(1/\rho_{m}\) may be chosen bounded from above as \(m\to\infty\) while \(c_{u}(m)\leq Q_{u}\rho_{m}^{3}e^{\nu\tau/2}\).
The quantity \(\rho_{0}\) may be assumed to be small. We choose
\[a_{m}=\Lambda_{m}=1/(m^{2}+1),\quad\lambda_{m}=\frac{c_{v}(m)}{c_{v}(0)},\quad \varphi_{m}=\varphi_{0}=\sqrt{\rho_{0}}.\]
Then (8.10) holds because for sufficiently small \(\rho_{0}\)
\[\varphi_{m}\lambda_{m}=\sqrt{\rho_{0}}\frac{c_{v}(m)}{c_{v}(0)}\geq\chi\rho_{m }c_{v}(m)(1+\varphi_{m}).\]
Moreover, by the last equation (8.14) the sequence \(\{\rho_{m}\}_{m\in\mathbb{Z}_{+}}\) is decreasing and converges to the positive quantity \(\rho_{\infty}=\rho_{0}\exp(-\sum_{m=0}^{\infty}\frac{2}{m^{2}+1})\).
Conditions (8.11)-(8.12) and (8.14)7 may be taken in the form
Footnote 7: The last equation (8.14) may be dropped.
\[1 \geq (m^{2}+1)\chi\rho_{0}c_{u}(m)\psi_{m}2e^{2}, \tag{8.16}\] \[\psi_{m}\sqrt{\rho_{0}} = (m^{2}+1)e^{-\nu m}(1+\sqrt{\rho_{0}}e^{c_{v}(m)/c_{v}(0)}),\] (8.17) \[c_{u}(m+1) = c_{u}(m)e^{6/(m^{2}+1)}(1+\sqrt{\rho_{0}}e^{c_{v}(m)/c_{v}(0)}),\] (8.18) \[c_{v}(m+1) = c_{v}(m)+(m^{2}+1)c_{u}(m)e^{5/(m^{2}+1)}\psi_{m}\sqrt{\rho_{0}}. \tag{8.19}\]
We define \(\psi_{m}\) by (8.17). Then it remains to satisfy (8.16), (8.18), and (8.19). These conditions take the form
\[1 \geq 2e^{2}\chi\sqrt{\rho_{0}}c_{u}(m)(m^{2}+1)^{2}e^{-\nu m}(1+\sqrt{\rho_{0}}e^{c_{v}(m)/c_{v}(0)}), \tag{8.20}\] \[c_{u}(m+1) = c_{u}(m)e^{6/(m^{2}+1)}(1+\sqrt{\rho_{0}}e^{c_{v}(m)/c_{v}(0)}),\] (8.21) \[c_{v}(m+1) = c_{v}(m)+c_{u}(m)e^{5/(m^{2}+1)}(m^{2}+1)^{2}e^{-\nu m}(1+\sqrt {\rho_{0}}e^{c_{v}(m)/c_{v}(0)}). \tag{8.22}\]
At the moment we forget about (8.20) and consider the sequences \(c_{u}(m)\) and \(c_{v}(m)\), satisfying (8.21), (8.22) and the initial conditions
\[c_{u}(0)=C_{u},\quad c_{v}(0)=C_{v}. \tag{8.23}\]
Taking if necessary instead of \(C_{v}\) a larger constant, we may assume that \(C_{u}\leq C_{v}\). We put
\[S_{0}=\sum_{l=0}^{\infty}\frac{6}{l^{2}+1},\quad S_{\nu}(m)=\sum_{l=0}^{m-1}(l ^{2}+1)^{2}e^{\nu/2-\nu l/2}.\]
**Lemma 8.7**: _Suppose \(\rho_{0}>0\) is sufficiently small and \(C_{u}\leq C_{v}\). Then the sequences \(c_{u}(m)\) and \(c_{v}(m)\), computed from (8.21)-(8.23), satisfy the estimates_
\[c_{v}(m)\leq C_{v}(1+e^{10+S_{0}}S_{\nu}(m)),\quad c_{u}(m)\leq C_{u}e^{S_{0}+ \nu m/2}. \tag{8.24}\]
_Proof of Lemma 8.7._ Let \(m_{0}\in\mathbb{N}\) be the minimal number such that \(6/(m_{0}^{2}+1)<\nu/4\). If \(\rho_{0}=0\) then
\[c_{u}(m) = C_{u}\exp\Big{(}\sum_{l=1}^{m-1}\frac{6}{l^{2}+1}\Big{)}\leq C_{ u}e^{S_{0}},\] \[c_{v}(m) = C_{v}+\sum_{l=1}^{m-1}c_{u}(l)e^{5/(l^{2}+1)}(l^{2}+1)^{2}e^{- \nu l}\leq C_{v}(1+e^{5+S_{0}}S_{\nu}(m))\]
Therefore there exists \(\rho_{0}>0\) such that
(1) inequalities (8.24) hold for any \(m=0,1,\ldots,m_{0}\),
(2) \(\sqrt{\rho_{0}}\exp(1+e^{10+S_{0}}S_{\nu}(\infty))<e^{\nu/4}-1\).
We proceed by induction. Take any \(k\geq m_{0}\). Suppose for any \(m\leq k\) inequalities (8.24) hold. Then by the definition of \(m_{0}\) and by condition (2)
\[e^{6/(k^{2}+1)}\leq e^{\nu/4}\quad\mbox{and}\quad 1+\sqrt{\rho_{0}}e^{c_{v}(k)/ c_{v}(0)}\leq e^{\nu/4}.\]
By using (8.21) we obtain:
\[c_{u}(k+1)=c_{u}(k)e^{6/(k^{2}+1)}(1+\sqrt{\rho_{0}}e^{c_{v}(k)/c_{v}(0)})\leq c _{u}(k)e^{\nu/2}.\]
Hence the second inequality (8.24) holds for \(m=k+1\).
By (8.22) we have:
\[c_{v}(k+1) \leq c_{v}(k)+c_{u}(k)e^{\nu/4-\nu k}(k^{2}+1)^{2}(1+\sqrt{\rho_{0}}e^{ c_{v}(k)/c_{v}(0)})\] \[\leq c_{v}(k)+c_{u}(k)(k^{2}+1)^{2}e^{\nu/2-\nu k}\] \[\leq C_{v}(1+e^{10+S_{0}}S_{\nu}(k))+C_{u}e^{S_{0}+\nu k/2}(k^{2}+1)^ {2}e^{\nu/2-\nu k}\] \[\leq C_{v}(1+e^{10+S_{0}}S_{\nu}(k+1)).\]
Hence the first inequality (8.24) holds for \(m=k+1\).
Finally turn to inequality (8.20). If \(\rho_{0}\) is taken satisfying condition (2) in the proof of Lemma 8.7 then by Lemma 8.7 we have: \(1+\sqrt{\rho_{0}}e^{c_{v}(m)/c_{v}(0)}\leq e^{\nu/4}\) and \(c_{u}(m)\) satisfies (8.24). Then (8.20) takes the form
\[1\geq 2e^{2}\chi\sqrt{\rho_{0}}C_{u}e^{S_{0}}(m^{2}+1)^{2}e^{\nu/2-\nu m/2}.\]
Taking if necessary a smaller \(\rho_{0}\), we can satisfy this inequality for any \(m\geq 0\). This finishes the proof of Theorem 9.
### Auxiliary IVP
In this section we prove Lemma 8.6 and Corollary 8.1.
**Proof of Lemma 8.6**. The logic of our argument is as follows:
(a) we prove (8.8)\({}_{k=3}\),
(b) we derive (8.9)\({}_{k}\) from (8.8)\({}_{k+1}\),
(c) we derive (8.8)\({}_{k}\) from (8.9)\({}_{n\leq k-2}\) and (8.8)\({}_{n\leq k-1}\).
The system (8.6) is equivalent to the following infinite ODE system:
\[(u_{k})_{t}=\sum_{\alpha+\beta=k+1}\beta u_{\alpha}v_{\beta},\quad(v_{k})_{t}= \varepsilon(k+1)u_{k+1}. \tag{8.25}\]
(a) The equation \((u_{3})_{t}=0\) and the first estimate (8.7)\({}_{k=3}\) imply (8.8)\({}_{k=3}\).
(b) By (8.7) estimate \((8.9)_{k}\) holds for \(t=0\). To prove \((8.9)_{k}\) for arbitrary \(t\in[0,T]\), we assume that \((8.8)_{k+1}\) is valid. Then it is sufficient to prove that \((v_{k})_{t}\leq(V_{k})_{t}\). By (8.25) we have:
\[(v_{k})_{t} \leq \frac{\varepsilon c_{u}}{(k+1)^{2}}\rho^{2-k}(1+\varphi e^{( \lambda+a(k+1))t}),\] \[(V_{k})_{t} = \frac{ac_{u}\psi\varphi}{(k+1)^{2}}\rho^{2-k}e^{a(k+1)t}.\]
Then the inequality \((v_{k})_{t}\leq(V_{k})_{t}\) follows from (8.12).
(c) By (8.7) estimate \((8.8)_{k}\) holds for \(t=0\). To prove \((8.8)_{k}\) for arbitrary \(t\in[0,T]\), we assume that \((8.9)_{n\leq k-2}\) and \((8.8)_{n\leq k-1}\) are valid. It is sufficient to prove that \((u_{k})_{t}\leq(U_{k})_{t}\). By (8.25)
\[(u_{k})_{t} \leq \sum_{\alpha+\beta=k+1}\rho^{4-k}\frac{c_{u}}{\alpha^{3}}\Big{(} 1+\varphi e^{(\lambda+a\alpha)t}\Big{)}\Big{(}\frac{c_{v}}{\beta^{3}}+\frac{ \beta c_{u}\psi\varphi}{(\beta+1)^{3}}e^{a(\beta+1)t}\Big{)}\] \[= \rho^{4-k}(A_{1}+A_{2}),\] \[A_{1} = c_{u}c_{v}\sum_{\alpha+\beta=k+1}\big{(}1+\varphi e^{(\lambda+a \alpha)t}\big{)}\frac{1}{\alpha^{3}\beta^{3}},\] \[A_{2} = c_{u}^{2}\varphi\psi\sum_{\alpha+\beta=k+1}\Big{(}1+\varphi e^{( \lambda+a\alpha)t}\Big{)}\frac{e^{a(\beta+1)t}}{\alpha^{3}(\beta+1)^{2}}.\]
We estimate \(A_{1}\) and \(A_{2}\) separately. By using Lemma 8.4 we have:
\[A_{1} \leq c_{u}c_{v}\big{(}1+\varphi e^{(\lambda+ak)t}\big{)}\sum_{ \alpha+\beta=k+1}\frac{1}{\alpha^{3}\beta^{3}}\;\leq\;\frac{c_{u}c_{v}(1+ \varphi)e^{(\lambda+ak)t}}{(k+1)^{3}}\chi,\] \[A_{2} \leq c_{u}^{2}\varphi\psi\sum_{\alpha+\beta=k+1}(1+\varphi)\frac{e^{ (\lambda+a(k+2))t}}{\alpha^{3}(\beta+1)^{2}}\;\leq\;\frac{c_{u}^{2}\varphi \psi(1+\varphi)}{(k+1)^{2}}e^{(\lambda+a(k+2))t}\chi\]
We also obtain from (8.8)
\[(U_{k})_{t}=\frac{c_{u}(\lambda+ak)}{k^{3}}\rho^{3-k}\varphi e^{(\lambda+ak)t}.\]
Hence the inequality \((u_{k})_{t}\leq(U_{k})_{t}\) follows from two inequalities
\[\rho^{4-k}A_{1}\leq\frac{c_{u}\lambda}{k^{3}}\rho^{3-k}\varphi e^{(\lambda+ak) t}\quad\mbox{and}\quad\rho^{4-k}A_{2}\leq\frac{c_{u}a}{k^{2}}\rho^{3-k} \varphi e^{(\lambda+ak)t}.\]
The first one follows from (8.10) while the second from (8.11).
**Proof of Corollary 8.1**.
To prove the first estimate (8.13) we have to check that
\[\frac{c_{u}}{k^{3}}\rho^{3-k}\big{(}1+\varphi e^{(\lambda+ak)T}\big{)}\leq \frac{\widehat{c}_{u}}{k^{3}}\widehat{\rho}^{3-k}.\]
By (8.14) this inequality is equivalent to
\[1+\varphi e^{(\lambda+ak)T}\leq e^{3(aT+\Lambda)}(1+\varphi e^{\lambda T})e^{-(3-k)(aT+\Lambda)},\]
which obviously holds.
To prove the second estimate (8.13) we check that
\[\frac{c_{v}}{k^{4}}\rho^{2-k}+\frac{c_{u}\psi\varphi}{(k+1)^{3}}\rho^{2-k}e^{a (k+1)T}\leq\frac{\widehat{c}_{v}}{k^{4}}\widehat{\rho}^{2-k}.\]
By (8.14) this inequality follows from
\[\frac{c_{u}\psi\varphi}{k^{3}}\rho^{2-k}e^{a(k+1)T}\leq\frac{c_{u}e^{3aT+2 \Lambda}\psi\varphi}{\Lambda k^{4}}\rho^{2-k}e^{-(2-k)(aT+\Lambda)}\]
which is equivalent to \(k\Lambda\leq e^{k\Lambda}\).
# Computing character tables and Cartan matrices of finite monoids with fixed point counting.
###### Abstract
In this paper we present an algorithm for efficiently counting fixed points in a finite monoid \(M\) under a conjugacy-like action. We then prove a formula for the character table of \(M\) in terms of fixed points and radical, which allows for the effective computation of the character table of \(M\) over a field of null characteristic, as well as its Cartan matrix, using a formula from [Thiery '12], again in terms of fixed points. We discuss the implementation details of the resulting algorithms and provide benchmarks of their performances.
## Introduction
The last two decades have seen the development of a new dynamic around the study of monoid representation theory. This is due to applications to certain types of discrete Markov chains, especially Markov chains used to randomly generate combinatorial objects, first uncovered in the seminal article of Brown [1]. This has led to an exploration of the combinatorial properties of monoid representations, for instance in [2], [3] or [4].
In this last article [4], Thiery gives a formula for the Cartan matrix of a finite monoid \(M\) in terms of numbers of fixed points and the character table of \(M\). More precisely, the formula involves computing the cardinality of the set \(\{s\in M\,|\,hsk=s\}\) for any \(h,k\in M\). In this paper, we set out to use this formula to effectively compute the Cartan matrix of the algebra of \(M\) over a perfect field \(\mathbf{k}\) of null characteristic.
Two difficulties have to be overcome in the pursuit of this goal. Firstly, the cardinality of many interesting families of monoids tends to increase very quickly. For instance, the cardinality of the _full transformation monoid_\(T_{n}\) of all functions from \([\![1,n]\!]\) to \([\![1,n]\!]\) is \(n^{n}\), making the naive computation of \(|\{s\in M\,|\,hsk=s\}|\) impractical even for small \(n\). To remedy this we provide an algorithm to efficiently compute this statistic. Secondly, to use the formula one has to compute the character table of the monoid. The Clifford-Munn-Ponizovskii Theorem (as presented in [5]) gives an explicit description of the simple \(\mathbf{k}M\)-modules and technically makes the
computation of the character table possible, provided that we know how to compute the simple modules associated to certain groups. However, this approach is rather convoluted and inefficient. We also note that although some results on character tables are known in the case of many interesting families of monoids, no algorithms are available to compute the character table of an arbitrary finite monoid. Thus, for the general case, we prove a formula for the character table that allows for computation exploiting the Green structure of the monoid for increased efficiency.
In the first section of this paper, we present the results necessary for our fixed-point counting algorithm, with a particular emphasis on the notions of Green classes and Schutzenberger groups. In the second section, we prove a formula for the character table of a finite monoid, after recalling the necessary module and character theoretic results. In the third section, we give and discuss the algorithms and equation systems used for computing the Cartan matrix, with two focuses: the algorithms for fixed point counting and the equation system to compute the character table. Finally, in the last section, we discuss the performance of these algorithms in terms of execution time and size of tractable problems.
## I Combinatorics of fixed point counting.
In this section we first recall essential and elementary results on the Green structure of finite monoids and on Schutzenberger groups. The informed reader may skip this first paragraph, with the exception of Notation 4, which is used throughout this paper. We then use these results to devise a fixed point counting method.
Throughout this paper, we assume that all monoids are finite. We will often use the following special class of finite monoids to illustrate the various results presented hereafter.
**Definition 1** (The full transformation monoid).: Consider the set \(T_{n}\) of all transformations of the set \(\{1,\ldots,n\}\), equipped with the multiplication given by map composition: \(\forall f,g\in T_{n},fg=f\circ g\). This is a monoid, aptly named the _full transformation monoid_. A submonoid of \(T_{n}\) is called a transformation monoid and \(n\) is called its _rank_.
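To keep later examples concrete, we will occasionally accompany them with short Python sketches. These are illustrative only and do not reflect the implementation discussed in Section III; in particular, encoding a transformation as its tuple of images is our own choice, not a convention of the paper. The first sketch simply builds \(T_{n}\) and multiplies two of its elements.

```python
from itertools import product

def compose(f, g):
    """Product of Definition 1: (fg)(i) = f(g(i)).
    A transformation of {1,...,n} is encoded as the tuple (f(1),...,f(n))."""
    return tuple(f[g[i] - 1] for i in range(len(f)))

def full_transformation_monoid(n):
    """All n^n elements of T_n."""
    return [t for t in product(range(1, n + 1), repeat=n)]

T3 = full_transformation_monoid(3)
assert len(T3) == 27
f, g = (1, 3, 1), (2, 2, 3)         # f = [1 3 1], g = [2 2 3]
print(compose(f, g))                 # (3, 3, 1), i.e. fg = [3 3 1]
```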
## 1 Green structure and Schutzenberger groups
Although finite monoids have been considered to be much wilder objects than groups, it turns out that, viewed through the right lens, they are actually highly structured by their internal multiplication. Consider the divisibility relation: \(x\) divides \(y\) if \(x=yz\) for some \(z\). If \(x,y,z\) are taken in a group \(G\), the relation is trivial. If however, we take them in a general monoid \(M\), left or right translation by an arbitrary element need not be surjective, making the question of \(x\in M\) being a left (or right) multiple of \(y\in M\) non-trivial. These questions of "divisibility" in a general monoid are studied under the name of Green structure, of which we give a brief overview necessary for our purpose in the subsection below. In the following subsection, we also present the related notion of Schutzenberger groups.
### Green Structure
**Definition 2** (Green's relations).: Let \(M\) be a finite monoid and \(a,b\) two of its elements. The _Green relations_ are:
* The \(\mathcal{L}\) preorder is defined by \(a\leq_{\mathcal{L}}b\Leftrightarrow b=ua\) for some \(u\in M\). The associated equivalence relation is : \(a\in\mathcal{L}(b)\Leftrightarrow a\leq_{\mathcal{L}}b\) and \(b\leq_{\mathcal{L}}a\Leftrightarrow Ma=Mb\).
* The \(\mathcal{R}\) preorder is defined by \(a\leq_{\mathcal{R}}b\Leftrightarrow b=av\) for some \(v\in M\). The associated equivalence relation is : \(a\in\mathcal{R}(b)\Leftrightarrow a\leq_{\mathcal{R}}b\) and \(b\leq_{\mathcal{R}}a\Leftrightarrow aM=bM\).
* The \(\mathcal{J}\) preorder is defined by \(a\leq_{\mathcal{J}}b\Leftrightarrow b=uav\) for some \(u,v\in M\). The associated equivalence relation is: \(a\in\mathcal{J}(b)\Leftrightarrow a\leq_{\mathcal{J}}b\) and \(b\leq_{\mathcal{J}}a\Leftrightarrow MaM=MbM\).
* The \(\mathcal{H}\) equivalence relation is defined by \(a\in\mathcal{H}(b)\Leftrightarrow a\in\mathcal{L}(b)\) and \(a\in\mathcal{R}(b)\).
It can be proven (see for instance [6, Theorem 1.9]) that in finite monoids, the relation \(\mathcal{J}\) is the smallest equivalence relation containing \(\mathcal{R}\) and \(\mathcal{L}\). This is not true in general, and this smallest relation is usually denoted by \(\mathcal{D}\) in the literature. Since we are only interested in finite monoids, we shall only use the terms \(\mathcal{J}\)-relation, \(\mathcal{J}\)-class, etc. From the definition, it is clear that the relations \(\mathcal{L}\) and \(\mathcal{R}\) are finer than \(\mathcal{J}\) and that \(\mathcal{H}\) is finer than both \(\mathcal{L}\) and \(\mathcal{R}\). Because of this, "the \(\mathcal{L}\)-class of some \(\mathcal{H}\)-class \(H\)" or "the \(\mathcal{J}\)-class of some \(\mathcal{R}\)-class \(R\)", etc., are well-defined and we shall denote them by \(\mathcal{L}(H),\mathcal{J}(R)\), etc.
**Example 3** (Green relations in \(T_{n}\)).: Let \(a,b\) be two elements of \(M\).
* \(a\mathcal{L}b\) if and only if they have the same _kernel_\(\ker a=\{a^{-1}\{i\}\mid i\in\llbracket 1,n\rrbracket\}\). We also say that \(a\) and \(b\) have the same nuclear equivalence.
* \(a\mathcal{R}b\) if and only if they have the same image, \(\operatorname{Im}\left(a\right)=\operatorname{Im}\left(b\right)\).
* Since \(T_{n}\) is finite, \(\mathcal{J}\) is generated by \(\mathcal{L}\) and \(\mathcal{R}\) so \(a\mathcal{J}b\) if and only if \(\operatorname{Im}\left(a\right)\) and \(\operatorname{Im}\left(b\right)\) (or equivalently \(\ker(a)\) and \(\ker(b)\)) have the same cardinality.
* Since \(\mathcal{H}\) is the intersection of \(\mathcal{L}\) and \(\mathcal{R}\), \(a\mathcal{H}b\) if and only if \(a\) and \(b\) have the same image and the same kernel.
These conditions are necessary conditions in any transformation monoid. To get that they are sufficient, we use the fact that \(\mathfrak{S}_{n}\subset T_{n}\) and that we can rearrange both image and kernel as we please.
These relations are illustrated in the case of the monoid \(T_{3}\) in Figure 1.1.
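The image/kernel characterizations above are easy to test by machine. The helpers below (our own, illustrative naming) compute the image and the kernel of a transformation in the tuple encoding introduced after Definition 1 and decide each of the four Green relations in \(T_{n}\).

```python
def image(f):
    """Image of a transformation given as its tuple of images."""
    return frozenset(f)

def kernel(f):
    """Kernel (nuclear equivalence) of f, as a partition of {1,...,n}."""
    fibres = {}
    for i, fi in enumerate(f, start=1):
        fibres.setdefault(fi, set()).add(i)
    return frozenset(frozenset(c) for c in fibres.values())

def L_related(a, b):   # same kernel
    return kernel(a) == kernel(b)

def R_related(a, b):   # same image
    return image(a) == image(b)

def J_related(a, b):   # same rank
    return len(image(a)) == len(image(b))

def H_related(a, b):
    return L_related(a, b) and R_related(a, b)

a, b = (1, 3, 1), (3, 1, 3)
print(L_related(a, b), R_related(a, b), H_related(a, b))   # True True True
```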
The following notations will prove useful, as the study of Green relations is, in part, the study of the maps given by left and right translations in the monoid.
**Notation 4**.: Let \(h,k\) be elements of \(M\) and \(S\) be a subset of \(M\). We denote by:
* \(h\times_{S}\) the application from \(S\) to \(hS\) defined by \(s\mapsto hs\),
* \(\times_{S}k\) the application from \(S\) to \(Sk\) defined by \(s\mapsto sk\),
* \({}_{M}\operatorname{Stab}(S)=\{m\in M\,|\,mS=S\}\),
* \(\operatorname{Stab}_{M}(S)=\{m\in M\,|\,Sm=S\}\),
* \(\operatorname{Fix}_{S}(h,k)=\{s\in S\,|\,hsk=s\}\).
Using these notations, let us recall Green's Lemma, which is one of the central elements of the theory of Green relations, as it shows that the structure of the relations is actually heavily constrained, making their study practical.
**Lemma 5** (Green's Lemma).: _Let \(a,a^{\prime}\) be two elements in the same \(\mathcal{L}\)-class and let \(\lambda,\lambda^{\prime}\) such that \(\lambda a=a^{\prime}\) and \(\lambda^{\prime}a^{\prime}=a\). Then \(\lambda\times_{\mathcal{R}(a)}\) and \(\lambda^{\prime}\times_{\mathcal{R}(a^{\prime})}\) are reciprocal bijections. Moreover, for any \(\mathcal{L}\)-class \(L\), \(\lambda\times_{\mathcal{R}(a)\cap L}\) and \(\lambda^{\prime}\times_{\mathcal{R}(a^{\prime})\cap L}\) are reciprocal bijections._
_Similarly, if \(a,b\) are two elements in the same \(\mathcal{R}\)-class and \(\rho,\rho^{\prime}\) are such that \(a\rho=b\) and \(b\rho^{\prime}=a\), then \(\times_{\mathcal{L}(a)}\rho\) and \(\times_{\mathcal{L}(b)}\rho^{\prime}\) are reciprocal bijections. Moreover, for any \(\mathcal{R}\)-class \(R\), \(\times_{\mathcal{L}(a)\cap R}\rho\) and \(\times_{\mathcal{L}(b)\cap R}\rho^{\prime}\) are reciprocal bijections._
An important consequence of Green's Lemma is that \(\mathcal{J}\)-classes can be neatly organized as _eggbox pictures1_: the \(\mathcal{J}\)-class can be represented as a rectangular array with the \(\mathcal{L}\)-classes as columns, the \(\mathcal{R}\)-classes as rows and the \(\mathcal{H}\)-classes, the eggs, in the cells, as can be seen in Figure 1. This level of organization is actually what allows for efficient computer representation of monoids and most of their algorithmic exploration.

Figure 1: Green relations in \(T_{3}\).
### Schutzenberger groups
The Green structure offers a second way of facilitating computer exploration of monoids through groups that arise as stabilizers of some Green classes. These are called the _Schutzenberger groups_ and, in a running theme of monoid theory, allow a number of monoid-theoretic questions to be formulated in terms of groups, for which we have an array of efficient algorithms at our disposal.
**Definition 6** (Schutzenberger groups).: Let \(H\) be an \(\mathcal{H}\)-class. The set \(\{h\times_{H}\mid h\in{}_{M}\operatorname{Stab}(H)\}\) equipped with map composition is a subgroup of \(\mathfrak{S}(H)\) called the _left Schutzenberger group_ and denoted by \(\Gamma(H)\).
Similarly, \((\{\times_{H}k\,|\,k\in\operatorname{Stab}_{M}(H)\},\circ)\) is a subgroup of \(\mathfrak{S}(H)\) called the _right Schutzenberger group_ and denoted by \(\Gamma^{\prime}(H)\).
**Example 7**.: Consider \(H=\mathcal{H}([1\ 3\ 1])\) (the elements of \(T_{n}\) are given in function notation in all examples). We have :
\[{}_{M}\operatorname{Stab}(H)=\{[1\ 2\ 3],[3\ 2\ 1],[1\ 3\ 3],[3\ 1\ 1],[1\ 1\ 3],[3\ 3\ 1]\}\]
and subsequently, \(\Gamma(H)=\{[1\ 2\ 3]\times_{H},[3\ 3\ 1]\times_{H}\}\). Notice that, as elements of \(\Gamma(H)\), \([3\ 3\ 1]\times_{H}=[3\ 2\ 1]\times_{H}\) and that the only important thing is the permutations induced by the elements of \({}_{M}\operatorname{Stab}(H)\) on \(\operatorname{Im}H\). Thus, in the case of transformation monoids, the left Schutzenberger group of an \(\mathcal{H}\)-class \(H\) can be represented as a subgroup of \(\mathfrak{S}(\operatorname{Im}H)\). In the same way, the right Schutzenberger groups can be represented as subgroups of \(\mathfrak{S}(\ker H)\). This fact is used to represent the Schutzenberger groups in Section III.

Figure 2: Green’s Lemma
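The Schutzenberger group of Example 7 can also be recovered by brute force, which makes the definition concrete. The sketch below is again illustrative (it has nothing to do with the efficient representation of Section III): it enumerates \(T_{3}\), builds \(H=\mathcal{H}([1\ 3\ 1])\) from its image and kernel, computes \({}_{M}\operatorname{Stab}(H)\) and records the permutations induced on \(\operatorname{Im}H\).

```python
from itertools import product

def compose(f, g):                       # product of Definition 1: fg = f∘g
    return tuple(f[g[i] - 1] for i in range(len(f)))

def image(f):
    return frozenset(f)

def kernel(f):                           # partition of {1,...,n} into fibres
    fibres = {}
    for i, fi in enumerate(f, start=1):
        fibres.setdefault(fi, set()).add(i)
    return frozenset(frozenset(c) for c in fibres.values())

a = (1, 3, 1)
M = [t for t in product((1, 2, 3), repeat=3)]                       # T_3
H = {s for s in M if image(s) == image(a) and kernel(s) == kernel(a)}

stab = [m for m in M if {compose(m, s) for s in H} == H]            # _M Stab(H)
im = sorted(image(a))
gamma = {tuple(m[i - 1] for i in im) for m in stab}                 # Γ(H) on Im a

print(sorted(H))       # [(1, 3, 1), (3, 1, 3)]
print(sorted(gamma))   # [(1, 3), (3, 1)]: the identity and the swap of Im a
```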
Our previous remark on exploiting Schutzenberger groups to get efficient algorithms for computational monoid-theoretic questions is supported by the fact that Schutzenberger groups do not contain any "superfluous information" in the following sense.
**Proposition 8**.: _Let \(H\) be an \(\mathcal{H}\)-class. The natural actions of \(\Gamma(H)\) and \(\Gamma^{\prime}(H)\) on \(H\) are free and transitive._
We reproduce below a proof for Proposition 8 from [8] for the purpose of showcasing the main argument. The argument itself is widely known and we will use it multiple times in the remainder of this paper.
Proof.: Two elements \(h,h^{\prime}\in H\) are in the same \(\mathcal{L}\)-class so there is some \(u\in M\) such that \(uh=h^{\prime}\). By Green's Lemma, this means that \(u\in{}_{M}\operatorname{Stab}(H)\), so \(\Gamma(H)\) acts transitively on \(H\). Suppose that \(uh=h\) for some \(u\in M\). Since \(h,h^{\prime}\) are also in the same \(\mathcal{R}\) class, there is some \(v\) such that \(h^{\prime}=hv\), so \(uh^{\prime}=uhv=hv=h^{\prime}:\) an element of \(\Gamma(H)\) either fixes all points in \(H\) or fixes none. The only element of \(\Gamma(H)\) that fixes all points (and, consequently, the only one that fixes any point) is the identity and thus the action is free. The same arguments apply for \(\Gamma^{\prime}(H)\).
A special case that is interesting to note, and that will be important later, is the case where \(H\) is the \(\mathcal{H}\)-class of an idempotent:
**Definition 9**.: An element \(e\in M\) is _idempotent_ if \(e^{2}=e\). Given an idempotent \(e\), the set \(G_{e}=\{x\in M\,|\,\exists y\in M,xy=yx=e\}\) is called the _maximal subgroup at \(e\)_. One can check that \(G_{e}\) is indeed a group and that \(G_{e}=\mathcal{H}(e)\).
In that case, \(\Gamma(H)\) and \(\Gamma^{\prime}(H)\) can be defined as before, and are canonically isomorphic to \(G_{e}\): the inclusion \(G_{e}\subset{}_{M}\operatorname{Stab}(H)\) naturally induces a map from \(G_{e}\) to \(\Gamma(H)\), and since \(G_{e}\) acts freely and transitively on \(H\), this map must be injective and surjective (and similarly for \(\Gamma^{\prime}(H)\)).
**Example 10**.: Consider, in Example 3, \(e=[1\ 2\ 2]\) and \(H=\mathcal{H}(e)\). \(e\) is an idempotent, and, indeed, \(H=G_{e}\) is group : setting \(t=[2\ 1\ 1]\), we have \(e^{2}=e,t^{2}=e\) and \(et=te=t\). As noted in Example 7 :
\[\Gamma(H)=\mathfrak{S}(\{1,2\}),\qquad\Gamma^{\prime}(H)=\mathfrak{S}(\{\{1 \},\{2,3\}\}).\]
Note that the canonical isomorphism between \(\Gamma(H)\) and \(\mathcal{H}(e)\) is simply given by \(g\in\Gamma(H)\mapsto g\cdot e\in H\).
## 2 Counting fixed points
Consider the problem of counting the number of elements of the set \(\operatorname{Fix}_{G}(h,k)\) where \(G\) is a finite group and \(h,k\in G\). If \(\operatorname{Fix}_{G}(h,k)\) is non-empty, it contains an element \(\gamma\) such that \(h\gamma k=\gamma\), or equivalently \(h=\gamma k^{-1}\gamma^{-1}\). So for any \(g\in\operatorname{Fix}_{G}(h,k)\) we have:
\[hgk=g\Leftrightarrow\gamma k^{-1}\gamma^{-1}gk=g\Leftrightarrow k^{-1}(\gamma^{-1}g)k=\gamma^{-1}g.\]
This means that \(g\in\gamma C_{G}(k)\) where \(C_{G}(k)\) is the centralizer of \(k\) in \(G\). Because the other inclusion is obvious, we get a description of \(|\operatorname{Fix}_{G}(h,k)|\): either \(h\) and \(k^{-1}\) are conjugated in which case there are \(|C_{G}(k)|\) fixed points, or they are not, and there are no fixed points. In the case of a monoid, this reasoning mostly breaks: we crucially used the invertibility property, which monoids lack. The Schutzenberger groups seem to be ideal candidates to get back some of this invertibility. In this section we clarify the role of the Schutzenberger groups for counting fixed points, how to give meaning to "\(h\times_{H}\) and \(\times_{H}k\) are in the same conjugacy class", and how to factorize our previous remark over all the \(\mathcal{H}\)-classes of the same \(\mathcal{J}\)-class.
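For a small group this dichotomy can be checked exhaustively. The sketch below (illustrative only; permutations of \(\{0,\ldots,3\}\) are encoded as tuples) verifies on all pairs \((h,k)\) of \(\mathfrak{S}_{4}\) that \(|\operatorname{Fix}_{G}(h,k)|\) equals \(|C_{G}(k)|\) when \(h\) is conjugate to \(k^{-1}\) and \(0\) otherwise.

```python
from itertools import permutations

def mul(f, g):                        # composition f∘g on {0,...,n-1}
    return tuple(f[g[i]] for i in range(len(f)))

def inv(f):
    out = [0] * len(f)
    for i, fi in enumerate(f):
        out[fi] = i
    return tuple(out)

G = list(permutations(range(4)))      # the symmetric group S_4

def conjugate(h, x):                  # is h = g x g^{-1} for some g in G?
    return any(mul(mul(g, x), inv(g)) == h for g in G)

for h in G:
    for k in G:
        fix = sum(1 for g in G if mul(mul(h, g), k) == g)      # hgk = g
        cent = sum(1 for g in G if mul(g, k) == mul(k, g))     # |C_G(k)|
        expected = cent if conjugate(h, inv(k)) else 0
        assert fix == expected
print("group fixed-point formula verified on all pairs of S_4")
```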
As the bijections between \(\mathcal{L}\) (and \(\mathcal{R}\)) classes will play a major role in the remainder of this section, we introduce the following notations.
**Notation 11**.: Given \(R,R^{\prime}\) two \(\mathcal{R}\)-classes in the same \(\mathcal{J}\)-class, we say that \((\lambda,\lambda^{\prime})\) is a _left Green pair_ with respect to \((R,R^{\prime})\) if:
* \(\lambda R=R^{\prime}\) and \(\lambda^{\prime}R^{\prime}=R\).
* \((\lambda^{\prime}\lambda)\times_{R}=\operatorname{Id}_{R}\) and \((\lambda\lambda^{\prime})\times_{R^{\prime}}=\operatorname{Id}_{R^{\prime}}\)
Similarly, given two \(\mathcal{L}\)-classes \(L,L^{\prime}\) in the same \(\mathcal{J}\)-class, \((\rho,\rho^{\prime})\) is a _right Green pair_ with respect to \((L,L^{\prime})\) if:
* \(L\rho=L^{\prime}\) and \(L^{\prime}\rho^{\prime}=L\).
* \(\times_{L}(\rho\rho^{\prime})=\operatorname{Id}_{L}\) and \(\times_{L^{\prime}}(\rho^{\prime}\rho)=\operatorname{Id}_{L^{\prime}}\)
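Green's Lemma guarantees that Green pairs exist, and in small transformation monoids one can find them by exhaustive search. The following sketch (illustrative; the encoding and the brute-force search are ours) looks for a left Green pair between the two \(\mathcal{R}\)-classes of \(T_{3}\) formed by the rank-2 maps with image \(\{1,2\}\) and image \(\{1,3\}\): it asks that \(\lambda\) maps the first class onto the second, \(\lambda^{\prime}\) maps it back, and the two compositions restrict to the identity on each class.

```python
from itertools import product

def compose(f, g):
    return tuple(f[g[i] - 1] for i in range(len(f)))

M = [t for t in product((1, 2, 3), repeat=3)]                      # T_3
R1 = {s for s in M if frozenset(s) == frozenset({1, 2})}            # image {1,2}
R2 = {s for s in M if frozenset(s) == frozenset({1, 3})}            # image {1,3}

def is_left_green_pair(lam, lam2):
    return ({compose(lam, s) for s in R1} == R2
            and {compose(lam2, s) for s in R2} == R1
            and all(compose(lam2, compose(lam, s)) == s for s in R1)
            and all(compose(lam, compose(lam2, s)) == s for s in R2))

pairs = [(l, l2) for l in M for l2 in M if is_left_green_pair(l, l2)]
print(pairs[0])    # one valid left Green pair (several exist)
```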
Using Green pairs, one can transport the problem of counting fixed points in an arbitrary \(\mathcal{H}\)-class to a reference \(\mathcal{H}\)-class.
**Proposition 12**.: _Let \(H_{1},H_{2}\subset J\) be two \(\mathcal{H}\)-classes contained in the same \(\mathcal{J}\)-class. Let \(\lambda,\lambda^{\prime},\rho,\rho^{\prime}\) such that:_
* \((\lambda,\lambda^{\prime})\) _is a left Green pair with respect to_ \((\mathcal{R}(H_{1}),\mathcal{R}(H_{2}))\)_,_
* \((\rho,\rho^{\prime})\) _is a right Green pair with respect to_ \((\mathcal{L}(H_{1}),\mathcal{L}(H_{2}))\)_._
_Finally, let \((h,k)\in{}_{M}\operatorname{Stab}(\mathcal{R}(H_{2}))\times\operatorname{ Stab}_{M}(\mathcal{L}(H_{2}))\) and define \((h^{\prime},k^{\prime})=(\lambda^{\prime}h\lambda,\rho k\rho^{\prime})\). Then the maps \(x\mapsto\lambda^{\prime}x\rho^{\prime}\) and \(x\mapsto\lambda x\rho\) are reciprocal bijections between the sets \(\operatorname{Fix}_{H_{2}}(h,k)=\{a\in H_{2}\,|\,hak=a\}\) and \(\operatorname{Fix}_{H_{1}}(h^{\prime},k^{\prime})=\{a\in H_{1}\,|\,h^{\prime }ak^{\prime}=a\}\)._
Proof.: First notice that Green's Lemma gives us the existence of \(\lambda,\lambda^{\prime},\rho,\rho^{\prime}\) satisfying the hypotheses we demand, and also gives that \(x\mapsto\lambda^{\prime}x\rho^{\prime}\) and \(x\mapsto\lambda x\rho\) are reciprocal bijections between \(H_{1}\) and \(H_{2}\). Let \(a_{1}\) be an element of \(H_{1}\) and set \(a_{2}=\lambda a_{1}\rho\). Then:
\[ha_{2}k=a_{2}\Leftrightarrow\lambda^{\prime}ha_{2}k\rho^{\prime}=\lambda^{ \prime}a_{2}\rho^{\prime}\Leftrightarrow(\lambda^{\prime}h\lambda)a_{1}(\rho k \rho^{\prime})=a_{1}\Leftrightarrow h^{\prime}a_{1}k^{\prime}=a_{1}\]
so these bijections restrict to \(\operatorname{Fix}_{H_{1}}(h^{\prime},k^{\prime})\) and \(\operatorname{Fix}_{H_{2}}(h,k)\).
Keeping in mind our computational goals, transporting the problem of counting fixed points from \(H_{2}\) to \(H_{1}\) is helpful, as for the price of 4 monoid multiplications, we can use a lot of precomputations specific to a particular \(\mathcal{H}\)-class, avoiding the repetition of multiple similar computations for each \(\mathcal{H}\)-class.
The question is now to determine the fixed points in a single \(\mathcal{H}\)-class, using our previous remark on conjugacy. Let us first clarify the idea of elements of the left and right Schutzenberger groups being in the same conjugacy class.
**Proposition 13**.: _Given and \(\mathcal{H}\)-class \(H\), \(a\in H\) and \(g\in\Gamma(H)\), we define \(\tau_{a}(g)\) as the unique element of \(\Gamma^{\prime}(H)\) such that \(g\cdot a=a\cdot\tau_{a}(g)\). Then \(\tau_{a}:\Gamma(H)\longrightarrow\Gamma^{\prime}(H)\) is an anti-isomorphism2. Moreover \(\tau_{a}\) gives rise to a bijection between the conjugacy classes of \(\Gamma(H)\) and \(\Gamma^{\prime}(H)\) that is independent of the choice of \(a\)._
Footnote 2: Note that some authors equip the right Schützenberger group with reversed composition, and thus obtain an isomorphism instead of anti-isomorphism.
Proof.: The first part is known since [8]. We want to check that for \(a\in H,g\in\Gamma(H)\), the conjugacy class of \(\tau_{a}(g)\) is defined independently of \(a\). Take any \(a,b\in H\). By definition of \(\Gamma(H)\), there exist some \(h\in\Gamma(H)\) such that \(b=h\cdot a\). So :
\[b\cdot\tau_{a}(g)=(h\cdot a)\cdot\tau_{a}(g)=h\cdot(g\cdot a)=hgh^{-1}\cdot(h \cdot a)=b\cdot\tau_{b}(hgh^{-1}).\]
Since \(\Gamma^{\prime}(H)\) acts freely, this means that \(\tau_{a}(g)=\tau_{b}(h)\tau_{b}(g)\tau_{b}(h)^{-1}\) and thus \(\tau_{a}(g)\) is conjugated with \(\tau_{b}(g)\), which proves that the conjugacy class of \(\tau_{a}(g)\) is indeed defined independently of \(a\). Finally, as \(\tau_{a}\) is a group morphism, the image are of two conjugated elements are conjugated, meaning that \(\tau_{a}\) does indeed induces bijection between the conjugacy classes of the left and right Schutzenberger groups, independently of the choice of \(a\).
In the next proposition, we formalize the idea of computing the set of fixed points via a centralizer, as in the group case, but now in the context of a monoid.
**Proposition 14**.: _Let \(H\) be a \(\mathcal{H}\)-class, \(a\in H\) and \((h,k)\in{}_{M}\operatorname{Stab}(\mathcal{R}(H))\times\operatorname{Stab}_{M }(\mathcal{L}(H))\). Then_
\[|\operatorname{Fix}_{H}(h,k)|=\begin{cases}|C_{\Gamma^{\prime}(H)}(\times_{H} k)|&\text{ if }\tau_{a}(h\times_{H})^{-1}\in\overline{\times_{H}k}\\ 0&\text{otherwise}\end{cases}\]
_where \(\overline{\times_{H}k}\) is the conjugacy class of \(\times_{H}k\) in \(\Gamma^{\prime}(H)\) and \(C_{\Gamma^{\prime}(H)}(\times_{H}k)\) is the centralizer in \(\Gamma^{\prime}(H)\) of \(\times_{H}k\)._
Proof.: For simplicity, we commit an abuse of notation by denoting \(h\times_{H}\) as \(h\) and \(\times_{H}k\) as \(k\). Let \(a\) be any element of \(H\).
\[\operatorname{Fix}_{H}(h,k) =\{b\in H\,|\,hbk=b\}\] \[=\{a\cdot g\,|\,g\in\Gamma^{\prime}(H)\text{ and }ha\cdot gk=a \cdot g\}\] \[=\{a\cdot g\,|\,g\in\Gamma^{\prime}(H)\text{ and }a\cdot\tau_{a}(h) gk=a\cdot g\}\] \[=\{a\cdot g\,|\,g\in\Gamma^{\prime}(H)\text{ and }\tau_{a}(h) gk=g\}.\]
The last equality comes from the fact that \(\Gamma^{\prime}(H)\) acts freely, so we can cancel \(a\). Suppose that \(\operatorname{Fix}_{H}(h,k)\) is non-empty and let \(\gamma\in\Gamma^{\prime}(H)\) be such that \(\tau_{a}(h)\gamma k=\gamma\). Then, for any \(g\in\Gamma^{\prime}(H)\):
\[\tau_{a}(h)gk=g\Leftrightarrow g^{-1}\tau_{a}(h)gk=e\Leftrightarrow g^{-1} \gamma k^{-1}\gamma^{-1}gk\ =e\Leftrightarrow[\gamma^{-1}g,k]=e\]
where \([\cdot,\cdot]\) is the commutation bracket. This means that
\[\operatorname{Fix}_{H}(h,k)=\{a\cdot g\,|\,g\in\gamma C_{\Gamma^{\prime}(H)}(k )\}.\]
Note that because, again, \(\Gamma^{\prime}(H)\) acts freely, \(\operatorname{Fix}_{H}(h,k)\) has the same cardinality as \(C_{\Gamma^{\prime}(H)}(k)\) and that, by Proposition 13, this is independent of the choice of \(a\), which proves the result.
**Example 15**.: Consider \(a=[1\ 2\ 2\ 3]\in T_{4}\) and \(H=\mathcal{H}(a)\). We have \(\operatorname{Im}a=\{1,2,3\}\) and \(\ker a=\{\{1\},\{2,3\},\{4\}\}\). Notice that \(H\) is not a group since \(a^{2}=[1\ 2\ 2\ 2]\notin H\). Considering the Schutzenberger groups as symmetric groups on the image and kernel common to all elements of \(H\) as in Example 7, we have \(\Gamma(H)=\mathfrak{S}(\operatorname{Im}a)\) and \(\Gamma^{\prime}(H)=\mathfrak{S}(\ker a)\).
Let us first check for fixed points under the action of \(h=[1\ 2\ 3\ 4]\) on the left and \(k=[2\ 1\ 1\ 4]\) on the right. Seen as an element of \(\Gamma(H)\), \(h\) corresponds to \(\operatorname{Id}_{\operatorname{Im}a}\) and \(k\) corresponds to \((\{2,3\}\ \{1\})\) in \(\Gamma^{\prime}(H)\). Since we have \(\tau_{a}(h)=\operatorname{Id}_{\ker a}\), it follows that \(|\operatorname{Fix}_{H}(h,k)|=0\).
If we now take \(h\) to be \([1\ 3\ 2\ 4]\), the corresponding element in \(\Gamma(H)\) is \((2\ 3)\) and \(\tau_{a}(h)=(\{2,3\}\ \{4\})\). Since \((\{2,3\}\ \{4\})\) and \((\{2,3\}\ \{1\})\) are conjugate in \(\mathfrak{S}(\ker a)\), the set of fixed points is non-empty. The relevant centralizer has cardinality 2 and one can indeed check that \([2\ 3\ 3\ 1]\) and \([3\ 2\ 2\ 1]\) are the only fixed points in \(H\).
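The counts of Example 15 can be confirmed by direct enumeration. The sketch below (an independent brute-force check in the tuple encoding, not the algorithm of Section III) lists the \(\mathcal{H}\)-class of \(a=[1\ 2\ 2\ 3]\) in \(T_{4}\) and counts the solutions of \(hsk=s\) for the two choices of \(h\) above.

```python
from itertools import product

def compose(f, g):
    return tuple(f[g[i] - 1] for i in range(len(f)))

def image(f):
    return frozenset(f)

def kernel(f):
    fibres = {}
    for i, fi in enumerate(f, start=1):
        fibres.setdefault(fi, set()).add(i)
    return frozenset(frozenset(c) for c in fibres.values())

a = (1, 2, 2, 3)
T4 = [t for t in product((1, 2, 3, 4), repeat=4)]
H = [s for s in T4 if image(s) == image(a) and kernel(s) == kernel(a)]

k = (2, 1, 1, 4)
for h in [(1, 2, 3, 4), (1, 3, 2, 4)]:
    fixed = [s for s in H if compose(h, compose(s, k)) == s]    # hsk = s
    print(h, len(fixed), fixed)
# (1, 2, 3, 4) -> 0 fixed points
# (1, 3, 2, 4) -> 2 fixed points: (2, 3, 3, 1) and (3, 2, 2, 1)
```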
Putting together the previous results, we get the following Corollary on the cardinality of \(\operatorname{Fix}_{J}(h,k)\).
**Corollary 16**.: _Let \(J\) be a \(\mathcal{J}\)-class, \(a\) any element of \(J\) and denote by \(H_{0}=\mathcal{H}(a),R_{0}=\mathcal{R}(a),L_{0}=\mathcal{L}(a)\). Let \(h,k\) be any elements of \(M\). We denote by:_
* \((\lambda_{R},\lambda^{\prime}_{R})\) _a left Green pair with respect to_ \((R_{0},R)\) _for each_ \(\mathcal{R}\)_-class_ \(R\subset J\)_,_
* \((\rho_{L},\rho^{\prime}_{L})\) _a right Green pair with respect to_ \((L_{0},L)\) _for each_ \(\mathcal{L}\)_-class_ \(L\subset J\)
* \(S_{\mathcal{R}}(h)=\{R\subset J\,|\,R\) _is an_ \(\mathcal{R}\)_-class and_ \(hR=R\}\)_,_
* \(S_{\mathcal{L}}(k)=\{L\subset J\,|\,L\) _is an_ \(\mathcal{L}\)_-class and_ \(Lk=L\}\)_
_Denoting the set of conjugacy classes of \(\Gamma^{\prime}(H_{0})\) as \(C\), we further define two vectors:_
* \(r_{J}(h)=(|C_{\Gamma^{\prime}(H_{0})}(g)|\cdot|\{R\in S_{\mathcal{R}}(h)\,|\,\tau_{ a}(\lambda^{\prime}_{R}h\lambda_{R})\in\bar{g}\}|)_{\bar{g}\in C}\)_,_
* \(l_{J}(k)=(|\{L\in S_{\mathcal{L}}(k)\,|\,\rho^{\prime}_{L}k\rho_{L}\in\bar{g} \}|)_{\bar{g}\in C}\)_._
_Then the cardinality of \(\operatorname{Fix}_{J}(h,k)\) is the dot product of \(r_{J}(h)\) with \(l_{J}(k)\)._
## II Modules: character table and Cartan matrix
In the remainder of this paper, \(\mathbf{k}\) is a perfect field of null characteristic and \(M\) is still a finite monoid. Furthermore, we suppose that \(\mathbf{k}\) is "big enough", meaning that if not algebraically closed, at least a splitting field for the characteristic polynomials of the elements of \(M\) seen as linear maps on \(\mathbf{k}M\).
We will discuss the representation theory of a monoid \(M\) over \(\mathbf{k}\) using the language of modules. That is, a representation of \(M\) over \(\mathbf{k}\) will be a \(\mathbf{k}\)-vector space \(V\) equipped with a linear action of the algebra \(\mathbf{k}M\). As is customary, whenever the action is on the left we will say that \(V\) is a \(\mathbf{k}M-\operatorname{mod}\) and a \(\operatorname{mod}-\mathbf{k}M\) if the action is on the right. If \(M\), \(M^{\prime}\) are two finite, possibly different monoids, a \(\mathbf{k}M-\operatorname{mod}-\mathbf{k}M^{\prime}\) is simply simultaneously a \(\mathbf{k}M-\operatorname{mod}\) and a \(\operatorname{mod}-\mathbf{k}M^{\prime}\). For a monoid \(M\), we denote by \(M^{op}\) the _opposite_ monoid, with multiplication defined by \(m\cdot_{M^{op}}m^{\prime}=m^{\prime}m\). We will use liberally the fact that a \(\mathbf{k}M-\operatorname{mod}-\mathbf{k}M^{\prime}\) is naturally a \(\mathbf{k}M\otimes\mathbf{k}M^{\prime op}-\operatorname{mod}\) and a \(\mathbf{k}(M\times M^{\prime op})-\operatorname{mod}\), and reciprocally. Throughout this paper, we assume that the modules are finite dimensional as vector spaces over \(\mathbf{k}\). Because of this, the Jordan-Holder Theorem applies and the set of composition factors counted with multiplicities of a module is independent of the choice of a composition series. If \(V\) is a \(\mathbf{k}M\)-module and \(S\) is a simple \(\mathbf{k}M\)-module, we denote by \([V:S]\) the multiplicity of \(S\) as a composition factor of \(V\).
In this section, we deal with monoid representation theory, with the goal in mind to compute the character table of \(M\). Using the Clifford-Munn-Ponizovskii Theorem, this can largely be reduced to group representation theory. Stated differently, the representation theory of a monoid \(M\) is an extension of the representation theory of certain groups embedded in \(M\). The groups in question are precisely the groups of Definition 9. In the first part of this section, we use this fact to find a description of an \(\mathcal{L}\)-class containing an idempotent \(e\) quotiented by its radical as a product of simple \(\mathbf{k}G_{e}\)-modules and simple \(\mathbf{k}M\)-modules. In the second part, we translate this decomposition in terms of characters, which gives us the formula we seek. Finally, we recall and discuss the formula for the Cartan matrix from Thiery [4].
## 1 On modules
Note that if we choose an element \(a\in M\) and denote by \(L\) its \(\mathcal{L}\)-class and \(H\) its \(\mathcal{H}\)-class, we can equip \(\mathbf{k}L\) with a \(\mathbf{k}M-\operatorname{mod}-\mathbf{k}\Gamma^{\prime}(H)\) structure. \(\mathbf{k}L\) is already a \(\operatorname{mod}-\mathbf{k}\Gamma^{\prime}(H)\) by definition of \(\Gamma^{\prime}(H)\). We can also make it into a \(\mathbf{k}M-\operatorname{mod}\) by setting, for every \(m\in M\) and \(l\in L\):
\[m\cdot l=\begin{cases}ml\text{ if }ml\in L\\ 0\text{ otherwise}\end{cases}.\]
This is well defined, as \(ml\notin L\) implies that \(l\,>_{\mathcal{L}}ml\) and so for every \(m^{\prime}\in M\), \(l>_{\mathcal{L}}m^{\prime}ml\notin L:\) once fallen out of \(L\), we cannot climb back in.
We have previously stated that the representation theory of monoids is an extension of the representation theory of some subgroups. This is mainly expressed using the two following functors.
**Definition 17**.: Let \(e\in M\) be an idempotent, \(\mathcal{L}(e)\) its \(\mathcal{L}\)-class, \(G_{e}\) the associated maximal subgroup. We define the two following maps :
\[\operatorname{Ind}_{G_{e}}^{M}: \begin{cases}\mathbf{k}G_{e}\mathrm{-mod}\longrightarrow\mathbf{ k}M\mathrm{-mod}\\ V\longmapsto\mathbf{k}\mathcal{L}(e)\otimes_{\mathbf{k}G_{e}}V\\ \end{cases}\] \[N_{e}: \begin{cases}\mathbf{k}M\mathrm{-mod}\longrightarrow\mathbf{k}M \mathrm{-mod}\\ V\longmapsto\{v\in V\,|\,eMv=0\}\end{cases}.\]
The idempotents and their maximal subgroups play a central role in the theory. One can show (see for instance [6, Proposition 1.14]) that if \(e,f\) are two idempotents in the same \(\mathcal{J}\)-class, there are some \(x,x^{\prime}\in M\) such that \(xx^{\prime}=e\) and \(x^{\prime}x=f\) and that \(G_{e}\cong G_{f}\). A \(\mathcal{J}\)-class containing an idempotent is called a _regular_\(\mathcal{J}\)-class.
We are almost ready to state the Clifford-Munn-Ponizovskii theorem, which is the central piece connecting group and monoid representation theory. We will need the notion of _apex_ of a \(\mathbf{k}M\)-module. A proof of the Clifford-Munn-Ponizovskii Theorem can be found in [9, Section 5.2].
**Definition 18**.: Let \(V\) be a \(\mathbf{k}M-\operatorname{mod}\), we denote its annihilator in \(M\) by \(\operatorname{Ann}_{M}(V)=\{m\in M\,|\,mV=\{0\}\}\). This is clearly a two-sided ideal of \(M\) and as such is a union of \(\mathcal{J}\)-classes. A regular \(\mathcal{J}\)-class \(J\) is said to be the _apex_ of \(V\) if \(\operatorname{Ann}_{M}(V)=I_{J}\) where \(I_{J}=\{s\in M\,|\,J\not\leq_{\mathcal{J}}\mathcal{J}(s)\}\). If \(e\in J\) is an idempotent, we also say that \(V\) has apex \(e\).
**Theoreme 19** (Clifford-Munn-Ponizovskii).: _Let \(M\) be a finite monoid, \(e\in M\) an idempotent and \(\mathbf{k}\) be a field._
1. _There is a bijection between isomorphism classes of simple_ \(M\)_-modules with apex_ \(e\) _and isomorphism classes of simple_ \(G_{e}\)_-modules given by :_ \[V\longmapsto V^{\#}=\operatorname{Ind}_{G_{e}}^{M}(V)/\operatorname{rad}( \operatorname{Ind}_{G_{e}}^{M}(V)).\] _The reciprocal bijection is given by_ \(S\longmapsto eS\)
2. \(\mathrm{rad}(\mathrm{Ind}_{G_{e}}^{M}(V))=N_{e}(\mathrm{rad}(\mathrm{Ind}_{G_{e}}^{ M}(V)))\)__
3. _Every simple_ \(M\)_-module has an apex._
4. _Every composition factor of_ \(\mathrm{Ind}_{G_{e}}^{M}(V)\) _with the exception of_ \(V^{\#}\) _has an apex strictly_ \(\mathcal{J}\)_-greater than_ \(e\)_. Moreover,_ \(V^{\#}\) _has apex_ \(e\) _and is a factor of multiplicity one._
This allows us to give the following description of the \(\mathcal{L}\)-class of an idempotent \(e\).
**Proposition 20**.: _Let \(e\in M\) be an idempotent and \(G_{e}\) be the maximal subgroup at \(e\). Let \(\mathrm{Irr}_{e}\) be a set of representatives of the isomorphism classes of simple \(\mathbf{k}G_{e}\)-modules. Then:_
\[\mathbf{k}\mathcal{L}(e)=\bigoplus_{V\in\mathrm{Irr}_{e}}\mathrm{Ind}_{G_{e}}^ {M}(V)\otimes_{\mathbf{k}}V^{*}\]
_where \(V^{*}\) is the dual of \(V\)._
Proof.: By definition, \(\mathrm{Ind}_{G_{e}}^{M}(V)=\mathbf{k}\mathcal{L}(e)\otimes_{\mathbf{k}G_{e}}V\). Now, since direct sum and tensor product over a ring with identity commute:
\[\bigoplus_{V\in\mathrm{Irr}_{e}}\mathrm{Ind}_{G_{e}}^{M}(V)\otimes V^{*}= \bigoplus_{V\in\mathrm{Irr}_{e}}\mathbf{k}\mathcal{L}(e)\otimes_{G_{e}}V \otimes V^{*}=\mathbf{k}\mathcal{L}(e)\otimes_{G_{e}}\left(\bigoplus_{V\in \mathrm{Irr}_{e}}V\otimes V^{*}\right)\]
Because \(\mathbf{k}\) is of null characteristic, \(\mathbf{k}G_{e}\) is semi-simple. By the Wedderburn-Artin theorem, \(\mathbf{k}G_{e}=\bigoplus_{V\in\mathrm{Irr}_{e}}V\otimes V^{*}\) so :
\[\bigoplus_{V\in\mathrm{Irr}_{e}}\mathrm{Ind}_{G_{e}}^{M}(V)\otimes V^{*}= \mathbf{k}\mathcal{L}(e)\otimes_{\mathbf{k}G_{e}}\mathbf{k}G_{e}=\mathcal{L}(e)\]
since \(\mathbf{k}G_{e}\) is a ring with identity.
Note that this relates three kinds of modules: the simple \(\mathbf{k}G_{e}\)-modules, which are well understood; \(\mathbf{k}\mathcal{L}(e)\), which is also well understood because it is a combinatorial module3; and finally the modules \(\mathrm{Ind}_{G_{e}}^{M}(V)\), which contain, in a sense, the simple \(\mathbf{k}M\)-modules that we are after. According to the Clifford-Munn-Ponizovskii Theorem, we still need to remove the radical of each \(\mathrm{Ind}_{G_{e}}^{M}(V)\) factor. Proposition 23 puts the radical in a form similar to Theorem 19, while Proposition 24 does exactly this. Lemma 21 and its Corollary are technical results on radicals used in the proof of Proposition 23.
Footnote 3: That is, the product of an element of the basis of the module by an element of \(M\) is either another element of the basis or \(0\).
**Lemma 21**.: _Let \(A,B\) be two finite dimensional algebras over a perfect field \(\mathbf{k}\). Then:_
\[\mathrm{rad}(A\otimes B)=\mathrm{rad}(A)\otimes B+A\otimes\mathrm{rad}(B).\]
While we have not been able to find a source for this claim, it seems to be folklore in the algebra representation community. For the sake of completeness, we reproduce a proof communicated to us by Pr. Pierre-Guy Plamondon4.
Footnote 4: Pr. Pierre-Guy Plamondon, Laboratoire de Mathématiques de Versailles, Université Paris-Saclay, website.
Proof.: Because \(\mathbf{k}\) is perfect, Wedderburn's Principal Theorem applies and we get the decompositions \(A=A^{\prime}\oplus\operatorname{rad}(A),B=B^{\prime}\oplus\operatorname{rad}(B)\), with \(A^{\prime}\) and \(B^{\prime}\) semi-simple algebras. To prove the result, we show that \(\operatorname{rad}(A)\otimes B+A\otimes\operatorname{rad}(B)\) is a nil ideal and that
\[(A\otimes B)\,/\,\left(\operatorname{rad}(A)\otimes B+A\otimes\operatorname{rad}(B)\right)\]
is semi-simple. Let us first show that the quotient is semi-simple. We have:
\[A\otimes B =(A^{\prime}\oplus\operatorname{rad}(A))\otimes(B^{\prime}\oplus \operatorname{rad}(B))\] \[=A^{\prime}\otimes B^{\prime}\oplus A^{\prime}\otimes \operatorname{rad}(B)\oplus\operatorname{rad}(A)\otimes B^{\prime}\oplus \operatorname{rad}(A)\otimes\operatorname{rad}(B).\]
On the other hand, the same decompositions give us
\[A\otimes\operatorname{rad}(B)\oplus\operatorname{rad}(A)\otimes B=A^{\prime} \otimes\operatorname{rad}(B)\oplus\operatorname{rad}(A)\otimes B^{\prime} \oplus\operatorname{rad}(A)\otimes\operatorname{rad}(B).\]
Finally,
\[(A\otimes B)\,/\,\left(A\otimes\operatorname{rad}(B)\oplus\operatorname{rad}(A)\otimes B\right)=A^{\prime}\otimes B^{\prime}\]
which, since \(A^{\prime},B^{\prime}\) are semi-simple, is also semi-simple. \(A\otimes\operatorname{rad}(B)\oplus\operatorname{rad}(A)\otimes B\) is also nil, because \(\operatorname{rad}(B)\) and \(\operatorname{rad}(A)\) are, so, indeed, \(\operatorname{rad}(A\otimes B)=A\otimes\operatorname{rad}(B)\oplus \operatorname{rad}(A)\otimes B\).
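As a small sanity check (this example is ours and not taken from the literature), take \(A=B=\mathbf{k}[x]/(x^{2})\). Then \(\operatorname{rad}(A)=(x)\), \(A\otimes B\cong\mathbf{k}[x,y]/(x^{2},y^{2})\), and

\[\operatorname{rad}(A)\otimes B+A\otimes\operatorname{rad}(B)=(x,y),\]

which is indeed the radical of \(A\otimes B\): the ideal \((x,y)\) is nilpotent (its cube is zero) and the quotient by it is \(\mathbf{k}\), which is semi-simple.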
From Lemma 21, we get the following Corollary by recalling that if \(V\) is an \(A\)-module, then \(\operatorname{rad}_{A}(V)=\operatorname{rad}(A)\cdot V\).
**Corollary 22**.: _Let \(A,B\) be two finite dimensional unitary algebras over a perfect field. If \(V_{A}\otimes V_{B}\) is an \(A-\operatorname{mod}-B\) (or equivalently an \(A\otimes B^{op}-\operatorname{mod}\)), then_

\[\operatorname{rad}_{A\otimes B^{op}}(V_{A}\otimes V_{B})=\operatorname{rad}_{A}(V_{A})\otimes V_{B}+V_{A}\otimes\operatorname{rad}_{B}(V_{B}).\]
This allows us to identify the radical of \(\mathbf{k}\mathcal{L}(e)\).
**Proposition 23**.: _Let \(M\) be a finite monoid, \(e\in M\) an idempotent, \(G_{e}\) the maximal subgroup at \(e\) and \(\mathbf{k}\) be a perfect field. Then:_
\[\operatorname{rad}_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}(\mathbf{k} \mathcal{L}(e))=N_{e}(\mathbf{k}\mathcal{L}(e)).\]
Proof.: Using Lemma 21, for \(V\) a simple \(G_{e}\)-module, we have that :
\[\operatorname{rad}_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}( \operatorname{Ind}_{G_{e}}^{M}(V)\otimes_{\mathbf{k}}V^{*})\] \[= \operatorname{rad}_{\mathbf{k}M}\operatorname{Ind}_{G_{e}}^{M}(V) \otimes V^{*}+\operatorname{Ind}_{G_{e}}^{M}(V)\otimes\operatorname{rad}_{ \mathbf{k}G_{e}^{op}}(V^{*})\] \[\stackrel{{(1)}}{{=}} \operatorname{rad}_{\mathbf{k}M}\operatorname{Ind}_{G_{e}}^{M}(V) \otimes V^{*}\] \[\stackrel{{(2)}}{{=}} N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V))\otimes V^{*}\]
where equality (1) comes from the simplicity of \(V^{*}\) as a \(\mathbf{k}G_{e}^{op}\)-module and (2) is the second point of Theorem 19.
Since taking radicals commutes with direct sums, denoting by \(\operatorname{Irr}_{e}\) a set of representatives of the isomorphism classes of simple \(\mathbf{k}G_{e}\)-modules, we know that:
\[\operatorname{rad}_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}(\mathcal{L}(e))= \bigoplus_{V\in\operatorname{Irr}_{e}}N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V) )\otimes V^{*}.\]
It remains to be seen why
\[\bigoplus_{V\in\operatorname{Irr}_{e}}N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V) )\otimes V^{*}=N_{e}(\mathbf{k}\mathcal{L}(e)).\]
It is clear that the direct sum on the left is a subset of the set on the right. For the other inclusion, we see that if \(V,V^{\prime}\) are \(\mathbf{k}M\)-modules, \(N_{e}(V\oplus V^{\prime})=N_{e}(V)\oplus N_{e}(V^{\prime})\). Given Proposition 20, it is enough to show that \(N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V))\otimes V^{*}=N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V)\otimes V^{*})\). Let \(x\in\operatorname{Ind}_{G_{e}}^{M}(V)\otimes V^{*}\) be such that for every \(m\in M\), \(emx=0\). \(x\) can be written as \(\sum_{i}(\sum_{j}x_{i,j}b_{j})\otimes b_{i}^{\prime}\) where \(\{b_{j}\}_{j}\) is a basis of \(\operatorname{Ind}_{G_{e}}^{M}(V)\) and \(\{b_{i}^{\prime}\}_{i}\) is a basis of \(V^{*}\). For every \(m\in M\), we have:
\[em\cdot x=em\cdot\sum_{i}(\sum_{j}x_{i,j}b_{j})\otimes b_{i}^{\prime}=\sum_{i} (em\cdot\sum_{j}x_{i,j}b_{j})\otimes b_{i}^{\prime}=0\]
that is, for every \(b_{i}^{\prime}\) we get \(em\cdot\sum_{j}x_{i,j}b_{j}=0\) so \(\sum_{j}x_{i,j}b_{j}\in N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V))\) which means \(x\in N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V))\otimes V^{*}\).
**Proposition 24**.: _Let \(e\in M\) be an idempotent and \(G_{e}\) be the maximal subgroup at \(e\). Let \(\operatorname{Irr}_{e}\) be a set of representatives of the isomorphism classes of simple \(\mathbf{k}G_{e}\)-modules. Then:_
\[\mathbf{k}\mathcal{L}(e)/\operatorname{rad}_{\mathbf{k}M\otimes\mathbf{k}G_{e }^{op}}(\mathbf{k}\mathcal{L}(e))\cong\bigoplus_{V\in\operatorname{Irr}_{e}}V ^{\#}\otimes V^{*}.\]
Proof.: From Proposition 23, we have a decomposition of \(\operatorname{rad}_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}(\mathbf{k}\mathcal{L}(e))\) as a direct sum adapted to the decomposition of \(\mathbf{k}\mathcal{L}(e)\) as \(\bigoplus_{V\in\operatorname{Irr}_{e}}\operatorname{Ind}_{G_{e}}^{M}(V)\otimes_{\mathbf{k}}V^{*}\). So:

\[\mathbf{k}\mathcal{L}(e)/\operatorname{rad}_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}(\mathbf{k}\mathcal{L}(e))\cong\bigoplus_{V\in\operatorname{Irr}_{e}}(\operatorname{Ind}_{G_{e}}^{M}(V)\otimes_{\mathbf{k}}V^{*})/(N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V))\otimes V^{*}).\]
From Theorem 19, we know that
\[0\longrightarrow N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V))\longrightarrow \operatorname{Ind}_{G_{e}}^{M}(V)\longrightarrow V^{\#}\longrightarrow 0\]

is a short exact sequence. Since \(N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V))\otimes V^{*}\) is a submodule of \(\operatorname{Ind}_{G_{e}}^{M}(V)\otimes V^{*}\) and because tensoring with \(V^{*}\) over the field \(\mathbf{k}\) is exact, we have a short exact sequence:

\[0\longrightarrow N_{e}(\operatorname{Ind}_{G_{e}}^{M}(V))\otimes V^{*} \longrightarrow\operatorname{Ind}_{G_{e}}^{M}(V)\otimes V^{*}\longrightarrow V ^{\#}\otimes V^{*}\longrightarrow 0\]
which proves the result.
## On characters
One of the major features of the finite group representation theory is the fact that all the information on a representation can be summarized in its _character_. This (partially) carries over to monoid representation theory, as we shall see in this section where we reformulate the results of the previous section in terms of characters.
**Definition 25**.: If \(V\) is a finite dimensional \(\mathbf{k}M\)-module, its _character_ is the map from \(M\) to \(\mathbf{k}\) defined by \(\chi^{V}_{\mathbf{k}M}:m\longmapsto\operatorname{Tr}(v\mapsto m\cdot v)\).
We recall the following well-known facts about characters. Proofs for facts 2 and 3 are respectively (ii) and (iii) of [9, Proposition 7.12]5.
Footnote 5: Note that for Fact 3, our reference deals only with the case \(M=M^{\prime}\), but the proof is the same.
**Proposition 26**.:
1. _Let_ \(V\) _be a_ \(\mathbf{k}M\)_-module. We have_ \(\chi^{V}_{\mathbf{k}M}=\chi^{V^{*}}_{\mathbf{k}M^{op}}\)_._
2. _Consider the short exact sequence of_ \(\mathbf{k}M\)_-modules :_ \[0\longrightarrow A\longrightarrow B\longrightarrow B/A\longrightarrow 0.\] _Then_ \(\chi^{B/A}_{\mathbf{k}M}=\chi^{B}_{\mathbf{k}M}-\chi^{A}_{\mathbf{k}M}\)_._
3. _Consider_ \(M,M^{\prime}\) _two finite monoids,_ \(V\) \(a\) \(\mathbf{k}M\)_-module and_ \(W\) \(a\) \(\mathbf{k}M^{\prime op}\)_-module. Then_ \(\chi^{V\otimes W}_{\mathbf{k}M\otimes\mathbf{k}M^{\prime}}=\chi^{V}_{\mathbf{ k}M}\chi^{W}_{\mathbf{k}M^{\prime}}\)_._
The previous properties are simply extensions of similar properties on groups, and their proofs are similar. From groups, we also keep in the case of monoids the linear independence of irreducible characters (see [9, Theorem 7.7] for reference):
**Proposition 27**.: _The irreducible characters \(\{\chi^{S}_{\mathbf{k}M}\mid S\) is a simple \(\mathbf{k}M-\operatorname{mod}\}\) are linearly independent as \(\mathbf{k}\) valued functions._
This, together with the second point of Proposition 26, has a nice consequence. As we are interested in finite dimensional modules over finite monoids, those modules have a composition series. Suppose that a \(\mathbf{k}M\)-module \(V\) has \(S\) as a composition factor with multiplicity \([V:S]\) for any simple \(\mathbf{k}M\)-module \(S\). Then:
\[\chi^{V}_{\mathbf{k}M}=\sum_{S}[V:S]\chi^{S}_{\mathbf{k}M}.\]
In that way, since characters of the simple modules are linearly independent, the character of a module can be seen as a record of its composition factors.
The question of where to compute characters is worth asking: in the case of groups, one needs only to compute the character for a transversal of conjugacy classes to get its value everywhere. The case of monoids was described for the first time by McAlister in [10].
**Definition 28**.: We say that two elements \(m,m^{\prime}\) in \(M\) are in the same _generalized conjugacy class_ or _character equivalency class_ if for every \(\mathbf{k}M\)-module \(V\), \(\chi_{M}^{V}(m)=\chi_{M}^{V}(m^{\prime})\). We denote by \(C_{M}\) the set of generalized conjugacy classes.
**Proposition 29**.: _([10, Proposition 2.5]) Let \(\mathcal{E}=\{e_{1},\ldots,e_{n}\}\) be idempotent representatives of the regular \(\mathcal{J}\)-classes of \(M\) and for each \(e_{i}\) let \(\mathcal{C}_{i}=\{c_{i,1},\ldots,c_{i,m_{i}}\}\) be representatives of the conjugacy classes of \(G_{e_{i}}\). Then the set \(\mathcal{C}_{M}=\bigcup_{e_{i}\in\mathcal{E}}\mathcal{C}_{i}\) is a set of representatives of character equivalency classes of \(M\)._
We can now recall the definition of the character table of a monoid.
**Definition 30**.: Let \(\mathrm{Irr}_{M}\) be the set of isomorphism classes of simple \(\mathbf{k}M\)-modules and \(C_{M}\) as in definition 28. The _character table_ of \(M\) over \(\mathbf{k}\) is the (square) matrix defined by :
\[X(M)=(\chi_{\mathbf{k}M}^{V}(m))_{V\in\mathrm{Irr}_{M},m\in C_{M}}.\]
Moreover, if \(e\in M\) is an idempotent, we define \(X_{e}(M)\) as the matrix obtained by extracting from \(X(M)\) only the rows corresponding to simple modules with apex \(e\).
Finally, we can apply the language of characters to Proposition 24, which yields a formula for computing the character table of \(M\) over \(\mathbf{k}\) given the character tables of the groups \(G_{e}\) over \(\mathbf{k}\).
**Proposition 31**.: _Let \(e\in M\) be an idempotent, \(G_{e}\) be the maximal subgroup at \(e\). We have the formula for \(X_{e}(M)\) :_
\[X_{e}(M)={}^{t}X(G_{e})^{-1}\cdot\left(\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e} ^{op}}^{\mathbf{k}\mathcal{L}(e)}(m,g)-\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e} ^{op}}^{N_{e}(\mathbf{k}\mathcal{L}(e))}(m,g)\right)_{g\in C_{G_{e}},m\in C_{M}}\]
_where the dot is the matrix product._
Proof.: First, because of Proposition 26-2, we have:

\[\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}^{\mathbf{k}\mathcal{L}(e)/N_{e}(\mathbf{k}\mathcal{L}(e))}(m,g)=\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}^{\mathbf{k}\mathcal{L}(e)}(m,g)-\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}^{N_{e}(\mathbf{k}\mathcal{L}(e))}(m,g)\]
Then, from Proposition 24, we know that:
\[\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}^{\mathbf{k}\mathcal{L}(e)/N_{e}(\mathbf{k}\mathcal{L}(e))}(m,g) =\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}^{\bigoplus_{V\in\mathrm{Irr}_{e}}V^{\#}\otimes V^{*}}(m,g)\] \[=\sum_{V\in\mathrm{Irr}_{e}}\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}^{V^{\#}\otimes V^{*}}(m,g)\] \[=\sum_{V\in\mathrm{Irr}_{e}}\chi_{\mathbf{k}M}^{V^{\#}}(m)\,\chi_{\mathbf{k}G_{e}}^{V}(g)\]
This last sum is clearly the dot product between the column of \(X(G_{e})\) indexed by \(g\) and the column of \(X_{e}(M)\) indexed by \(m\). That is, the coefficient in position \((g,m)\) of \(\chi_{M-G_{e}}^{\mathcal{L}(e)/\operatorname{rad}(\mathcal{L}(e))}\) is equal to the coefficient in position \((g,m)\) of \({}^{t}X(G_{e})\cdot X_{e}(M)\), which, together with Proposition 26(ii), proves the equality.
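To make the bookkeeping of Proposition 31 concrete, here is a minimal sketch of the final matrix computation, in Python with SymPy (exact arithmetic). The numerical matrices are placeholders standing in for the character table of \(G_{e}\) and for the bicharacters produced later by Algorithm 38 and by the radical computation of Section III; they do not come from an actual monoid.

```python
from sympy import Matrix

# Character table of the maximal subgroup G_e: rows are simple kG_e-modules,
# columns are conjugacy class representatives of G_e (placeholder values).
X_Ge = Matrix([[1,  1],
               [1, -1]])

# Bicharacters of kL(e) and of N_e(kL(e)) as kM-mod-kG_e, with rows indexed
# by g in C_{G_e} and columns by m in C_M (placeholder values).
chi_L = Matrix([[3, 1, 2],
                [1, 1, 0]])
chi_N = Matrix([[1, 0, 1],
                [1, 0, 0]])

# Proposition 31: X_e(M) = (tX(G_e))^{-1} . (chi_L - chi_N).
X_e = X_Ge.T.inv() * (chi_L - chi_N)
print(X_e)  # rows: simple kM-modules with apex e, columns: C_M
```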
## One step beyond: The Cartan matrix
We are, at last, in a position to state Thiery's formula for the Cartan matrix. Without getting into the specific details, the Cartan matrix can be seen as a measure of how "not semi-simple" the algebra of the monoid is. We use a non-standard definition of the Cartan matrix, first given in [4, Definition 2.6]. A formal proof that this is equivalent to the usual definition can be found in [9, Corollary 7.28].
**Definition 32**.: Let \(\{S_{1},\ldots,S_{n}\}\) be a set of representatives of the isomorphism classes of simple \(\mathbf{k}M\)-modules. The simple \(\mathbf{k}M\otimes\mathbf{k}M^{op}\) modules are the \(S_{i}\otimes S_{j}^{*}\) for all \(i,j\in\llbracket 1,n\rrbracket\). Denote by \([\mathbf{k}M:S_{i}\otimes S_{j}^{*}]\) the multiplicity of \(S_{i}\otimes S_{j}^{*}\) as a composition factor of \(\mathbf{k}M\).
The Cartan matrix of \(\mathbf{k}M\) is defined by:
\[C(\mathbf{k}M)=([\mathbf{k}M:S_{i}\otimes S_{j}^{*}])_{i,j}\]
In other words, the Cartan matrix is a recording of the multiplicities of the composition factors of \(\mathbf{k}M\) as a \(\mathbf{k}M\otimes\mathbf{k}M^{op}\)-module. But so is its character! The difference is that the character of \(\mathbf{k}M\), as we compute it, is expressed in the basis of the character equivalency classes of \(M\times M^{op}\), while the Cartan matrix is expressed directly in the basis of the simple modules. The basis change between the two is precisely given by the character table, and hence we have Thiery's formula for the Cartan matrix.
**Proposition 33**.: _The Cartan matrix is given by the formula:_
\[C(\mathbf{k}M)={}^{t}X_{M}^{-1}BX_{M}^{-1}\]
_where \(B=(|\{s\in M\,|\,msm^{\prime}=s\}|)_{m,m^{\prime}\in C_{M}}\)._
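In practice, once the character table and the fixed-point matrix are known, applying the formula is a one-liner. Here is a minimal sketch (Python with SymPy, exact arithmetic); the matrices \(X_{M}\) and \(B\) below are placeholders, not data from an actual monoid.

```python
from sympy import Matrix

# Character table X(M) (rows: simple kM-modules, columns: C_M) -- placeholder.
X_M = Matrix([[1,  1, 1],
              [1, -1, 1],
              [1,  0, 0]])

# Fixed-point matrix with B[m, m'] = |{s in M : m s m' = s}| -- placeholder.
B = Matrix([[6, 2, 3],
            [2, 2, 1],
            [3, 1, 3]])

# Proposition 33 (Thiery's formula): C(kM) = (tX_M)^{-1} B (X_M)^{-1}.
C = X_M.T.inv() * B * X_M.inv()
print(C)  # for genuine input data this has non-negative integer entries
```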
## III Some explicit computations
In this section, \(M\) is a fixed submonoid of \(T_{n}\). We explain how to combinatorially compute the characters of three interesting modules: \(\mathbf{k}J\) for a \(\mathcal{J}\)-class \(J\), \(\mathbf{k}M\) as \(\mathbf{k}M-\mathrm{mod}-\mathbf{k}M\) and \(\mathbf{k}\mathcal{L}(e)\) for an \(\mathcal{L}\)-class containing an idempotent \(e\) as a \(\mathbf{k}M-\mathrm{mod}-\mathbf{k}G_{e}\). For clarity, we will designate these characters as _bicharacters_.
We begin by discussing the computational hypotheses we make before presenting the algorithms themselves. In a third subsection, we detail how to compute the radical of \(\mathbf{k}\mathcal{L}(e)\) to apply our formula from Proposition 31 to compute the character table of a monoid. This latter case is not as tidy as the former ones, as we don't have a purely combinatorial way of computing the radical.
## 1 Computational hypotheses
In this section we discuss the computational hypotheses necessary for the algorithms in the next section. This section is based on the work [11] in which East, Egry-Nagy,
Mitchell and Peresse provide efficient algorithms for all basic computational questions on finite semigroups (which include monoids). Although we limit our scope to the case of transformation monoids, the methods described in [11] allow the algorithms described below to be applied to other interesting classes of monoids. Moreover, they can theoretically be applied to any finite monoid using a Cayley embedding in a full transformation monoid. In general, however, this is very inefficient and not feasible in practice.
Following the authors of [11], we make the following fundamental assumptions that we can compute:
* Assumption I : a product of two elements of the monoid.
* Assumption II : the image and kernel of a transformation (note that we do not explicitly use this assumption, but that it is necessary for the algorithms of [11] that we do use).
* Assumption III : Green pairs.
* Assumption IV : Given \(h\in{}_{M}\operatorname{Stab}(H)\) compute the corresponding element in \(\Gamma(H)\) (understood as a permutation group of the image common to all elements of \(H\) as seen in Example 7), and similarly on the right.
Not only do we directly need to be able to do these computations for our own algorithms, but they are also prerequisites for the algorithms from [11]. As such, we refer to the top of Section 5.2 of [11] on how to realize these computations in the case of transformation monoids.
We, again, refer to [11] for the specific algorithms meeting our computational prerequisites.
* Computing the Schutzenberger groups: [11, Algorithm 4]
* Checking membership of an element in a Green class: [11, Algorithms 7 & 8].
* Finding idempotents: [11, Algorithm 10]. This algorithm also allows for finding the regular \(\mathcal{J}\)-classes.
* Decomposing the monoid in \(\mathcal{R},\mathcal{L}\) and \(\mathcal{J}\)-classes : [11, Algorithm 11] and its discussion. Note that by storing this decomposition, we can, given an element of the monoid, find the classes that contain it.
* Obtaining a representative of a Green class: this is given by the data structure representing the Green classes described at the top of [11, Section 5.4].
Finally, we require the following points that, although they are not described in [11], are easily obtained from it.
* Computing a set \(C_{M}\) of character equivalency representatives: given Proposition 29, this can be done in four steps:
  1. compute a set \(\mathcal{E}\) of idempotent representatives of the regular \(\mathcal{J}\)-classes,
  2. compute \(\Gamma(\mathcal{H}(e))\) for each \(e\in\mathcal{E}\),
  3. compute a set \(C_{e}\) of representatives of the conjugacy classes of \(\Gamma(\mathcal{H}(e))\) for each \(e\in\mathcal{E}\), using for instance the procedure described in [12],
  4. for each \(e\in\mathcal{E}\) and \(c\in C_{e}\) compute the corresponding element of \(\mathcal{H}(e)\) as in Example 10.
* Computing \(\tau_{a}\) as in Proposition 13 : given \(g\in\Gamma(H)\), \(\tau_{a}(g)\) is simply, seen as an element of \(\mathfrak{S}(\ker a)\) : \[a^{-1}\{i\}\mapsto(g\cdot a)^{-1}\{g\cdot a(i)\},\] which can be computed in \(O(n)\). Note that this is a special case of the application described in [11, Proposition 3.11 (a)]
* Testing whether two elements \(g,g^{\prime}\) in \(\Gamma^{\prime}(\mathcal{H}(a))\) are conjugate: \(\Gamma^{\prime}(H)\) is represented as a subgroup of \(\mathfrak{S}(\ker a)\) and known procedures, such as the one described in [13], can be used.
* Computing the cardinality of a conjugacy class of a Schutzenberger group: for instance, the computer algebra system GAP uses the method described in [12].
## 2 Combinatorial bicharacter computing: 3 applications.
We are now ready to present the algorithm for fixed-point counting, keeping in mind that we want first to use the formula from Section II.2 to compute the character table of the monoid and further to compute the Cartan matrix of the monoid. In the cases we are interested in, we use the formalism of character computing, since, as stated in the Lemma below, computing the characters of so called _combinatorial modules_ is actually counting fixed points.
**Lemma 34**.: _Let \(M,M^{\prime}\) be two finite monoids and \((V,B)\) a finite dimensional \(\mathbf{k}M-\operatorname{mod}-\mathbf{k}M^{\prime}\) equipped with a basis \(B\). If the actions of \(M,M^{\prime}\) on \((V,B)\) are combinatorial, meaning that for any \((m,b,m^{\prime})\in M\times B\times M^{\prime}\), \(mbm^{\prime}\in B\cup\{0\}\), then:_

\[\chi_{\mathbf{k}M\otimes\mathbf{k}M^{\prime\text{op}}}^{V}(m,m^{\prime})=|\{b\in B\,|\,mbm^{\prime}=b\}|.\]
Proof.: In the basis \(B\), the matrix of the linear map \(x\mapsto mxm^{\prime}\) is a \(\{0,1\}\)-matrix whose \(b\)-th column is either zero (when \(mbm^{\prime}=0\)) or contains a single 1, that 1 being on the \(b\)-th row exactly when \(mbm^{\prime}=b\). Thus, the trace counts the number of fixed points.
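In code, the lemma turns character evaluation into a plain count. The following small helper (ours, plain Python) assumes the two actions are given as functions that return either a basis element or `None`, the latter standing for the zero of the module:

```python
def bicharacter_value(basis, act_left, act_right, m, mp):
    """Character of a combinatorial kM-mod-kM' at (m, mp): the number of
    basis elements b with m . b . mp = b.  act_left(m, b) and act_right(b, mp)
    must return a basis element, or None for the zero of the module."""
    count = 0
    for b in basis:
        mb = act_left(m, b)
        if mb is None:
            continue
        if act_right(mb, mp) == b:
            count += 1
    return count
```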
Note that we have already defined a structure of combinatorial \(\mathbf{k}M-\operatorname{mod}-\mathbf{k}G_{e}\) on \((\mathbf{k}\mathcal{L}(e),\mathcal{L}(e))\) for any idempotent \(e\). In the same way, \(\mathbf{k}J\) for a \(\mathcal{J}\)-class \(J\), can be equipped with a structure of \(\mathbf{k}M-\operatorname{mod}-\mathbf{k}M\) by setting for every \((m,j)\in M\times J\):
\[m\cdot j=\begin{cases}mj\text{ if }mj\in J\\ 0\text{ otherwise}\end{cases}\text{ and }j\cdot m=\begin{cases}jm\text{ if }jm\in J \\ 0\text{ otherwise}\end{cases}.\]
As before, this is well defined: firstly because the actions on the left and on the right commute (because the monoid's law is associative) and secondly because either \(m\leq_{\mathcal{L}}m^{\prime}\) or \(m\leq_{\mathcal{R}}m^{\prime}\) implies \(m\leq_{\mathcal{J}}m^{\prime}\), so, as in Section II.1, if \(ml\) or \(lm\) has "fallen to 0", it cannot "climb back up" to \(J\).
This structure makes \((\mathbf{k}J,J)\) into a combinatorial module and we may apply our fixed points counting methods to compute its character.
**Algorithm 35** (Computing the bicharacter of a \(\mathcal{J}\)-class).: Keeping the assumptions and notations of the previous Paragraph 1, we get from Corollary 16 an algorithm to compute the bicharacter of \(\mathbf{k}J\) as a \(\mathbf{k}M-\operatorname{mod}-\mathbf{k}M\):
* Input : A \(\mathcal{J}\)-class \(J\), a set of representatives of the character equivalency classes \(C_{M}\).
* Output : A matrix \((|\{m\in J\,|\,hmk=m\}|)_{(h,k)\in C_{M}^{2}}\)
1. Preparations:
  * (a) Choose \(a\in J\) and define \(H=\mathcal{H}(a)\).
  * (b) Compute Green pairs \((\lambda_{R},\lambda_{R}^{\prime})\) (respectively \((\rho_{L},\rho_{L}^{\prime})\)) for \((\mathcal{R}(a),R)\) (resp. \((\mathcal{L}(a),L)\)) for every \(\mathcal{R}\)-class \(R\subset J\) (resp. \(\mathcal{L}\)-class \(L\subset J\)).
  * (c) Compute the set \(C\) of conjugacy classes of \(\Gamma^{\prime}(H)\).
2. For each character equivalency representative \(h\in C_{M}\), initialize \(r_{J}(h)\) and \(l_{J}(h)\) to both be \((0)_{\bar{g}\in C}\).
  * (a) For each \(\mathcal{R}\)-class \(R\subset J\), test if \(h\lambda_{R}a\in R\). If so, denoting by \(\bar{g}\) the conjugacy class of \(\tau_{a}((\lambda_{R}^{\prime}h\lambda_{R})\times_{H})\) in \(\Gamma^{\prime}(H)\), increment \(r_{J}(h)\) by \(|C_{\Gamma^{\prime}(H)}(g)|\) at position \(\bar{g}\).
  * (b) For each \(\mathcal{L}\)-class \(L\subset J\), test if \(a\rho_{L}h\in L\). If so, denoting by \(\bar{g}\) the conjugacy class of \(\times_{H}(\rho_{L}^{\prime}h\rho_{L})\) in \(\Gamma^{\prime}(H)\), increment \(l_{J}(h)\) by \(1\) at position \(\bar{g}\).
3. Compute the matrix \(\chi=(r_{J}(h)\cdot l_{J}(k))_{(h,k)\in C_{M}^{2}}\) using the previously computed vectors and return \(\chi\).
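As a brute-force cross-check of the quantity Algorithm 35 computes (this sketch is ours and is not the algorithm itself, which precisely avoids enumerating \(J\times C_{M}^{2}\)), the following Python snippet evaluates \(|\{x\in J\mid hxk=x\}|\) for every pair \((h,k)\) of elements of the full transformation monoid \(T_{2}\), composing maps as \((mn)(x)=m(n(x))\):

```python
from itertools import product

n = 2
maps = list(product(range(n), repeat=n))      # all of T_2, maps as tuples

def mul(f, g):                                # (f g)(x) = f(g(x))
    return tuple(f[g[x]] for x in range(n))

def rank(f):
    return len(set(f))

# In T_n the J-classes are determined by the rank; take the rank-1 class here.
J = [f for f in maps if rank(f) == 1]
J_set = set(J)

def bichar_J(h, k):
    """|{x in J : h.x.k = x}|, where products falling outside J count as 0."""
    count = 0
    for x in J:
        y = mul(mul(h, x), k)
        if y in J_set and y == x:
            count += 1
    return count

print(bichar_J((0, 0), (0, 1)))               # constant map and identity -> 1
```

A real implementation would of course evaluate this only on generalized-conjugacy-class representatives (Proposition 29) rather than on all pairs.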
**Example 36**.: An _aperiodic_ monoid is a monoid where all \(\mathcal{H}\)-classes are singletons. Let us apply the algorithm we just described in the case of a \(\mathcal{J}\)-class \(J\) with trivial \(\mathcal{H}\)-classes. Several simplifications occur: first, we don't need to check for the conjugacy class, as there is only one. Secondly, the conjugacy class has cardinality one. Consider the vectors \(r_{J}=(|S_{\mathcal{R}}(h)|)_{h\in C_{M}}\) and \(l_{J}=(|S_{\mathcal{L}}(h)|)_{h\in C_{M}}\) with \(S_{\mathcal{L}}(h)\) and \(S_{\mathcal{R}}(h)\) defined as in Corollary 16. The bicharacter is simply the matrix product of \(r_{J}^{T}\) with \(l_{J}\). The particular case of this algorithm for aperiodic monoids is described in [4, Section.1]
**Algorithm 37** (Computing the bicharacter of \(\mathbf{k}M\)).: If we consider \((\mathbf{k}M,M)\) as a combinatorial \(\mathbf{k}M-\mathrm{mod}-\mathbf{k}M\), we immediately have that:
\[\chi_{\mathbf{k}M\otimes\mathbf{k}M^{\mathit{top}}}^{\mathbf{k}M}=\sum_{J\in \mathcal{J}}\chi_{\mathbf{k}M\otimes\mathbf{k}M^{\mathit{top}}}^{\mathbf{k}J}\]
and we can therefore compute the bicharacter of the whole monoid \(M\): we first compute a set \(C_{M}\) of representatives of the character equivalency classes and we then iterate Algorithm 35 over all \(\mathcal{J}\)-classes and sum the results.
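Continuing the brute-force \(T_{2}\) sketch given after Algorithm 35 (it reuses the `maps` and `mul` defined there, and still ignores the reduction to class representatives), summing the per-\(\mathcal{J}\)-class counts recovers the number of fixed points in the whole monoid, since an element fixed by \(s\mapsto hsk\) never leaves its own \(\mathcal{J}\)-class:

```python
def regular_bichar(h, k):
    """|{s in T_n : h s k = s}|; equals the sum over J-classes of the
    per-class fixed-point counts."""
    return sum(1 for s in maps if mul(mul(h, s), k) == s)
```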
The final useful example is the case of counting fixed points in a single regular \(\mathcal{L}\)-class, for the purpose of computing the character table of the monoid.
**Algorithm 38** (Computing the bicharacter of an \(\mathcal{L}\)-class).: Let \(e\) be an idempotent and let \(L=\mathcal{L}(e)\). In this example \(\mathbf{k}L\) is still a combinatorial module, but it has the particularity, compared with the other two examples, that the monoids acting on the left and on the right are not the same. However, as the maximal subgroup at \(e\), \(G_{e}\), is a subsemigroup of \(M\), the same results apply at no extra cost.
We can simply adapt Algorithm 35. Since an element of \(C_{G_{e}}\) acts "as itself" on the right, we don't need to keep track of the action on the right with a vector \(r_{L}\) as we did previously.
1. Initialize \(\chi\) to \((0)_{(h,k)\in C_{M}\times C}\)
2. For each \(h\in C_{M}\), for each \(\mathcal{H}\)-class \(H\), test if \(h\lambda_{H}a\in H\). If so, denoting by \(k\) the conjugacy class of \(\lambda^{\prime}_{H}h\lambda_{H}\) in \(G_{e}\), increment \(\chi\) by \(|C_{G_{e}}(h)|\) at position \((h,k)\).
3. Return \(\chi\)
## 3 Computing \(N_{e}(\mathbf{k}L)\)
We are now almost in a position to use the formula of Proposition 31: the character tables of the groups are supposed to be given, as efficient group algorithms to compute them are available in the literature, and from Algorithm 38 we now know how to efficiently compute the bicharacter of \(\mathbf{k}\mathcal{L}(e)\) as a \(\mathbf{k}M\otimes\mathbf{k}G_{e}^{\mathit{op}}\)-module for some idempotent \(e\in M\). It remains to compute the bicharacter of \(N_{e}(\mathbf{k}\mathcal{L}(e))\) as a \(\mathbf{k}M\otimes\mathbf{k}G_{e}^{\mathit{op}}\)-module, which we discuss now.
Let \(L=\mathcal{L}(e)\). Recall that, by definition, \(N_{e}(\mathbf{k}L)=\{x\in\mathbf{k}L\,|\,eMx=0\}\). Taking \(L\) as a basis for \(\mathbf{k}L\), we can form a matrix with rows indexed by \(M\times L\) and columns indexed by \(L\), whose coefficient at row \((m,l^{\prime})\) and column \(l\) is \(1\) if \(eml=l^{\prime}\) and \(0\) otherwise. Computing the kernel of this matrix yields a basis for \(N_{e}(\mathbf{k}L)\), but this is extremely inefficient as the number of rows is many times the cardinality of the monoid.
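For concreteness, here is the naive computation just described, as a small SymPy sketch (ours). It enumerates all of \(M\), which is exactly what the remainder of this subsection avoids; we run it on the full transformation monoid \(T_{3}\) with a rank-2 idempotent, composing maps as \((mn)(x)=m(n(x))\), a convention under which the \(\mathcal{L}\)-class of \(e\) consists of the maps with the same kernel as \(e\):

```python
from itertools import product
from sympy import Matrix

n = 3
maps = list(product(range(n), repeat=n))      # all of T_3, maps as tuples

def mul(f, g):                                # (f g)(x) = f(g(x))
    return tuple(f[g[x]] for x in range(n))

def kernel(f):                                # kernel as a set partition
    return frozenset(frozenset(x for x in range(n) if f[x] == v)
                     for v in set(f))

e = (0, 1, 1)                                 # an idempotent of rank 2
assert mul(e, e) == e

L = [a for a in maps if kernel(a) == kernel(e)]   # the L-class of e
col = {a: j for j, a in enumerate(L)}

# One row per pair (m, l') in M x L:  sum_l  x_l * [e m l = l'] = 0.
rows = []
for m in maps:
    em = mul(e, m)
    for lp in L:
        row = [0] * len(L)
        for l in L:
            if mul(em, l) == lp:
                row[col[l]] = 1
        rows.append(row)

A = Matrix(rows)
null_basis = A.nullspace()                    # basis of N_e(kL)
print(len(L), len(null_basis))                # dim kL and dim N_e(kL)
```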
Notice first that for any \(m\in M\), \(em\leq_{\mathcal{R}}e\), so we can consider only the elements of \(M\) that are \(\mathcal{R}\)-smaller than \(e\). Conversely, recall that the \(\mathbf{k}M\)-module structure on \(\mathbf{k}L\) is defined by \(m\cdot l=ml\) if \(ml\in L\) and \(0\) otherwise, and that this latter case happens if \(ml<_{\mathcal{L}}l\). Since \(m\leq_{\mathcal{L}}l\) implies \(ml\leq_{\mathcal{L}}l\), the \((m,l)\)-th row of the matrix is then null and we may omit it. This shows that we need only consider the elements of \(M\) that are not \(\mathcal{L}\)-below \(e\). Together with the previous point, this means that the similarly defined matrix whose rows are indexed only by \(\mathcal{R}(e)\times L\) has the same kernel. This is good news, as we may now exploit the structure of the \(\mathcal{J}\)-class given by Green's Lemma to further reduce the dimension of this matrix. Indeed, another consequence of Green's Lemma is the so-called "Location Theorem" of Clifford and Miller. A proof can be found in [14, Theorem 1.11].
**Lemma 39** (Location Theorem).: _Let \(r,l\) be two elements in the same \(\mathcal{J}\)-class. Then:_

\[rl\in\begin{cases}\mathcal{R}(r)\cap\mathcal{L}(l)&\text{if }\mathcal{L}(r)\cap\mathcal{R}(l)\text{ contains an idempotent},\\ \{\gamma\in M\,|\,\gamma<_{\mathcal{J}}l,r\}&\text{otherwise.}\end{cases}\]
**Lemma 40**.: _Let \(e\in M\) be an idempotent, \(R=\mathcal{R}(e)\) its \(\mathcal{R}\)-class and \(R^{\prime}\) another \(\mathcal{R}\)-class of \(\mathcal{J}(e)\). Let \((\lambda,\lambda^{\prime})\) be a left Green pair for \((R,R^{\prime})\). Then \((\lambda e,e\lambda^{\prime})\) is a left Green pair for \((R,R^{\prime})\)._
_Similarly, if \(L=\mathcal{L}(e)\), \(L^{\prime}\) is an \(\mathcal{L}\)-class of \(\mathcal{J}(e)\) and \((\rho,\rho^{\prime})\) is a right Green pair for \((L,L^{\prime})\), then \((e\rho,\rho^{\prime}e)\) is a right Green pair for \((L,L^{\prime})\)._
Proof.: Let \(g\) be any element of \(\mathcal{H}(e)\) and \(g^{\prime}=\lambda g\). Since \(e\) is idempotent, \(\mathcal{H}(e)\) is a group with identity \(e\), so we have \(e\lambda^{\prime}\lambda eg=e\lambda^{\prime}\lambda g=eg=g\) and \(\lambda ee\lambda^{\prime}g^{\prime}=\lambda eg=\lambda g=g^{\prime}\), which, from Green's Lemma, makes \((\lambda e,e\lambda^{\prime})\) a left Green pair for \((R,R^{\prime})\). A similar argument applies for the second part of the lemma.
**Remark**.: This lemma means that for a regular \(\mathcal{J}\)-class \(J\) and for any two \(\mathcal{L}\)-classes (or \(\mathcal{R}\)-classes) it contains, we may choose a corresponding Green pair among the elements of those two classes.
Figure 3: Location Theorem

**Proposition 41**.: _Let \(e\in M\) be an idempotent and \(H=\mathcal{H}(e),L=\mathcal{L}(e),R=\mathcal{R}(e)\) and \(J=\mathcal{J}(e)\). For each \(\mathcal{R}\)-class \(R^{\prime}\subset J\), we choose a left Green pair \((l,l^{\prime})\in J^{2}\). We denote by \(\mathfrak{L}\) the set of all \(l\) for the chosen left Green pairs. We define \(\mathfrak{R}\) similarly. Then \(N_{e}(\mathbf{k}L)\) is the set of solutions of:_
\[\forall r\in\mathfrak{R},\forall g\in H,\quad\sum_{l\in\mathfrak{L}}\mathds{1}_ {H}(rl)x_{l(rl)^{-1}g}=0\]
Proof.: Consider an element \(a\in R\). \(a\) can be written in a unique way as \(gr\), with \(g\in H\) and \(r\in\mathfrak{R}\) corresponding to \(\mathcal{L}(a)\). Similarly, an element \(b\in L\) has a unique decomposition as \(l\gamma\), with \(l\in\mathfrak{L}\) and \(\gamma\in H\). For an element \(x\in\mathbf{k}L\) we write:
\[x=\sum_{l\in\mathfrak{L},\gamma\in H}x_{l\gamma}l\gamma\]
its decomposition over the basis \(L\).
We want to find the equations that describe \(\ker(gr\times_{L})\) (where \(gr\times_{L}\) is the linear map on \(\mathbf{k}L\) obtained by extending the monoid's multiplication by linearity). From the Location Theorem, we get that \(\operatorname{Im}\left(gr\times_{L}\right)\subset\mathbf{k}H\). For \(k\in H\), denote by \(f_{k,gr}\) the \(k\)-th coordinate function of \(gr\times_{L}\). Because \(gr\times_{L}\) acts combinatorially on \(\mathbf{k}L\), we have :
\[f_{k,gr}(x)=\sum_{l\in\mathfrak{L},\gamma\in H}\mathds{1}_{\{k\}}(grl\gamma)x _{l\gamma}\]
Note that \(x_{l\gamma}\) appears in the sum if and only if \(grl\gamma=k\). From the Location Theorem, and because we choose \(l\in L,r\in R\), we have \(grl\gamma=k\) if and only if \(rl\in H\) and \(\gamma=(rl)^{-1}g^{-1}k\) and thus the equation becomes:
\[f_{k,gr}(x)=\sum_{l\in\mathfrak{L}}\mathds{1}_{H}(rl)x_{l(rl)^{-1}g^{-1}k}.\]
For \(x\) to be in \(\ker(gr\times_{L})\), \(x\) must satisfy \(f_{k,gr}(x)=0\) simultaneously for all \(k\in H\). We now have a set of equations for \(\ker(gr\times_{L})\), and we can deduce that the set of equations
\[\forall r\in\mathfrak{R},\forall g,k\in H,\quad f_{k,gr}(x)=\sum_{l\in\mathfrak{L}}\mathds{1}_{H}(rl)x_{l(rl)^{-1}g^{-1}k}=0\]
describes \(N_{e}(\mathbf{k}L)\). However, the system is redundant: the equation \(f_{k,gr}(x)=0\) only depends on \(g^{-1}k\), so all pairs \((g,k)\) with the same value of \(g^{-1}k\) yield the same equation. Removing the duplicate equations gives the system announced in the proposition.
**Example 42** (\(N_{e}\) in the case of an aperiodic monoid).: As in Example 36, let us consider the case of a \(\mathcal{J}\)-class with trivial \(\mathcal{H}\)-classes. In this case, we have \(L=\mathfrak{L},R=\mathfrak{R}\) and \(H=\{1_{H}\}\), so the equations become:
\[\forall r\in R,\quad\sum_{l\in L}\mathds{1}_{H}(rl)x_{l}=0.\]
Again from the Location Theorem, we have that \(\mathds{1}_{H}(rl)=1\) if and only if there is an idempotent in \(\mathcal{L}(r)\cap\mathcal{R}(l)\). So if we form a matrix \(A\) with rows indexed by \(L\) and columns indexed by \(R\), with coefficient \(1\) at \((l,r)\) if \(\mathcal{L}(r)\cap\mathcal{R}(l)\) contains an idempotent and \(0\) otherwise, the above equations become:
\[(x_{l})_{l\in L}^{T}A=0,\]
that is, in the case of an \(\mathcal{H}\)-trivial \(\mathcal{J}\)-class, \(N_{e}(\mathbf{k}L)\) is the left kernel of the eggbox picture seen as a \(\{0,1\}\)-matrix.
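Concretely, this reduces the aperiodic case to linear algebra on a small \(\{0,1\}\)-matrix. A minimal SymPy sketch, with a purely hypothetical idempotent pattern (not the eggbox of any particular monoid):

```python
from sympy import Matrix

# Hypothetical eggbox data for an H-trivial J-class: rows indexed by the
# elements l of L, columns by the elements r of R, entry 1 when
# L(r) ∩ R(l) contains an idempotent.
A = Matrix([[1, 1, 0],
            [1, 1, 0],
            [0, 0, 1]])

# N_e(kL) is the left kernel of A, i.e. the right kernel of its transpose.
left_kernel = A.T.nullspace()
print(len(left_kernel))   # dimension of N_e(kL) for this pattern -> 1
```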
Note that, given this set of equations, we can compute the character \(\chi_{\mathbf{k}M\otimes\mathbf{k}G_{e}^{op}}^{N_{e}(\mathbf{k}\mathcal{L}(e))}\) needed in the formula of Proposition 31: we use classical linear algebra algorithms to find a basis of \(N_{e}(\mathbf{k}\mathcal{L}(e))\) and then compute the value of the character at any \((m,g)\in C_{M}\times C_{G_{e}}\) by iterating over the basis vectors, applying \((m,g)\) as a linear map and extracting the relevant coefficient in the image vector.
## IV Performances, computational complexity and benchmarks
In this section, we discuss performance in terms of complexity whenever we can, and in terms of benchmarks for timings and memory usage. In the next paragraph, we discuss the challenges in measuring performance and the subsequent choices made. Given these considerations, in the three paragraphs following, we discuss the computational performance of our three main objects of interest: the combinatorial bicharacter (_i.e._ fixed-point counting), the character table and finally the Cartan matrix.
## 1 Discussion and Challenges
Even though we concentrate on transformation monoids, monoids being as diverse as they are, a meaningful analysis of the complexities of the above algorithms is difficult to carry out in terms of elementary parameters such as the rank (_i.e._ the cardinality of the set acted upon), the number of generators or even the cardinality of the monoid. Using only these parameters, we would have to make so many simplifying assumptions as to lose all meaning in our analysis.
Indeed, for our algorithms, the relevant metrics tend to be related to the Green-class structure: the number of \(\mathcal{L}\)- and \(\mathcal{R}\)-classes in a \(\mathcal{J}\)-class, the cardinality of the \(\mathcal{H}\)-classes of a \(\mathcal{J}\)-class, the number of conjugacy classes in its Schutzenberger groups, the number of generalized conjugacy classes, etc. In some sense, these metrics vary a lot: two transformation monoids acting on the same number of points with the same number of generators can have vastly different Green structures. Given this hurdle, it seems that the most meaningful level of analysis in terms of complexity is the level of the \(\mathcal{J}\)-class, where these parameters are constant, rather than the level of the whole monoid, where we do not have any good monoid-wide estimate of how these useful metrics average out over all (regular) \(\mathcal{J}\)-classes. Thus, in the following subsections, we will comment in depth on the time complexity only for Algorithm 35 and for the solving of the equation system from Proposition 41.
Although a theoretical complexity of our algorithms is hard to provide precisely at the monoid-wide level, the real test of viability is to see if the algorithm effectively terminates in practice on typical examples. Thus, we provide benchmarks (for time and memory usage) for the computation of the three main objects of our discussion: the bicharacter, the character table and the Cartan matrix. We would like to know what the typical performance is on a "randomly chosen finite monoid". However, the question of picking a "generic" transformation monoid acting on a set number of points is a subject in itself, and outside the scope of this paper. Thus we chose to use two families of test cases: firstly the full transformation monoids and secondly random monoids generated by \(m\) elements picked uniformly at random in \(T_{n}\) and denoted by \(R(m,n)\). The full transformation monoids constitute an interesting test case in the sense that they are as big as possible. However, they are also highly structured and many computer algebra systems, including GAP, are smart enough to detect that the Schutzenberger groups are actually symmetric groups and thus use some non-general algorithms that could not be used on typical finite monoids. This may introduce bias in our measurements on the full transformation monoids (and indeed probably does, given Figure 4), which is why we separate the measurements made on them from those made on the \(R(m,n)\) monoids.
The measurements provided for these new algorithms (as well as the computation results presented hereafter) all come from an implementation using the computer algebra system GAP. All performance measures are realized on a laptop equipped with an Intel Core i7-10850H @ 2.7GHz (on one core) and 16 GB memory. The measurements on random monoids presented in this section are realized on monoids of the form \(R(m,n)\), with \((m,n)\in\{(4,3),(5,3),(5,4),(6,5),(7,6),(9,8)\}\) (with \((9,8)\) excluded from the character table and Cartan matrix benchmarks), with 10 randomly chosen test cases for each \((m,n)\) in this set. Every appearance of \(R(m,n)\) for a specific pair \((m,n)\) refers to the same monoid. Our specific implementation, as well as the test cases used and the raw data, are publicly available on our git repository6.
Footnote 6: github.com/ZoltanCoccyx/monoid-character-table
## 2 Fixed point counting
In the case of Algorithm 35, we can give some analysis of the time complexity in terms of the Green structure of the particular \(\mathcal{J}\)-class Algorithm 35 is applied to.
**Proposition 43**.: _Consider a \(\mathcal{J}\)-class \(J\) containing \(n_{L}\) \(\mathcal{L}\)-classes and \(n_{R}\) \(\mathcal{R}\)-classes, with an \(\mathcal{H}\)-class \(H\) having \(n_{C}\) conjugacy classes in \(\Gamma^{\prime}(H)\), and let \(C_{M}\) be a set of representatives of the character equivalence classes, as before. Then, Algorithm 35 does:_
* \(n_{C}\) _cardinality computations of conjugacy classes of_ \(\Gamma^{\prime}(H)\) _(assuming memoization to be able to do a lookup in step 2-a of Algorithm_ 35_, instead of computing it on the fly),_
* \(O(|C_{M}|(n_{L}+n_{R}))\) _monoid multiplications, Green class membership tests and conjugacy class of_ \(\Gamma^{\prime}(H)\) _membership tests,_
* \(O(|C_{M}|n_{R})\) _computations of_ \(\tau_{a}\)_,_
* \(O(|C_{M}|n_{L})\) _conjugacy class of_ \(\Gamma^{\prime}(H)\) _cardinality lookups,_
* \(n_{C}|C_{M}|^{2}\) _integer multiplications._
Proof.: This simply results from an inspection of Algorithm 35.
As explained before, we cannot meaningfully extend this analysis to Algorithm 37. We can get a similar result by inspection of Algorithm 38: it is essentially the same algorithm, except that it is applied to only one \(\mathcal{L}\)-class and that we don't need the final integer multiplications at the end.
Note that we do not provide a cumulative formula for the complexity of Algorithm 35 since, for instance, the complexity of a conjugacy class membership test heavily depends on the algorithm used by the computer algebra system, which can itself vary depending on the characteristics of the Schutzenberger groups. This makes the task of providing a meaningful evaluation of the global complexity of the algorithm quite difficult, mainly because expressing the complexity of those "elementary" operations of monoid multiplications, membership testing, etc., in terms of the same parameters is not straightforward and in some cases even unknown, as noted in [13]. However, we can at least compare this to the naive algorithm of testing if every element of \(J\) is a fixed point, which demands \(O(n_{L}n_{R}|H|^{2}|C_{M}|^{2})\) monoid multiplications: as long as the complexity of the more complex operations of Green class or conjugacy class membership testing remains limited in terms of monoid multiplications, our complexity is better. For instance, in the case of the monoid \(T_{n}\), all the required operations can be done in \(O(n)\), making Algorithm 35 (and, in turn, Algorithm 37) more efficient than the naive algorithm, as can be seen in Table 1, with a sublinear (with respect to cardinality) measured complexity (Figure 4).

Figure 4: Computation time and memory usage of Algorithm 37.
## 3 Character table
As shown in Table 2, the computation of the character table takes much longer. This is due to the fact that, to compute the radical of \(\mathbf{k}\mathcal{L}(e)\) for an idempotent \(e\), we must solve a linear system of size \(|\mathcal{R}(e)|\times|\mathcal{L}(e)|\), which necessitates \(O(|\mathcal{R}(e)|^{2}|\mathcal{L}(e)|)\) arithmetic operations. In the case of the full transformation semigroup \(T_{n}\), if \(e\) has \(k\) elements in its image, \(|\mathcal{L}(e)|=k!\times\binom{n}{k}\), while \(|\mathcal{R}(e)|=k!\times S(n,k)\) where \(S(n,k)\) is a Stirling number of the second kind, which gives \(|\mathcal{R}(e)|\sim k^{n}\). The size of that linear system rapidly becomes intractable. Moreover, once we have a basis of \(N_{e}(\mathbf{k}L)\) of cardinality \(d\), we still have to compute the \(C_{M}^{2}\) character values in \(O(d^{2})\) operations each. Experiments show that the computation time of the character tables of the maximal subgroups is small in comparison with all radical related computations. As can be seen on Figure 5, time and memory usage are in lockstep (at least for big enough monoids) and the limiting factor is memory (the test on random monoids fails for the random monoids of the form \(R(9,8)\) by exceeding the 16GB memory capacity of our testing machine).
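To fix ideas (these figures are our own back-of-the-envelope computation from the formulas above), for an idempotent of rank \(k=4\) in \(T_{7}\) we get \(|\mathcal{L}(e)|=4!\binom{7}{4}=840\) and \(|\mathcal{R}(e)|=4!\,S(7,4)=24\cdot 350=8400\), so the linear system attached to this single regular \(\mathcal{J}\)-class already has \(8400\times 840\approx 7\cdot 10^{6}\) coefficients.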
Finally, for the computation of the Cartan Matrix, the previous timings show that the vast majority of the computation time is spent computing the character table of the monoid. As the computation of the combinatorial bicharacter is more than a hundred times faster than the computation of the character table, this is a clear invitation to improve in particular the computation of the character of the radical of the \(\mathcal{L}\)-classes.
Table 1: Computation time of the regular representation bicharacter.

| Monoid | Cardinality | Coefficients | Naive | Ours |
| --- | --- | --- | --- | --- |
| \(T_{3}\) | 27 | \(6^{2}\) | 29 ms | 18 ms |
| \(T_{4}\) | 256 | \(11^{2}\) | 92 ms | 63 ms |
| \(T_{5}\) | 3125 | \(18^{2}\) | 1.44 s | 113 ms |
| \(T_{6}\) | 46656 | \(29^{2}\) | 53.0 s | 0.34 s |
| \(T_{7}\) | 823543 | \(44^{2}\) | >30 min | 1.59 s |
| \(T_{8}\) | 16777216 | \(66^{2}\) | \(\cdots\) | 8.86 s |
| \(T_{9}\) | 387420489 | \(96^{2}\) | \(\cdots\) | 56.7 s |
In Table 3, we show some timings for that computation, and a comparison with Sage's generalist algorithm (based on the Peirce decomposition of the monoid algebra) for the computation of the Cartan matrix: despite its limitations, our specialized algorithm allows for the handling of larger objects. Indeed, our algorithm has near linear performance with respect to cardinality, while Sage's has roughly cubic complexity.

Again, time and memory usage are in lockstep, and although memory fails before time for \(T_{8}\) and onward and for the monoids of the form \(R(9,8)\), using the regression we can predict a computation time of around 6 hours on our testing machine if it were not memory limited.
Table 2: Computation time of the character table.

| Monoid | Cardinality | Coefficients | Ours |
| --- | --- | --- | --- |
| \(T_{3}\) | 27 | \(6^{2}\) | 27 ms |
| \(T_{4}\) | 256 | \(11^{2}\) | 151 ms |
| \(T_{5}\) | 3125 | \(18^{2}\) | 1.74 s |
| \(T_{6}\) | 46656 | \(29^{2}\) | 29.8 s |
| \(T_{7}\) | 823543 | \(44^{2}\) | 11.0 min |
Figure 5: Computation time and memory usage for computing the character table using Propositions 31 and 41. The blue points correspond to the random monoids, the yellow ones to \(T_{n}\) for \(n\in[3,7]\). As before, the yellow points are excluded from the linear regression, although in this case \(T_{n}\) behaves more or less like the randomly chosen monoids. The measured complexity on random monoids is slightly more than linear in time and memory.
An example of a Cartan matrix obtained using our Algorithms and Thiery's formula is pictured in Figure 7.
## Conclusion and perspectives
The methods presented in this paper provide a new tool for the computational exploration of the representation theory of finite monoids. We give a method to compute the character table of a finite monoid in the general case as well as a method for the
Table 3: Computation time of the Cartan matrix.

| Monoid | Coefficients | Sage's | Ours |
| --- | --- | --- | --- |
| \(T_{3}\) | \(6^{2}\) | 575 ms | 56 ms |
| \(T_{4}\) | \(11^{2}\) | 5.23 min | 173 ms |
| \(T_{5}\) | \(18^{2}\) | >2h | 1.82 s |
| \(T_{6}\) | \(29^{2}\) | \(\cdots\) | 34.6 s |
| \(T_{7}\) | \(44^{2}\) | \(\cdots\) | 11.5 min |
Figure 6: Computation time and memory usage for computing the Cartan matrix using Proposition 33. The blue points correspond to the random monoids, the yellow ones to \(T_{n}\) for \(n\in\llbracket 3,7\rrbracket\). As before, the yellow points are excluded from the linear regression, although in this case \(T_{n}\) behaves more or less like the randomly chosen monoids. The measured complexity on random monoids is slightly more than linear in time and memory.
computation of the Cartan matrix. In the latter case, although general algorithms for any finite dimensional algebra already exist, by specializing to monoid algebras we achieve vastly shorter computation times, thus making the question tractable for bigger monoids. Although we have presented the methods in detail only for transformation monoids, the underlying formulas are true in general for finite monoids, and the approach remains computationally applicable wherever the hypotheses of Section III are verified. We also invite the reader to consult and test our implementation, available on our github repository7. As this paper is inspired by the combinatorial research on monoid representation theory, which has seen renewed activity in recent years, we hope that providing this effective tool will allow for the observation of new phenomena.
Footnote 7: github.com/ZoltanCoccyx/monoid-character-table
Figure 7: Cartan Matrix of \(T_{7}\)

This work has two natural continuations: improving and expanding. For the improvement part, we have noted that by far the most inefficient part of our algorithm is the computation of the radical of the \(\mathcal{L}\)-classes, which happens to be the only point where linear algebra is necessary and combinatorics are seemingly not enough. We can ask whether this step could be replaced by a combinatorial computation. Some experiments show that, even in relatively small and very regular cases (\(T_{5}\) for instance), the basis we find for the radical by solving the equation system described in Proposition 41 does not have an easily understandable structure, once the common denominator of the coefficients is eliminated. It therefore seems unlikely to us that a general method for computing the radical of an entirely combinatorial nature exists, although we remain optimistic that in very regular cases (again, \(T_{n}\)), the issue lies with us not finding the method rather than it not existing. More modestly, in a general context, we could try to further exploit the structure of the equations that define the radical to reduce the size of the system, which is a major bottleneck.
Another improvement, although perhaps less impactful, could be made by exploiting redundancy: it can happen that two \(\mathcal{L}\)-classes \(L_{1},L_{2}\) of a submonoid \(M\) of \(T_{n}\) are contained in the same \(\mathcal{L}\)-class \(L\) of \(T_{n}\). Thus, in step 2-b of Algorithm 35 (for instance), instead of visiting each \(\mathcal{L}\)-class of \(M\), we could visit each \(\mathcal{L}\)-class of \(T_{n}\) that contains an \(\mathcal{L}\)-class of \(M\) and count them with some multiplicity. Although this probably would not lead to great improvements in efficiency, it has the advantage of making, in some sense, \(T_{n}\) the worst case scenario, allowing for a finer complexity analysis.
As for extending this work, the natural path seems to be to adapt these methods to fields of positive characteristic. At this point it appears to us that this question is tractable, as the theory remains essentially the same, although it is somewhat difficult to implement in practice. The main hurdle arises, again, when computing the radical of an \(\mathcal{L}\)-class: an equivalent of Proposition 23 would have to take into account the role of the radical of the maximal subgroup algebra, which can be non-trivial in positive characteristic. This would translate into needing to effectively compute this radical. Although algorithms are available (for instance in GAP), this is a theoretically difficult and computationally expensive problem, considerably reducing the maximum size of a tractable problem. While modular representation theory is known to be a difficult subject for groups, it seems that, again, the situation is not much more complicated for monoids than it is for groups, as it is standard practice to reduce monoid theoretic questions to group theoretic ones. Treating modular group representation theory as a black box coming with already existing algorithms (much as we did here for characteristic zero group representation theory, as a matter of fact), we hope to be able to provide a modular version of our algorithms along with an implementation in the near future.
## Acknowledgements
The research work devoted to this project was funded by a PhD grant from the French _Ministere de la recherche et de l'enseignement superieur_, in the form of a _Contrat Doctoral Specifique Normalien_ attributed for a PhD in the STIC (_Sciences et Technologies de l'Information et de la Communication_) doctoral school of Paris-Saclay University, in the LISN (_Laboratoire Interdisciplinaire des Sciences du Numerique_) under the supervision of Pr. Nicolas Thiery. |
2302.04012 | CodeLMSec Benchmark: Systematically Evaluating and Finding Security
Vulnerabilities in Black-Box Code Language Models | Large language models (LLMs) for automatic code generation have achieved
breakthroughs in several programming tasks. Their advances in competition-level
programming problems have made them an essential pillar of AI-assisted pair
programming, and tools such as GitHub Copilot have emerged as part of the daily
programming workflow used by millions of developers. The training data for
these models is usually collected from the Internet (e.g., from open-source
repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these
vulnerabilities and propagate them during the code generation procedure. While
these models have been extensively assessed for their ability to produce
functionally correct programs, there remains a lack of comprehensive
investigations and benchmarks addressing the security aspects of these models.
In this work, we propose a method to systematically study the security issues
of code language models to assess their susceptibility to generating vulnerable
code. To this end, we introduce the first approach to automatically find
generated code that contains vulnerabilities in black-box code generation
models. To achieve this, we present an approach to approximate inversion of the
black-box code generation models based on few-shot prompting. We evaluate the
effectiveness of our approach by examining code language models in generating
high-risk security weaknesses. Furthermore, we establish a collection of
diverse non-secure prompts for various vulnerability scenarios using our
method. This dataset forms a benchmark for evaluating and comparing the
security weaknesses in code language models. | Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schönherr, Mario Fritz | 2023-02-08T11:54:07Z | http://arxiv.org/abs/2302.04012v2 | # Systematically Finding Security Vulnerabilities in Black-Box Code Generation Models
###### Abstract.
Recently, large language models for code generation have achieved breakthroughs in several programming language tasks. Their advances in competition-level programming problems have made them an emerging pillar in AI-assisted pair programming. Tools such as _GitHub Copilot_ are already part of the daily programming workflow and are used by more than a million developers (Hossein et al., 2019). The training data for these models is usually collected from open-source repositories (e.g., GitHub) that contain software faults and security vulnerabilities. This unanimtized training data can lead language models to learn these vulnerabilities and propagate them in the code generation procedure. Given the wide use of these models in the daily workflow of developers, it is crucial to study the security aspects of these models systematically.
In this work, we propose the first approach to automatically finding security vulnerabilities in black-box code generation models. To achieve this, we propose a novel black-box inversion approach based on few-shot prompting. We evaluate the effectiveness of our approach by examining code generation models in the generation of high-risk security weaknesses. We show that our approach automatically and systematically finds 1000s of security vulnerabilities in various code generation models, including the commercial black-box model _GitHub Copilot_.
Language Models, Machine Learning Security, Software Security
By using few-shot prompting, we approximate the inverse of the code generation model in the black-box setting. We use the approximated inversion of the code generation model to generate prompts that potentially lead the models to reveal their security vulnerability issues. Figure 1 provides an overview of our black-box model inversion approach. Using our method, we have found 1000s of vulnerabilities in state-of-the-art code generation models. These vulnerabilities cover twelve different types from the _Common Weakness Enumeration_ (CWE).
In summary, we make the following contributions in this paper:
1. We propose an approach for automatically finding security vulnerabilities in code generation models. We achieve this by proposing a novel black-box model inversion approach via few-shot prompting.
2. We discover 1000s of vulnerabilities in state-of-the-art code generation models--including the widely used GitHub Copilot.
3. At the time of publication, we will publish a set of promising security prompts to investigate the security vulnerabilities of the models and compare them in various security scenarios. We generate these prompts automatically by applying our approach to finding security issues of different state-of-the-art and commercial models.
4. At the time of publication, we will release our approach as an open-source tool that can be used to evaluate the security issues of the black-box code generation models. This tool can be easily extended to newly discovered potential security vulnerabilities.
We provide the generated prompts and codes with the security analysis of the generated codes as additional material.
## 2. Related Work
In the following, we briefly introduce existing work on large language models and discuss how this work relates to our approach.
### Large Language Models and Prompting
Large language models have advanced the natural language processing field in various tasks, including question answering, translation, and reading comprehension (Weng et al., 2017; Wang et al., 2018). These milestones were achieved by scaling the model size from hundreds of millions (Han et al., 2017) to hundreds of billions (Weng et al., 2017), self-supervised objective functions, and huge corpora of text data. Many of these models are trained by large companies and then released as pretrained models. Brown et al. (2017) show that these models can be used to tackle a variety of tasks by providing only a few examples as input - without any changes in the parameters of the models. The end user can use a template as a few-shot prompt to guide the models to generate the desired output for a specific task. In this work, we show how a few-shot prompting approach can be used to invert black-box code generation models.
### Large Language Models of Source Codes
There is a growing interest in using large language models for source code understanding and generation tasks (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). Feng et al. (2018) and Guo et al. (2018) propose encoder-only models with a variant of objective functions. These models (Han et al., 2017; Wang et al., 2018) primarily focus on code classification, code retrieval, and program repair. Ahmad et al. (Ahmad et al., 2018), Wang et al. (2018) employ encoder-decoder architecture to tackle code-to-code, and code-to-text generation tasks, including program translation, program repair, and code summarization. Recently, decoder-only models have shown promising results in generating programs in left-to-right fashion (Han et al., 2017; Wang et al., 2018). These models can be applied to zero-shot and few-shot program generation tasks (Han et al., 2017; Wang et al., 2018; Wang et al., 2018), including code completion, code infilling, and text-to-code tasks. Large language models of code have mainly been evaluated based on the functional correctness of the generated codes without considering potential security vulnerability issues (see subsection 2.3 for a discussion). In this work, we propose an approach to automatically find security vulnerabilities of these models by employing a novel black-box model inversion method via few-shot prompting.
### Security Vulnerability Issues of Code Generation Models
Large language code generation models have been pre-trained using vast corpora of open-source code data (Han et al., 2017; Wang et al., 2018). These open-source codes can contain a variety of different security vulnerability issues, including memory safety violations (Han et al., 2017), deprecated API and algorithms (e.g., MD5 hash algorithm (Shi et al., 2017; Wang et al., 2018)), or SQL injection and cross-site scripting (Shi et al., 2017; Wang et al., 2018) vulnerabilities. Large language models can learn these security patterns and potentially generate vulnerable codes given the users' inputs. Recently, Pearce et al. (Pearce et al., 2018) and Siddiq and Santos (Siddiq and Santos, 2018) show that the generated codes using code generation models can contain various security issues.
Pearce et al. (Pearce et al., 2018) use a set of manually-designed scenarios to investigate potential security vulnerability issues of GitHub Copilot (Weng et al., 2018). These scenarios are curated by using a limited set of vulnerable codes. Each scenario contains the first few lines of the potentially vulnerable codes, and the models are queried to complete the scenarios. These scenarios were designed based on MITRE's Common Weakness Enumeration (CWE) (Brown et al., 2016). Pearce et al. (Pearce et al., 2018) evaluate the generated codes' vulnerabilities by employing the GitHub CodeQL static analysis tool. Previous works (Pearce et al., 2018; Wang et al., 2018; Wang et al., 2018) investigated the security issues of the code generation models using a set of limited manually-designed scenarios. In contrast, in our work, we propose a systematic approach to finding security vulnerabilities by automatically generating various scenarios at scale.
### Model Inversion and Training Data Extraction
Deep model inversion has been applied to model explanation (Wang et al., 2018), model distillation (Wang et al., 2018), and more commonly to reconstruct private training data (Han et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). The general goal in model inversion is to reconstruct a representative view of the input data based on the models' outputs (Wang et al., 2018). Recently, Carlini et al. (Carlini et al., 2018) showed that it is possible to extract memorized data from large language models. These data include personal information such as e-mail addresses, URLs, and phone numbers. In this work, we use few-shot prompting to invert the black-box code generation models. Using the inverse of the code generation models, we automatically find the scenarios (_prompts_) that lead the models to generate vulnerable codes.
## 3. Technical Background
Detecting software bugs before deployment can prevent potential harm and unforeseeable costs. However, automatically finding security critical bugs in code is a challenging task in practice. This also includes model-generated code, especially given the black-box nature and complexity of such models. In the following, we elaborate on recent analysis methods and classification schemes for code vulnerabilities and provide an overview of the evaluated code generation models.
### Evaluating Security Issues
Various security testing methods can be used to effectively find software vulnerabilities to avoid bugs during the run-time of a deployed system (Han et al., 2017; Li et al., 2018; Li et al., 2019). To achieve this goal, these methods attempt to detect different kinds of programming errors, poor coding style, deprecated functionalities, or potential memory safety violations (e.g., unauthorized access to unsafe memory that can be exploited after deployment or obsolete cryptographic schemes that are insecure (Li et al., 2019; Li et al., 2019; Li et al., 2019)). Broadly speaking, current methods for security evaluation of software can be divided into two categories: static (Han et al., 2017; Li et al., 2019) and dynamic analysis (Li et al., 2019; Li et al., 2019). While static analysis evaluates the code of a given program to find potential vulnerabilities, the latter approach executes the codes. For example, fuzz testing (_fuzzing_) generates random program executions to trigger bugs in the program.
For the purpose of our work, we choose to use static analysis to evaluate the generated code as it enables us to classify the kind of found vulnerability. Specifically, we use CodeQL, an open-source static analysis engine released by GitHub (Li et al., 2019). For analyzing the language model generated code, we query the code via CodeQL to find security vulnerabilities in the code. We use CodeQL's CWE classification output to categorize the type of vulnerability that has been found during our evaluation and to define a set of vulnerabilities that we further investigate throughout this work.
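For concreteness, the following sketch shows how such an analysis step could be driven from a script using the CodeQL command-line interface. The directory names and the choice of the `python-security-and-quality.qls` query suite are illustrative assumptions on our part, not settings reported in this work.

```python
# Sketch: check a directory of (generated) Python files with the CodeQL CLI.
# Assumes the `codeql` binary and the standard Python query pack are installed;
# the paths and the query-suite name below are illustrative assumptions.
import subprocess

def analyze_with_codeql(source_dir: str, db_dir: str, sarif_out: str) -> None:
    # Build a CodeQL database from the source tree of generated code.
    subprocess.run(
        ["codeql", "database", "create", db_dir,
         "--language=python", f"--source-root={source_dir}", "--overwrite"],
        check=True,
    )
    # Run the security-related queries and write the findings as a SARIF report,
    # which contains the CWE classification for each detected issue.
    subprocess.run(
        ["codeql", "database", "analyze", db_dir,
         "python-security-and-quality.qls",      # assumed standard query suite
         "--format=sarif-latest", f"--output={sarif_out}"],
        check=True,
    )

if __name__ == "__main__":
    analyze_with_codeql("generated_code/", "codeql_db/", "results.sarif")
```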
### Code Security Types
The _Common Weakness Enumeration_ (CWE) is a list of software and hardware weaknesses, provided by MITRE (Li et al., 2018), that defines specific vulnerabilities in code. In total, more than 400 different CWE types are defined and categorized into different classes and variants of vulnerabilities, e.g., memory corruption errors. Figure 2 shows an example of CWE-502 (Deserialization of Untrusted Data) in Python. In this example from (Li et al., 2018), the Pickle library is used to deserialize data: the code parses data and tries to authenticate a user based on validating a token, but without verifying the incoming data. A potential attacker can construct a pickle which spawns new processes, and since Pickle allows objects to define the process for how they should be unpickled, the attacker can direct the unpickling process to call the _subprocess_ module and execute /bin/sh.
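To make this weakness class concrete, the minimal sketch below (our own toy illustration, not the code from Figure 2) contrasts the unsafe deserialization pattern with a variant that only accepts plain JSON.

```python
# Minimal illustration of CWE-502 (deserialization of untrusted data).
# This is a toy example of the pattern, not the code shown in Figure 2.
import json
import pickle

def load_token_unsafe(raw: bytes):
    # Vulnerable: pickle lets the serialized data choose how it is reconstructed,
    # so attacker-controlled input can trigger arbitrary code execution.
    return pickle.loads(raw)

def load_token_safer(raw: bytes):
    # Safer: JSON can only yield plain data types (dicts, lists, strings, numbers),
    # so malformed or malicious input raises an error instead of running code.
    return json.loads(raw.decode("utf-8"))
```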
For our work, we focus on the analysis of twelve representative CWEs that can be detected via static analysis tools, to show that we can systematically generate vulnerable codes and their input prompts. Other CWEs that require considering the context during the execution of the code can only be detected automatically via fuzzing; this would require much more time to evaluate and is out of scope for this work. The twelve analyzed CWEs, including a brief description, are listed in Table 1. From the twelve listed CWEs, eight are from the top 25 list of the most important vulnerabilities. The descriptions are defined by MITRE (Li et al., 2018).
### Code Generation
Large language models represent a major advancement in current deep learning developments. With increasing size, their learning capacity allows them to be applied to a wide range of tasks, including code generation for AI-assisted pair programming: given a prompt describing the function, the model generates suitable code. Besides open-source models, e.g., Codex (Li et al., 2019) and CodeGen (Li et al., 2019), there are also released tools and SDK extensions, like GitHub Copilot, that are easily accessible.
In this work, we mainly focus on two different models, namely _CodeGen_ and _Codex_. Both models and their architecture are described below:
_CodeGen._ CodeGen is a collection of models with different sizes for code synthesis (Li et al., 2019). Throughout this paper, all experiments are performed with the second largest model with 6 billion parameters. The transformer-based autoregressive language model is trained on natural language and programming language data consisting of a collection of three data sets: a corpus that includes GitHub repositories with >100 stars (ThePile), a multi-lingual dataset (BigQuery), and a mono-lingual dataset in Python (BigPython). CodeGen is trained with a next-token prediction language modeling objective.
_Codex._ The Codex model is fine-tuned on GPT-3 (Li et al., 2019), a generic transformer-based autoregressive language model trained on natural text. For fine-tuning, 54 million public software repositories hosted on GitHub are used in the training set. The final Codex model has in total 12 billion parameters and is also used for GitHub Copilot.
Figure 2. Python CWE-502 from (Li et al., 2018), showing an example for deserialization of untrusted data.
## 4. Systematic Security Vulnerability Discovery of Code Generation Models
We propose an approach for automatically and systematically finding security vulnerability issues of black-box code generation models and their responsible input prompts (we call them non-secure prompts). To achieve this, we trace non-secure prompts that lead the target model to generate vulnerable code(s). We formulate the problem of generating non-secure prompts as a model inversion problem: using the inverse of the code generation model and the generated vulnerable codes, we can automatically generate a list of non-secure prompts. For this, we have to tackle the following major obstacles: (1) we do not have access to the distribution of the generated vulnerable codes, and (2) obtaining the inverse of a black-box model is not straightforward. To solve these two issues, we propose a novel black-box model inversion via few-shot prompting: by providing examples, we guide the code generation model to approximate the inverse of itself.
In the following, we describe our black-box model inversion approach. We can consider the code generation model as a function \(\mathbf{F}\). Given a prompt \(\mathbf{x}\), containing the first lines of the desired code, we can complete \(\mathbf{x}\) using the code generation model, \(\mathbf{y}=\mathbf{F}(\mathbf{x})\), where \(\mathbf{y}\) is the completion of the provided prompt \(\mathbf{x}\). In this paper, we consider the entire code (the input prompt together with the output of the model) as the _completed code_.
### Inverting Black-box Code Generation Models via Few-shot Prompting
Inverting black-box large language models is a challenging task. In the black-box scenario, we do not have access to the architecture, parameters, and gradient information of the model. Even in white-box settings, this typically requires training a dedicated model. In this work, we employ few-shot prompting to approximate the inverse of model F. Using a few examples of desired input-output pairs, we guide the model F to approximate F\({}^{-1}\).
In this work, we investigate three different versions of few-shot prompting for model inversion using different parts of the code examples. This includes using the entire vulnerable code, the first few lines of the codes, and providing only one example. The approaches are described in detail below.
#### 4.1.1. FS-Code
Equation 3 provides the formulation of our approach to approximate the inversion of the black-box model F.
\[\text{FS-Code:}\quad\text{prompt}=\mathbf{F}^{-1}(\text{code})\approx\mathbf{F}(\text{code}_{1},\text{prompt}_{1},\ldots,\text{code}_{n},\text{prompt}_{n},\text{code}) \tag{3}\]
We guide the model \(\mathbf{F}\) to approximate \(\mathbf{F}^{-1}\) by providing a few examples of codes with the desired security vulnerability. In Equation 3, the prompt placeholders represent the first few lines of a vulnerable code; in this paper, we call each of them a _non-secure prompt_. These non-secure prompts can contain library imports, function definitions, and comments. The code placeholders represent the rest of the vulnerable codes. Note that in Equation 3, we provide a few examples of vulnerable codes together with their corresponding non-secure prompts to guide the model to generate non-secure prompts. We add a code snippet to the end of the provided examples to prime the model to generate a non-secure prompt for it. In the rest of the paper, we call this approach FS-Code (Few-Shot-Code). Figure 5 provides an example of a few-shot prompt for the FS-Code approach. In Figure 5, we separate the examples using ***. To separate the vulnerable part of the codes and the first few lines of the codes, we use **second** and **first** tags, respectively. Note that to prime the model to generate relevant non-secure prompts, we also provide a few libraries at the end of the few-shot prompt. To provide the vulnerable code examples for the few-shot prompts for FS-Code and the other two approaches (FS-Prompt and OS-Prompt), we use three different sources: (1) the examples provided in the dataset published by Siddiq and Santos (2010), (2) examples provided in the CodeQL (Zhou et al., 2011) documentation, and (3) published vulnerable code examples by Pearce et al. (2011).
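The sketch below shows one way such an FS-Code few-shot prompt could be assembled programmatically. The separator and the `<code>`/`<prompt>` tag strings are placeholders of our own choosing and do not reproduce the exact markers used in Figure 5.

```python
# Sketch: assemble an FS-Code style few-shot prompt from (prompt, completion) pairs.
# The separator and tag strings are illustrative placeholders, not the exact
# markers used in Figure 5.
from typing import List, Tuple

SEPARATOR = "***"

def build_fs_code_prompt(examples: List[Tuple[str, str]], lib_hint: str) -> str:
    """examples: (non_secure_prompt, rest_of_vulnerable_code) pairs."""
    parts = []
    for prompt_part, completion_part in examples:
        full_code = prompt_part + completion_part
        # Each example shows a full vulnerable code followed by its non-secure prompt.
        parts.append(f"{SEPARATOR}\n<code>\n{full_code}\n<prompt>\n{prompt_part}")
    # End with a few relevant libraries so the model is primed to emit a fresh
    # non-secure prompt as its continuation.
    parts.append(f"{SEPARATOR}\n<code>\n{lib_hint}\n<prompt>\n")
    return "\n".join(parts)
```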
Figure 4. Overview of our proposed approach to automatically finding security vulnerability issues of the code generation models.
Figure 5. An example of a few-shot prompt of our FS-Code approach. The few-shot prompt is constructed by the codes containing the deserialization of untrusted data (CWE-502).
#### 4.1.2. FS-Prompt
We investigate two other variants of our few-shot prompting approach. In Equation 4, we introduce FS-Prompt (Few-Shot-Prompt).
\[\text{FS-Prompt:}\quad\text{prompt}=\mathbf{F}^{-1}(\text{code})\approx\mathbf{F}(\text{prompt}_{1},\ldots,\text{prompt}_{n}) \tag{4}\]
Here, we only use non-secure prompts, without the rest of the code, to guide the model to generate variations of the prompts. By providing a few examples of non-secure prompts, we prime the model \(\mathbf{F}\) to generate relevant non-secure prompts. We use the first few lines of the vulnerable code examples that we used for FS-Code. To construct the few-shot prompt for this approach, we only used the parts with **second** tag in Figure 5.
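In code, the only change relative to the FS-Code builder sketched above is that the code bodies are dropped; a correspondingly reduced sketch (again with placeholder separators and tags):

```python
# Sketch: FS-Prompt few-shot input, built from non-secure prompt examples only.
SEPARATOR = "***"

def build_fs_prompt(prompt_examples) -> str:
    parts = [f"{SEPARATOR}\n<prompt>\n{p}" for p in prompt_examples]
    # End with an opening tag so the model continues with a new non-secure prompt.
    parts.append(f"{SEPARATOR}\n<prompt>\n")
    return "\n".join(parts)
```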
#### 4.1.3. OS-Prompt
OS-Prompt (One-Shot-Prompt) in Equation 5 is another variant of our approach, where we use only one example of non-secure prompts to approximate \(\mathbf{F}^{-1}\). To construct a one-shot prompt for this approach, we only used one example of the parts with **second** tag in Figure 5.
\[\text{OS-Prompt:}\quad\text{prompt}=\mathbf{F}^{-1}(\text{code})\approx\mathbf{F}(\text{prompt}_{1}) \tag{5}\]
We investigate the effectiveness of each approach in approximating \(\mathbf{F}^{-1}\) to generate non-secure prompts by conducting a set of different experiments.
### Sampling Non-secure Prompts and Finding Vulnerable Codes
Using the proposed approximation of \(\mathbf{F}^{-1}\), we generate non-secure prompts that potentially lead the model \(\mathbf{F}\) to generate codes with particular security vulnerabilities. Given the output distribution of the \(\mathbf{F}\), we sample multiple different non-secure prompts using a beam search algorithm (Kumar et al., 2017; Kumar et al., 2017) and random sampling. Sampling multiple non-secure prompts allows us to find the models' security vulnerabilities at a large scale. Lu et al. (Lu et al., 2017) show that the order of examples in few-shot prompting affects the output of the models. Therefore, to increase the diversity of the generated non-secure prompts, in FS-Code and FS-Prompt, we use a set of few-shot prompts with permuted orders. We provide the details of the different few-shot prompt sets in section 5.
Given a large set of generated non-secure prompts and model \(\mathbf{F}\), we generate multiple potentially vulnerable code samples and spot security vulnerabilities of the target model via static analysis. To generate potentially vulnerable code using the generated non-secure prompts, we employ different strategies (e.g., beam search algorithm) to sample a large set of different codes.
### Confirming Security Vulnerability Issues of Identified Samples
We employ our approach to sample a large set of non-secure prompts, which can be used to generate a large set of codes from the targeted model. Using the sampled non-secure prompts and their completions, we can construct the completed codes. To analyze the security vulnerabilities of the generated codes, we query the constructed codes via CodeQL (Kumar et al., 2017) to obtain a list of potential vulnerabilities.
In the process of generating non-secure prompts which lead to a specific type of vulnerability, we provide the few-shot input from the targeted CWE type. Specifically, if we want to sample "SQL Injection" (CWE-089) non-secure prompts, we provide a few-shot input with "SQL Injection" security vulnerabilities.
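Putting the three steps together, the overall discovery loop can be summarized by the following sketch, in which the sampling and analysis routines are passed in as callables; the function and parameter names are our own placeholders.

```python
# Sketch of the overall discovery loop: sample non-secure prompts from the
# (approximately) inverted model, complete them, and confirm vulnerabilities
# with static analysis.  All names here are illustrative placeholders.
def find_vulnerable_codes(sample_fn, analyze_fn, few_shot_prompts, k=5, k_prime=5):
    """sample_fn(prompt, n, max_new_tokens) -> list of generated strings;
    analyze_fn(code) -> list of detected CWE identifiers (possibly empty)."""
    findings = []
    for fs_prompt in few_shot_prompts:                        # e.g. five permuted few-shot prompts
        for ns_prompt in sample_fn(fs_prompt, n=k, max_new_tokens=25):
            for completion in sample_fn(ns_prompt, n=k_prime, max_new_tokens=150):
                code = ns_prompt + completion                 # prompt + completion = completed code
                cwes = analyze_fn(code)
                if cwes:
                    findings.append((ns_prompt, code, cwes))
    return findings
```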
## 5. Experiments
In this section, we present the results of our experimental evaluation. First, we explain the details of the experimental setup. Then we provide the results of finding the models' security vulnerabilities and study the efficiency and scalability of the proposed approach. We also investigate the transferability of the generated non-secure prompts among the different models. Furthermore, we show that our approach is capable of finding security vulnerability issues in GitHub Copilot.
### Setup
We start with an overview of the setup, including the details of the models, few-shot prompts, sampling strategies, and the CodeQL settings.
#### 5.1.1. Code Generation Models
In our experiments, we focus on CodeGen with 6 billion parameters (Kumar et al., 2017) and the Codex (Codex, 2017) model with 12 billion parameters. We provide the details of each model in Section 3.3. In addition to these two models, we also provide the results for the GitHub Copilot AI programming assistant (Kumar et al., 2017).
We conduct the experiments for the CodeGen model using two NVIDIA 40GB Ampere A100 GPUs. To run the experiments on Codex, we use the OpenAI API (Codex, 2017) to query the model. In the generation process, we consider generating up to 25 and 150 tokens for non-secure prompts and code, respectively. We use beam search to sample \(k\) non-secure prompts from CodeGen. Using each of the \(k\) sampled non-secure prompts, we sample \(k^{\prime}\) completions of the given input non-secure prompts. For the Codex model, we also set the number of samples for generating non-secure prompts and code to \(k\) and \(k^{\prime}\), respectively. In total, we sample \(k\times k^{\prime}\) completed codes. For both models, we set the sampling temperature to 0.7, where the temperature describes the randomness of the model's output and, therefore, its variance. The higher the temperature, the more random the output, while 0.0 would always output the most likely prediction. For the few-shot prompts, we use three different sources: the examples provided in the dataset published by Siddiq and Santos (Siddiq and Santos, 2017), examples provided by CodeQL (Kumar et al., 2017), and published vulnerable code examples by Pearce et al. (Pearce et al., 2017).
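For the open CodeGen model, this sampling step can be reproduced with the Hugging Face `transformers` library along the following lines; the checkpoint name is our assumption for the 6-billion-parameter model, and the snippet uses plain temperature sampling rather than the exact mix of beam search and random sampling described above.

```python
# Sketch: sampling k prompts / k' completions from CodeGen with Hugging Face
# transformers.  The checkpoint name is an assumption; the paper combines beam
# search and random sampling, whereas this sketch uses temperature sampling only.
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "Salesforce/codegen-6B-multi"   # assumed Hub id for the 6B model

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(CHECKPOINT)

def sample_completions(prompt: str, n: int, max_new_tokens: int) -> list:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.7,             # sampling temperature used in the experiments
        num_return_sequences=n,      # k non-secure prompts or k' completions
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]
```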
#### 5.1.2. Constructing Few-shot Prompts
We use the few-shot setting in FS-Code and FS-Prompt to guide the models to generate the desired output. Previous work has shown that the optimal number for the few-shot prompting is between two to ten examples (Kumar et al., 2017; Kumar et al., 2017). Due to the difficulty in accessing potential security vulnerability code examples, we set the number to four in all of our experiments for FS-Code and FS-Prompt.
To construct each few-shot prompt, we use a set of four examples for each CWEs in Table 1. The examples in the few-shot prompts are separated using a special tag (**##)**. It has been shown that the order of examples affects the output (Lu et al., 2017). To generate a diverse set of non-secure prompts, we construct five few-shot prompts with four
examples by randomly shuffling the order of examples. In total, for each type of CWE, we use four examples. Using these four examples, we construct five different few-shot prompts. Note that each of the four examples contains at least one security vulnerability of the targeted CWE. Using the five constructed few-shot prompts, we can sample \(5\times k\times k^{\prime}\) completed codes from each model.
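A minimal way to obtain these order-permuted few-shot prompts is sketched below; `build_prompt_fn` stands for a prompt builder such as the FS-Code sketch in Section 4.1.1.

```python
# Sketch: build five few-shot prompts from the same four examples by shuffling
# their order, to increase the diversity of the sampled non-secure prompts.
import random

def permuted_few_shot_prompts(build_prompt_fn, examples, n_prompts=5, seed=0):
    rng = random.Random(seed)
    prompts = []
    for _ in range(n_prompts):
        shuffled = list(examples)        # the four examples for the targeted CWE
        rng.shuffle(shuffled)
        prompts.append(build_prompt_fn(shuffled))
    return prompts
```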
#### 5.1.3. CWEs and CodeQL Settings
By default, CodeQL provides queries to discover 29 different CWEs in Python code. Here, we generate non-secure prompts and codes for 12 different CWEs, listed in Table 1. However, we analyzed the generated codes to detect all 29 different CWEs. We summarize all CWEs that are not in the list in Table 1 but are found during the analysis as _Other_.
### Evaluation
In the following, we present the evaluation results and discuss the main insights of these results.
#### 5.2.1. Generating Codes with Security Vulnerabilities
We evaluate our different approaches for finding vulnerable codes that are generated by the CodeGen and Codex models. We examine the performance of our FS-Code, FS-Prompt, and OS-Prompt in terms of quality and quantity. For this evaluation, we use five different few-shot prompts by permuting the input order. We provide the detail of constructing these five few-shot prompts using four code examples in subsection 5.1. Note that in one-shot prompts for OS-Prompt, we use one example in each one-shot prompt, followed by importing relevant libraries. In total, using each few-shot prompt or one-shot prompt, we sample top-5 non-secure prompts, and each sampled non-secure prompts is used as input to sample top-5 code completion. Therefore using five few-shot or one-shot prompts, we sample \(5\times 5\times 5\) (125) complete codes from CodeGen and Codex models.
Figure 7 shows the effect of the number of sampled codes on the number of discovered vulnerable codes. We provide more details and results in Appendix A.
Qualitative Examples. Figure 8 and Figure 9 provide two examples of vulnerable codes generated by CodeGen and Codex, respectively. These two codes contain a security vulnerability of type CWE-502 (deserialization of untrusted data). In Figure 8, lines 1 to 8 are used as the non-secure prompt, and the rest of the code example is the completion for the given non-secure prompt. The code contains a vulnerability in line 14, where the code deserializes data without sufficiently verifying the data. In Figure 9, lines 1 to 9 are the non-secure prompt, and the rest of the code is the output of Codex given the non-secure prompt. The code contains a vulnerability of type CWE-502 in line 14.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline Methods & CWE-020 & CWE-022 & CWE-078 & CWE-079 & CWE-089 & CWE-094 & CWE-117 & CWE-327 & CWE-502 & CWE-601 & CWE-611 & CWE-732 & Other & Total \\ \hline FS-Codes & 4 & 19 & **4** & 25 & 3 & 0 & **15** & **45** & 4 & **11** & **12** & **12** & **32** & **186** \\ FS-Prompts & 0 & 22 & 1 & 27 & 4 & 0 & 7 & **45** & 6 & 6 & 3 & 4 & 16 & 141 \\ OS-Prompt & **10** & **28** & 2 & **40** & 1 & 0 & 6 & 20 & 2 & 1 & 7 & 1 & 27 & 145 \\ \hline \hline \end{tabular}
\end{table}
Table 2. The number of discovered vulnerable codes that are generated by the CodeGen model using FS-Code, FS-Prompt, and OS-Prompt methods. Columns two to thirteen provide results for different CWEs (see Table 1). Column fourteen provides the number of found vulnerable codes with the other sixteen CWEs that are queried by CodeQL. The last column provides the sum of all codes with at least one security vulnerability.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline Methods & CWE-020 & CWE-022 & CWE-078 & CWE-079 & CWE-089 & CWE-094 & CWE-117 & CWE-327 & CWE-502 & CWE-601 & CWE-611 & CWE-732 & Other & Total \\ \hline FS-Codes & **8** & 43 & **21** & **81** & **25** & **20** & 39 & **82** & 23 & **29** & **41** & **22** & **47** & **481** \\ FS-Prompts & 1 & 40 & 8 & 79 & 10 & 4 & **41** & 50 & **49** & 28 & 18 & 1 & 39 & 370 \\ OS-Prompt & 0 & **47** & 3 & 49 & 5 & 3 & 16 & 55 & 12 & 13 & 5 & 9 & 35 & 252 \\ \hline \hline \end{tabular}
\end{table}
Table 3. The number of discovered vulnerable codes that are generated by the Codex model using FS-Code, FS-Prompt, and OS-Prompt methods. Columns two to thirteen provide results for different CWEs (see Table 1). Column fourteen provides the number of found vulnerable codes with the other sixteen CWEs that are queried by CodeQL. The last column provides the sum of all codes with at least one security vulnerability.
Figure 6. Percentage of the discovered vulnerable codes using the non-secure prompts that are generated for specific CWE. (a), (b), and (c) provide the results of the generated code by CodeGen model using FS-Code, FS-Prompt, and OS-Prompt, respectively. (d), (e), and (f) provide the results of the generated code by the Codex model using FS-Code, FS-Prompt, and OS-Prompt, respectively.
#### 5.2.3. Transferability of the Generated Non-secure Prompts
In the previous experiments, we generated the non-secure prompts and completed codes using the same model. Now we investigate if the generated non-secure prompts are transferable among different models. For example, we want to answer the question of whether the generated non-secure prompts by Codex can lead the CodeGen model to generate vulnerable codes. For this experiment, we collect a set of "promising" non-secure prompts that we have generated with the CodeGen and Codex models in Section 5.2.2. We consider a non-secure prompt as promising if it at least leads the model to generate one vulnerable code. After de-duplication, we collected 381 of the non-secure prompts that were generated by the CodeGen
```
1  from flask import request
2  import pickle
3  import yaml
4
5  def getData(data):
6      ...
7
8      return unixtime,
9      the number of transactions in the database
10     the number of users in the database
11     ...
12     data = request.form.get('data')
13     # data = request.data
14     data = pickle.loads(data)
15     data = data[0]
16     # print data
17     return data
```
Figure 8. A vulnerable code example generated by CodeGen. The code contains CWE-502 vulnerability in line 14. In this example, the first eight lines are the non-secure prompts, and the rest of the code is the completion of the given non-secure prompts.
Figure 7. The number of discovered vulnerable codes versus the number of sampled codes generated by (a) CodeGen, and (b) Codex. The non-secure prompts and codes are generated using our FS-Code method.
model and 537 non-secure prompts that the Codex model generated. All of the prompts were generated using our FS-Code approach.
To examine the transferability of the promising non-secure prompts, we use CodeGen to complete the non-secure prompts that Codex generates. Furthermore, we use Codex to complete the non-secure prompts that CodeGen generates. Table 4 provides the results of vulnerable codes generated by the CodeGen and Codex models using promising non-secure prompts that are generated by the CodeGen and Codex models. We sample \(k^{\prime}=5\) for each of the given non-secure prompts. In Table 4, #Code refers to the number of the generated codes, and #Vul refers to the number of codes that contain at least one CWE. Table 4 shows that non-secure prompts that we sampled from CodeGen are also transferable to the Codex model and vice versa. Specifically, the non-secure prompts that we sampled from one model generate a high number of vulnerable codes in the other model. For example, in Table 4, we observe that the non-secure prompts generated by CodeGen lead Codex to generate 552 vulnerable codes. We also observe that the non-secure prompts lead to more vulnerable codes on the same model compared to the other model. For example, non-secure prompts generated by Codex lead Codex to generate 1540 vulnerable codes, while they only lead to 938 vulnerable codes on the CodeGen model. Furthermore, Table 4 shows that the non-secure prompts of the Codex model generate a higher fraction of vulnerabilities for CodeGen (\(938/2685=0.34\)) in comparison to CodeGen's own non-secure prompts (\(590/1905=0.30\)).
#### 5.2.4. Finding Security Vulnerabilities in GitHub Copilot
Finally, we evaluate the capability of our FS-Code approach in finding security vulnerabilities of the black-box commercial model GitHub Copilot. GitHub Copilot employs Codex family models (Kumar et al., 2017) via OpenAI APIs. This AI programming assistant uses a particular prompt structure to complete the given codes. This includes suffix and prefix of the user's code together with information about other written functions (Kumar et al., 2017). The exact structure of this prompt is not publicly documented. We evaluate our FS-Code approach by providing five few-shot prompts for different CWEs (following our settings in previous experiments). As we do not have access to the GitHub Copilot model or their API, we manually query GitHub Copilot to generate non-secure prompts and codes via the available Visual Studio Code extension (Kumar et al., 2018). Due to the labor-intensive work in generating the non-secure prompts and codes, we provide the results for the first four of twelve representative CWEs. These CWEs include CWE-020, CWE-022, CWE-078, and CWE-079 (see Table 1 for a description of these CWEs). In the process of generating non-secure prompts and the code, we query GitHub Copilot to provide the completion for the given sequence of the code. In each query, GitHub Copilot returns up to 10 outputs for the given code sequence. GitHub Copilot does not return duplicate outputs; therefore, the output could be less than 10 in some cases. To generate non-secure prompts, we use the same constructed few-shot prompts that we use in our FS-Code approach. After generating a set of non-secure prompts for each CWE, we query GitHub Copilot to complete the provided non-secure prompts and then use CodeQL to analyze the generated codes.
Table 5 provides the results of vulnerable codes generated by GitHub Copilot using our FS-Code approach. The results are the number of codes with at least one vulnerability. In total, we generate 783 codes using 109 prompts for all four CWEs. In Table 5, columns 2 to 5 provide results for different CWEs, and column 6 provides the sum of the codes with other CWEs that CodeQL detects. The last column provides the sum of the codes with at least one security vulnerability. In Table 5, we observe that our approach is also capable of finding security vulnerability issues in a black-box commercial model.
Figure 10 and Figure 11 show two examples of the codes generated by GitHub Copilot that contain security vulnerabilities. Figure 10 depicts a generated code that contains CWE-022, which is known as a path traversal vulnerability. In this example, lines 1 to 6 are the non-secure prompt, and the rest of the code is the completion of the given non-secure prompt. The code in Figure 10 contains a path traversal vulnerability at line 10, where it enables arbitrary file write during tar file extraction. Figure 11 shows a generated code that contains CWE-079; this issue is related to cross-site scripting attacks. Lines 1 to 8 of Figure 11 contain the input non-secure prompt, and the rest of the code is the completion of the non-secure prompt. The code in this figure contains a cross-site scripting vulnerability in line 12.
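The path-traversal pattern behind Figure 10 is of the following general form (our own minimal illustration, not the generated code itself): extracting an attacker-supplied archive without checking the member paths allows entries containing `..` to escape the destination directory.

```python
# Minimal illustration of the CWE-022 pattern described for Figure 10
# (a toy example, not the Copilot-generated code).
import os
import tarfile

def extract_unsafe(archive_path: str, dest: str) -> None:
    with tarfile.open(archive_path) as tar:
        tar.extractall(dest)   # vulnerable: members like "../../etc/passwd" escape dest

def extract_safer(archive_path: str, dest: str) -> None:
    dest_root = os.path.realpath(dest)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if not (target == dest_root or target.startswith(dest_root + os.sep)):
                raise ValueError(f"blocked path traversal attempt: {member.name}")
        tar.extractall(dest)
```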
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Generated prompts} \\ & \multicolumn{2}{c}{CodeGen} & \multicolumn{2}{c}{Codex} \\ \hline & \#Code & \#Vul & \#Code & \#Vul \\ CodeGen & 1905 & 590 & 2685 & 938 \\ Codex & 1905 & 552 & 2685 & 1540 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Transferability of the generated non-secure prompts. Each row shows the models that have been used to generate the codes using the provided non-secure prompts. Each column shows the prompts that were generated using different models. #Code indicates the number of generated codes, and #Vul refers to the number of vulnerable codes.
Figure 10. A vulnerable code example generated by GitHub Copilot. The code contains a CWE-022 vulnerability in line 10. In this example, the first six lines are the non-secure prompts, and the rest of the code is the completion of the given non-secure prompts.
## 6. Discussion
In contrast to manual methods, our approach can systematically find CWEs and non-secure prompts and is therefore scalable for finding more vulnerability issues within code language models. This allows extending our benchmark of promising non-secure prompts with more examples per CWE and adding more CWEs in general. By publishing the implementation of our approach, we enable the community to contribute more CWEs and extend our dataset of promising non-secure prompts.
For our work, we have focused on CWEs that can be detected with static analysis tools, for which we use CodeQL to detect and classify the found vulnerabilities. As with any analysis tool, CodeQL might lead to false positives, and the tool will not be able to find all kinds of CWEs. Specific CWEs require considering the context during execution, which can be done via other program analysis techniques or dynamic approaches such as fuzzing that permute the input to trigger bugs. Nevertheless, we have shown that our approach successfully finds non-secure prompts for different CWEs, and we expect this to be extendable without changing our general few-shot approach. Therefore, our benchmark can be augmented in the future with different kinds of vulnerabilities and code analysis techniques.
In our evaluation, we have shown that the found non-secure prompts are transferable across different language models, meaning that non-secure prompts that we sample from one model will also generate a significant number of codes with CWEs if they are used with another model. Specifically, we have found that non-secure prompts sampled via Codex can even find a higher fraction of vulnerabilities generated via CodeGen.
In our experiments with GitHub Copilot, we have shown that our few-shot prompting approach also works for commercial black-box models, and specifically for a model that is already used by millions of developers. This also indicates that vulnerabilities in automatically generated code are not solely an academic problem but already an issue that needs to be considered during the development of AI-assisted pair programming tools. Even though a developer will take care that their code is as secure as possible, they cannot check for all cases, and utilizing a model that does not generate vulnerable code, or that suppresses it, can already prevent a lot of potential harm and unpredictable costs.
## 7. Conclusions
There have been a tremendous amount of advances in large-scale language models for code generation, and state-of-the-art models are now used by millions of programmers every day. Unfortunately, we do not yet fully understand the shortcomings and limitations of such models, especially with respect to insecure code generated by different models. Most importantly, we lack a method for systematically identifying prompts that lead to code with security vulnerabilities. In this paper, we have presented an automated approach to address this challenge. We introduced a novel black-box inversion approach based on few-shot prompting, which allows us to automatically find different sets of security vulnerabilities of the black-box code generation models. More specifically, we provide examples in a specific way that allows us to guide the code generation models to approximate the inverse of themselves. We investigated three different few-shot prompting strategies and used static analysis methods to check the generated code for potential security vulnerabilities.
We empirically evaluated our method using the CodeGen and Codex models and the commercial black-box implementation of GitHub Copilot. We showed that our method is capable of finding 1000s of security vulnerabilities in these code generation models. To foster research on this topic, we publish the set of de-duplicated promising non-secure prompts that are generated by the CodeGen and Codex models as a benchmark to investigate the security vulnerabilities of current code generation models. We use 381 non-secure prompts that are generated by CodeGen and 537 non-secure prompts that are generated by Codex as a set of promising non-secure prompts. This set can be used to evaluate and compare the vulnerabilities of the models regarding various CWEs.
Figure 11. A vulnerable code example generated by GitHub Copilot. The code contains a CWE-079 vulnerability in line 12. In this example, the first eight lines are the non-secure prompts, and the rest of the code is the completion of the given non-secure prompts.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & CWE-020 & CWE-022 & CWE-078 & CWE-079 & Other & Total \\ \hline GitHub Copilot & 21 & 80 & 26 & 108 & 8 & 243 \\ \hline \hline \end{tabular}
\end{table}
Table 5. The number of discovered vulnerable codes that are generated by GitHub Copilot using FS-Code. Columns two to five provide results for different CWEs (see Table 1). Column six provides the number of discovered vulnerable codes with the other CWEs that are queried by CodeQL. The last column provides the sum of all codes with at least one security vulnerability. |
2303.03454 | Photonic quantum computing with probabilistic single photon sources but
without coherent switches | We present photonic quantum computing architectures that can deal with both
probabilistic (heralded) generation of single photons and probabilistic gates
without making use of coherent switching. The only required dynamical element
is the controllable absorption of all photons in a given mode. While the
architectures in theory scale polynomially in the resources required for
universal quantum computation, as presented their overhead is large and they
are illustrative extreme points in the configuration space of photonic
approaches, rather than a recipe that anybody should seriously pursue. They do,
however, prove that many things presumed necessary for photonic quantum
computing, in fact are not. Of potentially independent interest may be that the
architectures make use of qubits which have many possible microstates
corresponding to a single effective qubit state, and the technique for dealing
with probabilistic operations is to, when necessary, just enlarge the set of
such microstates to incorporate all possibilities, while making heavy use of
the subsequent ability to `coherently erase' which particular microstate a
given qubit is in. | Terry Rudolph | 2023-03-06T19:24:51Z | http://arxiv.org/abs/2303.03454v1 | # Photonic quantum computing with probabilistic single photon sources but without coherent switches
###### Abstract
We present photonic quantum computing architectures that can deal with both probabilistic (heralded) generation of single photons and probabilistic gates without making use of coherent switching. The only required dynamical element is the controllable absorption of all photons in a given mode. While the architectures in theory scale polynomially in the resources required for universal quantum computation, as presented their overhead is large and they are illustrative extreme points in the configuration space of photonic approaches, rather than a recipe that anybody should seriously pursue. They do, however, prove that many things presumed necessary for photonic quantum computing, in fact are not. Of potentially independent interest may be that the architectures make use of qubits which have many possible microstates corresponding to a single effective qubit state, and the technique for dealing with probabilistic operations is to, when necessary, just enlarge the set of such microstates to incorporate all possibilities, while making heavy use of the subsequent ability to "coherently erase" which particular microstate a given qubit is in.
_This paper is prepared for the Jonathan P. Dowling memorial issue of AVS Quantum Science. The last scientific discussion I had with Jon, circa mid-2017, he had read Ref. [1] and was pressing me on aspects of linear optical quantum computing and silicon photonics. I had recently realized that architectures such as the ones presented herein were also possible, and I tried to tell him "everything you think you know about what is strictly necessary for LOQC is wrong!" Unfortunately at that time I couldn't tell him in detail about these results, and the chance never came._
## I Preamble
This article contains the concatenation of two papers written between 2016-18 but never published. Other than minor editing and replacement of the introduction and conclusions by Sec. II, they are unchanged. This means the underlying assumptions about how to prove an architecture universal are based on slightly overcomplicated ideas regarding generation of cluster states and percolation theory and so on. Since that time we have realized that rather than overcoming the non-determinism of photonic gates by teleportation through implausibly large ancilla states [2], or via cluster state computation [3] using percolated graph states [4; 5; 6; 7] (which accumulate error as they are renormalized), it is possible to do _fusion based quantum computing_ [8] (FBQC) where we use a high erasure-threshold code to directly deal with _both_ the non-determinism of the gates and all error correction simultaneously. The exposition of the results in this article could have been a little clearer if FBQC had been the stated target from the start.
## II Overview of the architectures
It is typically presumed that high-efficiency sources of single photons will be necessary to do photonic quantum computing 2. Roughly speaking there are two approaches to the construction of such: generate the photons from a deterministic emitter (ions, quantum dots etc), or generate pairs of photons randomly using a weak nonlinear-optical process, and herald
Figure 1: An extremely high level overview of the first of the multi-rail architectures. Grey lines denote spatial modes. Spatial modes/time bins are occupied by (heralded) random single photons (red). These pass through alternating sequences of passive interferometers (pink), detectors (green) which via classical feedforward (blue) control subsets of the modes to be blocked (black). Only the configurations of modes being blocked depend on the initial random state of the photons and of the computation being performed, the passive circuits (which may contain fixed length delays) are predetermined and independent of both. Note also that larger/longer computations only (polynomially) increases the number of modes and time bins, the optical depth (number of passive interferometers) experienced by any photon is independent of such.
the existence of the desired single photon by the detection of its partner. While the efficiency of the latter procedure may be low, considerations of achievable repetition rates, photon purity and compatibility with silicon photonics have led to proposals to ameliorate the low efficiency by _multiplexing_9. That is, we build many copies of the heralded-but-inefficient-source, and then use a switch to select out a success. The same principle can then be applied at subsequent stages wherein one probabilistically creates larger and larger entangled states, until achieving a size suitable for use in a computation.
Footnote 9: In certain technologies the controllable absorption-of/transparency-to light could be combined with the quantum Zeno effect to make an approximately coherent switch. That is not what we will be doing here.
Multiplexing in the fashion just described requires switches that maintain the quantum state of the light being switched. The switches used by the telecom industry to switch coherent (laser) light are also of this form, and from now on they will be referred to as _coherent switches_.
The architectures we present in this article investigate a different approach that lets us use randomly generated photons directly and yet requires no coherent switching! The approach still requires a dynamically controlled'switch', but it need only be of a "blocking" or "incoherent/filtering/absorbing" type10. This is achieved at the expense of building much larger (but fixed, passive) interferometers. The methods presented should be viewed very much as "proofs of principle" - although in theory they scale polynomially with the size of the computations under consideration, the scaling of the versions presented here is not practical.
A very high level summary of the first architecture is given in Fig. 1. Note that all reasonable photonic architectures must have an optical depth which is constant with the size of the computation - that is any individual photon passes through only a constant (and hopefully \(<10\)) number of components, because loss will accumulate exponentially if not. The architectures presented here satisfy this requirement.
Even for the non-connoisseur of photonic quantum computing there are aspects of these results which may have more general interest. For example, they make use of qubits that have many possible (orthogonal) microstates corresponding to a single qubit state. While this sounds similar to an error correcting code, the mechanism here still only uses a single physical photon per qubit. The advantage of being able to associate many microstates with a single qubit state is that, when limited to probabilistic processes, we can lump together _all_ possible microstates output from repeated copies of the probabilistic process and redefine our qubit to encompass them all.
Critical to getting away with this while still having our photons act as qubits is being able to perform _coherent erasure_ of microstate information, by which we mean a process wherein it is possible to destroy information regarding the specific microstate the system is in, while maintaining relevant phase information 'between' the microstates encoding different possible qubit states11.
Footnote 11: A simple example of ‘coherent erasure’ from standard quantum information would be destroying information about the computational basis value \(|0\rangle/|1\rangle\) by performing a measurement in the \(|\pm\rangle\) basis. This maintains phase relationships between systems that the erased qubit may be entangled with.
## III Paper I: Architecture for Photonic Quantum Computing Based on Multirail Encodings and Passive Multiplexing
### Stochastic Sources
In what follows we will only consider photons which occupy discrete spatiotemporal modes. That is, a photon can only be found in one of a finite set of discrete waveguides, which define spatial modes \(\vec{k_{i}}\), within a discrete time bin \(t_{j}\). The degree of temporal discretization is typically provided by the mechanism responsible for generating the photons, the full temporal extent of the photon's wavepacket is presumed to fit within the time bin.
One of our ultimate goals will be to show that even in the absence of coherent switches, rather than getting a photon into a single such spatiotemporal mode with high efficiency ("good single photon sources"), for universal photonic quantum computing it suffices to only get a single photon randomly into _any one_ of a set of spatiotemporal modes with high efficiency. The remaining modes within the set must be unoccupied (vacuum). Such a source will be termed a _stochastic source_, although it should be emphasized that the stochasticity refers to the fact that each use of the source will produce a single photon in a random (though known) spatiotemporal mode; the _efficiency_ - i.e. the probability that one (and only one) photon is produced - will still need to be close to 1.
Consider then some variations on how such a source might be constructed. Imagine we have one (heralded) probabilistic single-photon source with efficiency \(\eta\). By firing it \(m\) times, we could get lucky and have one and only one photon produced. This would happen with probability \(m\eta(1-\eta)^{m-1}\), which is an improvement in efficiency over the single source alone, although there is no way to make this probability arbitrarily close to 1. We could, however, arrange to fire the source \(m\) times, but turn off the pump as soon as a single photon has been produced. This 'dump-the-pump' style switching has the advantage that it does not involve directly switching the photons that will be our ultimate carriers of quantum information. The source efficiency would now be \(1-(1-\eta)^{m}\), which goes to 1 for large \(m\). A final alternative, achieving the same efficiency, is to fire the source \(m\) times, and if more than one photon is produced to filter the extra photons out, using an incoherent 'blocking' or 'absorbing' style of switch.
The \(m\)-mode source of the previous paragraph, however created, will be a "purely temporal" stochastic single-photon source, since the \(m\) modes are distinguished only by time - the photons all lie in the same waveguide (spatial mode). We can also consider constructing a high efficiency "purely spatial" stochastic source, that is one where all \(m\) modes are defined by multiple waveguides at a single time. For this case we could begin with \(m\) heralded single photon sources firing into
different waveguides. By firing some or all of them simultaneously, potentially also with dump-the-pump and/or blocking style switching (and suitable fixed delays), a wide variety of possibilities exist to make purely spatial stochastic sources with high efficiency.
The most general stochastic source would involve a mix of spatial and temporal modes. Whilst we might prefer to (say) use a purely spatial stochastic source for the single photons, at later stages in the architecture we may prefer to end up with our quantum information encoded in a mixed spatiotemporal form, and so will consider how to deal with this as well.
To capture the various types of sources in a compact notation, we will denote by an \([a,b;p]\)-source one that with probability \(p\) has a single photon in any one of \(a\) spatial modes and \(b\) temporal modes (and vacuum in all other modes). For example, a regular heralded source at \(20\%\) efficiency is a \([1,1;0.2]\)-source. By pumping it 16 times (and turning off the pump or filtering appropriately) we create a purely temporal \([1,16;0.972]\)-source, while by taking four such sources and pumping each up to 8 times we can create a mixed spatio-temporal \([4,8;0.999]\)-source.
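As a quick check of these numbers, here is a minimal sketch (in Python, illustrative only, not from the paper) of the binomial probabilities behind multiplexed stochastic sources built from heralded sources of efficiency \(\eta\):

```python
# Illustrative sketch: efficiencies of stochastic sources built by firing a
# heralded single-photon source (efficiency eta) m times.
def exactly_one(eta: float, m: int) -> float:
    """Probability that exactly one of m independent firings yields a photon
    (no pump-dumping and no filtering of extra photons)."""
    return m * eta * (1 - eta) ** (m - 1)

def at_least_one(eta: float, m: int) -> float:
    """Probability that at least one firing yields a photon, i.e. the efficiency
    of an [a,b;p]-source with a*b = m modes when the pump is dumped or the
    extra photons are blocked."""
    return 1 - (1 - eta) ** m

print(round(at_least_one(0.2, 16), 3))     # 0.972 -> a [1,16;0.972]-source
print(round(at_least_one(0.2, 4 * 8), 3))  # 0.999 -> a [4,8;0.999]-source
```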
### Multi-Rail encoding of qubits
A common way of encoding a photonic qubit is to pick any two modes of the electromagnetic field which differ only in a degree of freedom that can be manipulated passively with linear optics. For example, we might use polarization, angular momentum or spatial mode; for definiteness we will stick to using the latter unless otherwise indicated1. The logical2 qubit basis \(|0\rangle,|1\rangle\) is then defined across the two chosen modes via
Footnote 1: Encoding in frequency is fully compatible with the architectures - and in fact can make them considerably more spatially compact. Note that the coherent erasure (Hadamard) transformations would then require a "frequency beamsplitter", a dynamical object2. Here the emphasis is on avoiding all dynamical objects other than blocking-type switches.
Footnote 2: 'Logical' in this paper never refers to a logical qubit from the theory of fault tolerance. It refers to the 'bare' qubit for which a single qubit basis state might correspond to many physically-distinct states. Fault tolerant logical qubits need to be built on top of this architecture, e.g. via FBQC3
\[|0\rangle\Leftrightarrow|1\!\succ|0\!\succ,\quad|1\rangle\Leftrightarrow|0\!\succ|1\!\succ, \tag{1}\]
where \(|n\!\succ\) denotes a single-mode Fock (number) state occupied by \(n\) photons. This is often termed a 'dual-rail' encoding.
By increasing the number of modes one can certainly encode higher-dimensional (i.e. qudit) quantum information, although there do not appear to be large advantages to doing so14. Here we instead introduce a somewhat odd encoding, where we take \(2m\) discretized spatio-temporal modes, and the logical qubit state \(\ket{0}\) is defined by allowing the photon to be in _any one_ of the first \(m\) modes, while the logical state \(\ket{1}\) is defined by allowing the photon to be in any one of the second \(m\) modes. For example:
\[|0\rangle\Leftrightarrow|0,0,1,0\!\succ|0,0,0,0\!\succ=:|1_{3}\!\succ|0\!\succ\] \[|1\rangle\Leftrightarrow|0,0,0,0\!\succ|0,1,0,0\!\succ=:|0\!\succ|1_{2}\!\succ. \tag{2}\]
The stochastic sources of the preceding section are then interpreted as a computational basis state preparation of a multi-rail qubit.
For any given photon/qubit we will know which of the modes encode its computational basis, but, as the above example emphasizes, it is _not_ the case that the \(\ket{0}\), \(\ket{1}\) states for a single qubit need be defined using "matching" (i.e. complementary) pairs of modes \((i,m+i)\). This immediately raises the question, how can we (passively) do a single qubit gate? In the dual rail encoding, where there are only two (say) spatial modes, they are automatically "matched appropriately" (i.e. \(i=m=1\) the only option) and so combining the corresponding modes on a 2-mode beamsplitter performs the desired transformation whilst automatically remaining in the subspace defining the qubit. In the example of Eq. (2), a beamsplitter between modes 3 and 5 would be fine, but one between any other pairs of modes would take us out of the defined computational basis - for example giving terms that have both photons somewhere in modes \(1,\ldots 4\) or both in \(5,\ldots 8\), which are not multirail qubit states as defined. If we have to adapt the particular pair of modes we apply an operation to based upon which of the modes our (stochastic) single photon source happens to occupy, we would be back to requiring coherent switching. Fortunately, as we will see later, this will not be necessary.
Moreover, if the modes we are using span several time bins then we may need a 'temporal beamsplitter' of some form, a device that cannot be constructed with purely passive linear optics. Again, fortunately this will turn out to not be necessary.
### Production of maximally entangled states
In this section we restrict attention to production of multi-rail qubits from purely spatial stochastic sources.
#### iii.3.1 Review: Production of dual-rail encoded Bell pairs using deterministic sources
Recall a scheme for heralded generation of an entangled state5 from four single photons. Fig. 2(a) shows an interferometer with 8 spatial modes, that can take in four single photons (all in the same time bin) in spatial modes 1-4, and produce a maximally entangled pair of photons in the output spatial modes 1-4. Despite the apparent simplicity, the output state just prior to detection is a superposition of 144 terms - that is, a large amount of multiphoton interference occurs! Success is heralded when two of the four detected modes 5-8 detect a single photon (and the other two detectors register no photon). Success can therefore occur \(\binom{4}{2}=6\) different ways, the probability of success is 6/32, and the output state on modes 1-4 when it does occur is one of:
\[|1,0\!\succ|1,0\!\succ-|0,1\!\succ|0,1\!\succ\] \[|1,0\!\succ|0,1\!\succ-|0,1\!\succ|1,0\!\succ\] \[|1,1\!\succ|0,0\!\succ-|0,0\!\succ|1,1\!\succ\]
While these are all maximally entangled, the last possibility would require some shuffling around of modes to get it into the standard qubit dual-rail encoding, and henceforth we will not consider it as a successful generation of a photonic Bell state. As such the success probability reduces to 4/32.
#### iii.3.2 Production of multi-rail encoded Bell pairs using stochastic sources
In this section we overview a method to generate multi-rail Bell states using stochastic sources. The scheme has almost the same success probability as for the dual rail case. Quite remarkably, the success probability does _not_ depend on the size (i.e. total number of modes) of the stochastic source - the non-determinism of the source only requires construction of a larger interferometer.
An example of such multi-rail Bell state generation using four stochastic sources is given in Fig. 3. A \([4,1;1]\)-source at each of the inputs labelled 1-4 will lead to successful generation of a multi-rail encoded Bell pair across multi-rails 1,2 for the first qubit, and 3,4 for the second. The exact success probability depends on the particular modes randomly occupied by each of the four input photons, but we find it is never less than half of that obtained for the original scheme of Fig. 2 using deterministic photons, i.e. it is at least \(2/32\). This remains true even if we construct the obvious generalizations to \(8,16,32\ldots\) copies of the basic interferometer in order to deal with larger stochastic source inputs - that is, there is no further decrease in success probability no matter how many copies we use10
Footnote 10: It was this surprising observation - which ultimately arises because we are using interferometers restricted to only \(\pm 1\) phases, which prevents the interference from "going anywhere else" - that precipitated the discovery of these architectures.
Particularly intriguing are the cases where the four input photons happen to each enter a completely different interferometer. Success is heralded by one and only one photon at the detector(s), meaning that parts of the total quantum state that lead to success must have had only one photon entering the erasure beamsplitter just prior to the detector. Thus it becomes clear that no multi-photon interference need occur as part of the successful Bell state generation. By increasing the number of rails (copies of the initial interferometer) we could ensure vanishing amplitudes for multi-photon interference in both the success and failure parts of the wavefunction. All that is required is interference/erasure of "Feynman paths", not the bosonic nature of the photons, and therefore there are numerous other physical systems in which one can imagine implementing these basic procedures.
### Avoiding switches via "passive multiplexing"
As the entangled state generation just presented is probabilistic it is tempting to think we must now resort to standard multiplexing: using a coherent multimode switch to shuffle our spatially multi-rail encoded qubits to be dual rail encoded, and furthermore making many copies of the Bell state generator and using a switch to select out one which has succeeded.
Later we will show, however, that the remaining primitives of photonic quantum computing (fusion gates and creation of arbitrarily large cluster states) can all be performed via passive interferometers on multi-rail encoded qubits with the same probability they can be performed on dual-rail qubits. That is, the size of the multi-rail encoding does not affect the success probabilities, it changes only the size of the photonic circuits. Since we therefore need not (in principle!) care how many modes the multirail encoding is distributed over, we can simply build many such Bell state generators and trivially combine the outputs to make a Bell state with a larger multi-rail encoding. This is depicted in the simplest case of two spatially parallel Bell state generators in Fig. 5.
We can further imagine repeatedly firing many multi-rail Bell state generators and interpreting the output as a spatio-temporal stochastic source of multi-rail encoded Bell pairs. The efficiency of each is low (around 2/32), but (similar to the situation with single photons) there are various options to combine many such low-efficiency devices to create a near-deterministic Bell state source. This new source will generically be of a mixed spatio-temporal character, although a purely spatially encoded source could be built by generalizing Fig. 5. Note that failure outcomes for generating the Bell states leave undesired photons in output modes that would need to be filtered with a blocking type of switch, similar to the case of building a stochastic single photon source.
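To get a feel for the amount of passive multiplexing this implies, here is a rough, illustrative calculation (assuming the ~2/32 per-generator success probability quoted above) of how many Bell state generators must be combined before at least one succeeds with a given target probability:

```python
# Illustrative only: number of ~2/32-efficiency Bell state generators that must
# be passively multiplexed so at least one succeeds with probability >= target.
import math

p_bsg = 2 / 32
for target in (0.9, 0.99, 0.999):
    n = math.ceil(math.log(1 - target) / math.log(1 - p_bsg))
    print(f"target {target}: {n} generators")   # e.g. 0.99 -> 72 generators
```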
Figure 5: By increasing the number of rails over which the multi-rail qubit is encoded we can ‘parallelize’ and effectively ‘passively multiplex’ the entangled state production from many independent Bell state generators up to near unit efficiency without having to do any coherent switching at all. We do need to use a blocking (absorbing/filtering) incoherent switch in order to ensure extraneous photons from failed generators do not contaminate the multi-rail output. Here a purely-spatial passive multiplexing of two generators is depicted. Either (or both) of the Bell state generators could produce a multi-rail Bell pair with each qubit defined in a four spatial mode encoding, after the passive multiplexing the output qubits are defined across an eight mode encoding.
### Killing Time
A crucial element of our ability to use a (purely spatial) stochastic single photon source to generate Bell states is the existence of an "erasing" interferometer for spatial modes, namely the one described by the mode transformation \(H^{\otimes k}\). If we want to make use of mixed spatio-temporal sources we need to consider the extent to which similar procedures exist for erasure of temporal mode information. Such erasure will also be important for doing fusion gates with multi-rail qubits of a mixed spatiotemporal character.
In the Bell state generation of Fig. 3 we begin with four separate spatial interferometers, copies of the regular Bell state generator of Fig. 2. Consider a different scenario, where four stochastic sources of \([1,M;1]\) type (for some \(M\gg 1\)) are used at the four inputs of Fig. 2. Then we are effectively populating \(M\) independent copies of the Bell state generator. To mimic the structure of Fig. 3 - which erased spatial information via an \(H^{\otimes k}\) interferometer - we need an analogous method of erasing temporal information.
Typically manipulating quantum information using time-bin encodings necessitates the use of "active" (dynamical) elements, and our overarching goal here is to avoid such components. As we now show, it is possible to suitably _erase_ temporal information using passive generalizations of Franson14 interferometers, although there is a finite chance of (heralded) failure.
To understand the general strategy, consider a single photon in one spatial mode, but potentially in any one of a finite set of time bins \(t_{i}\). Imagine we can uniformly 'spread' the photon across many spatial and temporal modes:
\[|1_{\vec{k}_{1},t_{i}}\succ\longrightarrow\sum_{a,b}C_{a,b}(i)|1_{\vec{k}_{a },t_{b}}\succ, \tag{3}\]
where by 'many' we mean the discrete indices \(a,b\) have combined range much greater than the range of \(i\). As long as the coefficients \(C_{a,b}(i)\) are uniform in magnitude, and as long as \(C_{a,b}(i)\neq 0\) for all initial time bins \(t_{i}\) over a large range of \(a,b\), then when we detect the photon we will have erased our knowledge of which time bin \(t_{i}\) it originated in. Note that the phases of the \(C_{a,b}(i)\) are important to know since our photon is typically entangled with others.15 Some examples of interferometers which achieve this are shown in Fig. 6. For these examples there will be some values of the detected spatial mode/time bin (i.e. values of \(\vec{k}_{a},t_{b}\)) which can only arise from a single initial time bin, or more generally from less than the full set of initial time bins \(t_{i}\) of interest, and such detections amount to a failure of the temporal erasure.
Footnote 15: This is why erasing time bin information using jittery detectors etc is not sufficient! We require _coherent erasure_.
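As a toy illustration of this erasure strategy (using an assumed beamsplitter/delay/beamsplitter layout, not the exact circuits of Fig. 6), the following sketch spreads a photon that started in one of two time bins over one ancilla spatial mode and three time bins; detections in the overlapping time bin erase the initial timing information, while the other detections herald failure:

```python
# Toy model of coherent temporal erasure: states are dicts mapping
# (spatial mode, time bin) -> amplitude for a single photon.
from collections import defaultdict

def beamsplitter(state):
    """50:50 beamsplitter (Hadamard) on the two spatial modes, time bin by time bin."""
    out = defaultdict(complex)
    for (s, t), a in state.items():
        out[(0, t)] += a / 2 ** 0.5
        out[(1, t)] += (a if s == 0 else -a) / 2 ** 0.5
    return dict(out)

def delay(state, n=1):
    """Delay spatial mode 1 by n time bins."""
    return {(s, t + n if s == 1 else t): a for (s, t), a in state.items()}

for i in (0, 1):                                   # unknown initial time bin
    psi = beamsplitter(delay(beamsplitter({(0, i): 1.0})))
    print(i, {k: round(v.real, 3) for k, v in sorted(psi.items())})
# Outcomes at time bin 1 occur with equal magnitude for either initial bin and so
# erase the timing information; outcomes at bins 0 or 2 herald a failure.
```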
The upshot of all this is that we can readily combine erasure of both spatial and temporal information. This may or may not be useful for the generation of the initial entangled states. However, as mentioned above, with repeated use of hardware the entangled photons we produce can often be interpreted as being multi-rail encoded, where the modes comprising the rails vary both spatially and temporally.
We now have the tools in hand to consider using multi-rail entangled states directly for quantum computing, that is, without using coherent switches to "compress down" the photons into a smaller number of spatio-temporal modes as per the standard multiplexing ideas.
### Multi-Rail Fusion Gates
#### iii.6.1 Type-I fusion
The question to which we now turn is whether it is possible to use a multi-rail encoded entangled state as part of a full quantum computation. In the standard architecture we build larger entangled (cluster) states by fusing16 together smaller ones.
The simplest variant of a fusion is performed by the Type-I gate. On dual rail photonic qubits it amounts to simply a
Figure 6: Examples of passive interferometers that allow for erasure of a single photon’s timing information with arbitrary (heralded) probability of success (when extrapolated in the obvious pattern). (a) A photon in a single spatial mode is coupled to one other empty ancilla spatial mode via a series of beamsplitters. One spatial mode is delayed, the integer indicates the number of time bins each delay takes. The output state is a uniform superposition over 16 time bins and the two spatial modes. (b) A simplified notation for two beamsplitters and a delay of \(n\) time bins. (c) Another interferometer that erases timing information, but spreads the input over increasing numbers of spatial modes.
beamsplitter between the first mode of qubit 1 and the last mode of qubit 2. This generates the evolution:
\[|0\rangle|0\rangle=|10\!\succ|10\!\succ\rightarrow|10\!\succ|10\!\succ+|00\!\succ|11\!\succ \tag{4}\] \[|0\rangle|1\rangle=|10\!\succ|01\!\succ\rightarrow|20\!\succ|00\!\succ-|00\!\succ|02\!\succ \tag{5}\] \[|1\rangle|0\rangle=|01\!\succ|10\!\succ\rightarrow|01\!\succ|10\!\succ \tag{6}\] \[|1\rangle|1\rangle=|01\!\succ|01\!\succ\rightarrow|11\!\succ|00\!\succ-|01\!\succ|01\!\succ. \tag{7}\]
From these expressions we see that the only way the detectors in modes 1 and 4 can register one and only one photon between them is if the initial qubits were in the same logical state. Moreover, when this occurs we have no information about whether the qubits were both \(|0\rangle\) or both \(|1\rangle\) - the beamsplitter renders both options equally likely - but the middle two modes 2,3 are now left in a new dual-rail encoded qubit state that can be interpreted as logical \(|0\rangle\) if the original two qubits were \(|0\rangle|0\rangle\) and logical \(|1\rangle\) if the original two qubits were \(|1\rangle|1\rangle\). When the initial two qubits are in different logical states, we can tell from the detection pattern in modes 1 and 4 (either no photons at all, or two photons detected) which of the two cases pertains.
When the action of the fusion gate is described in words as in the preceding paragraph, it becomes essentially obvious how a multi-rail encoded version of the gate can be implemented - the paragraph could be re-read replacing "mode 1" by "multi-rail mode 1" and so on. All that is required is a generalization of the beamsplitter between modes 1,4. That is, we want a device that, in the case where there is only a single photon in one or other of these two modes, can detect that fact without revealing which particular mode it originated from. But this is exactly what we can achieve for purely-spatial multi-rail encodings using the \(H^{\otimes k}\) interferometers, for temporal encodings using the techniques in Section IIIE above, and for mixed encodings by combining the two.
It should be emphasized again that multi-photon interference is not strictly necessary, which is essentially why we can do spatial erasure and then temporal erasure - we are only ever selecting as success the occasions when a single photon went through the erasing device(s).
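Before moving on, the dual-rail evolutions in Eqs. (4)-(7) are easy to verify numerically. The following sketch (illustrative, not the paper's code) applies the mode-1/mode-4 beamsplitter as a creation-operator substitution and collects the unnormalised Fock terms:

```python
# Illustrative check of Eqs. (4)-(7): a 50:50 beamsplitter acting on modes 1 and 4
# of two dual-rail qubits, via the substitution a1+ -> a1+ + a4+, a4+ -> a1+ - a4+
# (overall normalisation factors dropped, as in the text).
import itertools
from collections import defaultdict

def apply_bs_14(occupations):
    """Evolve the Fock state |n1,n2,n3,n4> and return {occupation: amplitude}."""
    creations = [m for m, n in enumerate(occupations) for _ in range(n)]
    branches = []
    for mode in creations:
        if mode == 0:
            branches.append([(0, 1.0), (3, 1.0)])
        elif mode == 3:
            branches.append([(0, 1.0), (3, -1.0)])
        else:
            branches.append([(mode, 1.0)])
    out = defaultdict(float)
    for choice in itertools.product(*branches):
        occ, amp = [0, 0, 0, 0], 1.0
        for mode, sign in choice:
            occ[mode] += 1
            amp *= sign
        out[tuple(occ)] += amp
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

print(apply_bs_14((1, 0, 1, 0)))  # |10>|10> + |00>|11>   (Eq. 4)
print(apply_bs_14((1, 0, 0, 1)))  # |20>|00> - |00>|02>   (Eq. 5): bunching
print(apply_bs_14((0, 1, 1, 0)))  # |01>|10>              (Eq. 6)
print(apply_bs_14((0, 1, 0, 1)))  # |11>|00> - |01>|01>   (Eq. 7)
```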
#### iii.6.2 Beyond Type-I
Type-I fusion works with probability 1/2; if we need to erase temporal as well as spatial information it will work with slightly lower probability (dependent on how much we 'stretch' the photons temporally to erase them). We can use Type-I fusion to create (passively multiplexed) versions of multi-rail GHZ states.
It is often convenient to use 'boosted' Type-II fusion gates[6]. Boosted fusion[15] involves an ancillary Bell pair of photons. The four photons in 8 modes evolve through the interferometer into a quantum state containing thousands of terms, many of which exhibit multi-photon interference, as seen in terms involving Fock states \(|2\!\succ,|3\!\succ,|4\!\succ\), and unlike the cases above the "success" outcomes often involve detection of such multiphoton events. As such it would seem unreasonable to presume that it can be trivially performed in the multi-rail way. But for the variant which involves failure outcomes that project into the computational basis, the way it works can be "talked through" almost as simply as the case for Type-I fusion above. This is done in Fig. 8. Once we appreciate that the gate relies only on eliminating the information about the origin of
Figure 8: Boosted Type-II gate; as in the original[15], here we revert to mixed spatial and polarization encoding, which makes the following explanation of the observed correlations a little simpler: A Bell pair in the state \(|HH\rangle+|VV\rangle\) is injected as an ancilla, along with the two qubits. If the two qubits have opposite polarizations then one photon goes to the left and one to the right, while both photons from the ancillary Bell pair go the same way. Thus if we erase the information as to the origin of the photons (which is what the arrangement of a PBS surrounded by \(45^{\circ}\) polarization rotators achieves on both the left and the right), and count 3 on one side and 1 on the other, we know the qubits had opposite polarization (but not what it was). If the qubits have the same polarization they both go left or both go right. If it so happens that the ancillary photons go the same way then the gate fails - we know for sure what the qubits' polarization was. However if the ancillary photons go to the opposite side to the qubits, and if the distinguishing features of the photons are removed, the gate succeeds because we count 2 photons on the left and 2 on the right, which heralds only that the photons were the same polarization.
Figure 7: (a) A regular Type-I fusion gate on a dual rail encoding is simply a beamsplitter between one rail of each of the two dual rail qubits encoded in modes 1,2 and 3,4 respectively. When the gate succeeds the output qubit is encoded across modes 3,2. (b) On spatially multi-rail encoded qubits a general Type-I gate can be achieved using the generalized “large Hadamard” beamsplitters introduced earlier, which completely erase spatial mode information. If the photons are also spread across different time bins then these can be erased independently, by placing interferometers such as those in Fig. 6 just prior to the detectors.
the photons that enter the detector - the polarizing beamsplitter surrounded by rotators is simply the generalized beamsplitter \(H^{\otimes k}\) gate - it becomes clear that the gate works just as well on a multi-rail qubit (though potentially necessitating a much larger version of such a Hadamard interferometer!).
### Creating multi-rail cluster states despite no single qubit Hadamard gates
For simplicity in this section we focus on purely spatially multi-rail states.
Because of the impossibility of doing arbitrary single qubit gates on a generic multi-rail qubit, and because we have only shown how to create Bell/GHZ states and fusion gates (that fuse in the computational basis), it is still not given that we can actually create suitable multi-rail versions of a cluster state.
To spell the issue out in more detail: suppose we apply a beamsplitter to perform the supposed Hadamard gate on an \(m=2\) rail Bell state, where the rails have "misaligned" photons:
\[|0\rangle|10\!\succ|00\!\succ+|1\rangle|00\!\succ|01\!\succ\] \[\to|0\rangle(|10\!\succ|00\!\succ+|00\!\succ|10\!\succ)\] \[\quad+|1\rangle(|01\!\succ|00\!\succ-|00\!\succ|01\!\succ)\] \[\neq|0\rangle|+\rangle+|1\rangle|-\rangle\]
We see that unless we allowed for different qubit encodings for different parts of a superposition (we do not - such really would stretch the notion of a qubit!) naive application of a Hadamard cannot create the desired 2-qubit cluster state. Not unexpectedly, the multi-rail nature of the state has interfered with interference.
One simple solution to this is to modify the generator which produced Bell states from 4 single photons to one which produces 2-qubit cluster state directly. This can, in fact, be done by replacing the \(H^{\otimes 2}\) interferometer in modes 5-8 of Fig. 2(a) with an interferometer described by the following unitary matrix:
\[\frac{1}{2}\left[\begin{matrix}1&i&-1&i\\ 1&i&1&-i\\ -1&i&i&-1\\ 1&-i&i&-1\end{matrix}\right].\]
For the regular case where we have four single photons input to modes 1-4 this will produce a dual-rail encoded 2-qubit cluster state with probability \(2/32\). Note that we have not done a single qubit gate on the output modes, rather we are doing judiciously chosen quantum steering to collapse the outputs to the desired state.
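As a quick sanity check (illustrative only), one can verify numerically that the matrix above is unitary and therefore describes a valid passive interferometer:

```python
# Check that the modified 4x4 interferometer quoted above is unitary.
import numpy as np

U = 0.5 * np.array([[ 1,  1j, -1,  1j],
                    [ 1,  1j,  1, -1j],
                    [-1,  1j,  1j, -1],
                    [ 1, -1j,  1j, -1]])
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True
```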
When we incorporate this modified generator into a multi-rail architecture with stochastic sources, following (the arbitrarily large generalization of) Fig. 3, depending on the exact modes the photons are input the success probability may fall to \(1/32\), but it is never lower than this.
Once we have pieces of linear cluster state we can proceed to use (boosted) fusion to percolate a lattice - the pieces of the cluster state now have the Hadamard transformation 'built in', so the fusion will work with probability \(3/4\) as described previously. From there the creation of arbitrarily large cluster states is immediate.
### Adaptive single qubit measurements
Performing single qubit unitary rotations about the \(Z\)-axis is simple on multi-rail qubits: it involves implementing phase shifters across all the relevant modes defining the qubit. As such, computation using a multi-rail cluster state reduces to whether we have the ability to make both \(X\) and \(Z\) measurements. Note that the \(Z\) measurements are obviously unaffected by the multi-rail nature of the state.
The \(X\) measurements can be performed using the generalized Hadamard beamsplitters \(H^{\otimes k}\). To see this consider the example:
\[|A_{0}\rangle|10\!\succ\!|00\!\succ+|A_{1}\rangle|00\!\succ|01\!\succ,\]
where the 4 modes encode an \(m=2\) multi-rail qubit. An \(X\) measurement on the qubit that has been singled out should leave the remainder of the cluster state in \(|A_{0}\rangle\pm|A_{1}\rangle\). After the \(H^{\otimes 2}\) beamsplitter we have
\[|A_{0}\rangle(|10\!\succ|00\!\succ+|01\!\succ|00\!\succ+|00\!\succ|10\!\succ+|00\!\succ|01\!\succ)\] \[+|A_{1}\rangle(|10\!\succ|00\!\succ-|01\!\succ|00\!\succ-|00\!\succ|10\!\succ+|00\!\succ|01\!\succ)\]
and so regardless of which mode we detect the photon in, we obtain one of the desired collapsed states.
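A minimal numerical sketch of this \(X\) measurement (illustrative, using the same example state) shows that after \(H^{\otimes 2}\) every detected mode gives equal-magnitude amplitudes for the two branches, with a relative sign of \(\pm 1\):

```python
# X measurement on an m=2 multi-rail qubit via an H (x) H interferometer.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)                       # 4-mode spreading interferometer

branch_A0 = H2 @ np.array([1, 0, 0, 0])  # photon started in mode 1 (|A0> branch)
branch_A1 = H2 @ np.array([0, 0, 0, 1])  # photon started in mode 4 (|A1> branch)

for mode in range(4):
    sign = np.sign(branch_A0[mode] * branch_A1[mode])
    print(f"detect mode {mode + 1}: collapse to |A0> {'+' if sign > 0 else '-'} |A1>")
```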
We need, however, to be able to _adaptively_ choose whether we perform an \(X\) versus a \(Z\) measurement. Surely this requires a coherent switch? In principle it does not: we can set up a static interferometer as if we are doing an \(X\) measurement, let the photons through if that is what we want to do, but absorb (with a blocking switch) the set of rails encoding a \(|0\rangle\) at the input if we prefer to do a \(Z\) measurement. In the latter case the absence of a detected photon would correspond to the \(|0\rangle\) outcome while detection would correspond to a \(|1\rangle\).2
Footnote 2: It should be emphasized that while this allows us to in principle only use a blocking switch throughout, in practice it completely negates one of the best features of dual/multirail encodings, namely that photon loss (the dominant error mechanism) becomes heralded. Such loss would now lead to Pauli error!
Once we allow for mixed spatio-temporal encodings things become a bit less efficient, but the basic principles remain the same. Although it should be said that if the photons were in a _purely_ temporal multirail encoding (e.g. a 3-photon GHZ with all three photons in the same spatial mode) things become very messy. However our Bell state generation does not produce such entangled states - they always come out across multiple spatial as well as temporal modes. Moreover, given our ability to manipulate spatial degrees of freedom better than temporal ones, we should presumably only consider the case that at least two spatial modes are involved in encoding a multi-rail qubit.
## IV Paper II: A Modular, Networked Quantum Computer Based Upon Qubits Fully Delocalised Across the Network Nodes.
In a standard networked architecture for a quantum computer qubits are localized at nodes of the network, and can interact readily with other qubits at the same node. Interactions between qubits at different nodes are then either mediated by exchange of quantum information (normally photons) or via teleportation (perhaps with ancilla entangled qubits distributed continuously from a central resource). Typically all nodes send measurement results and receive instructions by exchanging classical information with a central controller (classical computer).
The delocalized network architecture (DNA) we outline in this paper uses delocalized photonic qubits, such that every qubit has some "piece" (amplitude for being located) at every node (Fig. 9). Interactions between qubits are still performed locally, but each interaction happens at every node (and all nodes are structurally identical). When measurements are performed the qubit (presumed destroyed) is, of course, only detected at one of the nodes. Classical information is sent to a central controller (by whichever node did detect the qubit) regarding the measurement outcome, and the central controller transmits classical instructions back to all nodes regarding operations to be performed on the remaining qubits. No other quantum information exchange between nodes is required.
The only quantum information exchange is the distribution of single photons to the nodes, where each single photon is in a superposition of being sent to every node. This step is the entanglement distribution to the network.
A natural objection might be: "Standard photonic quantum computing depends only on having "identical" photons, by which we actually mean we only need photons that can be decomposed
\[\left(\sum_{\alpha}c_{\alpha}a^{\dagger}_{\alpha}|0\!\succ\right)\;\otimes\; \left(\sum_{\beta}c_{\beta}a^{\dagger}_{\beta}|0\!\succ\right)\;\otimes\ldots\]
where the expansion coefficients \(c_{\alpha},c_{\beta}\) are completely arbitrary, though they need to be the same (even if not known!). Therefore if we let the modes \(\alpha,\beta\) be discretized and spatial, we can obviously do a distributed computation of the form described above."
Note the single photon states used in the objection above all have overlap 1. What we will see in the DNA is that when \(|c_{\alpha}|^{2}=|c_{\beta}|^{2}=\ldots\) (ie uniform magnitude photonic states) we can still do the distributed, delocalized computation even if the phases of the \(c_{\alpha},c_{\beta}\) are such that the states in question are completely orthogonal!
Interestingly, phase stability between the network nodes is not required, although phase stability between all modes held by any given node must be maintained. Moreover, each of the nodes can itself be split into sub-nodes to which quantum information is sent, and phase stability need only be maintained between the modes entering a given sub-node (a feature which repeats itself - the depth of sub-nodes is determined by the number of probabilistic elements used within any particular choice of photonic architecture).
As in Paper I, a surprising feature is that in principle this architecture only requires 'blocking' (absorbing/filtering/incoherent) switches. In fact one variant of the DNA shows that (in principle) all but one of the blocking switches do not even need to act on the single modes in which quantum information is stored - they can act only on the classical pump fields which drive the single photon sources! While this extreme variant requires very long (fixed) delay lines, it shows that it is in principle possible to do photonic quantum computing in an architecture where every photon passes through a constant-depth, passive interferometer followed by just one switch just prior to being detected.
### High Level Summary
To set some terminology we first consider a regular non-distributed photonic architecture. We call a photonic quantum computer which can take in deterministic single photons and perform a quantum computation a level 0 quantum computer. We call a photonic quantum computer which can take in deterministic photons entangled over a small number of qubits (Bell pairs, GHZ states, something else depending on the specific proposal) and perform a quantum computation a level 1 quantum computer. We call a photonic quantum computer which can take in deterministic photons entangled over a large number of qubits (percolated cluster state lattice, something else depending on the specific proposal) and perform a quantum computation a level 2 quantum computer. For certain variants there may be more intermediate levels - e.g. Bell pairs being fused into small GHZ states would increase the depth, but for simplicity we stick to levels 0,1,2 computers.
Regular photonic architectures use multiplexing (performing an operation many times in parallel and careful selection
Figure 9: Schematic of a delocalized networked architecture for a quantum computer.
and switching out of successful outputs) to compress out the inherent randomness of photonic gates/processes. Multiplexing allows one to nest a level 1 computer into the level 0 computer, and a level 2 computer into the level 1 computer. Multiplexing is also an option to create the initial deterministic single photons (although of course many other options exist for deterministic photon production).
At a practical level multiplexing photons requires the ability to switch coherently - at a minimum to be able to actively select \(1\) from \(m\) modes and coherently (i.e. preserving the quantum state) transfer the light within that mode to some other part of the computer. It is a dynamic process which 'removes the randomness', as it were, by effectively erasing the information as to which particular probabilistic element it was that succeeded. As such, from now on we will refer to such a process as _dynamic multiplexing_.
The DNA uses a different approach to dealing with probabilistic processes. The generic procedure is depicted in Fig. 10. Multiple copies of the process are performed, such that there is high probability that at least one of them succeeds. All outputs, except those of one of the successful processes, are blocked. We then uniformly "spread" across all of the potential outputs of all of the probabilistic elements as depicted. The spreading is performed by a passive interferometer, and it is this process which 'removes the randomness', as it were, effectively erasing (from the state amplitudes) the information as to which particular probabilistic element it was that succeeded. Of course information as to which element succeeded remains in the (known) relative phases between modes of the output state. If we restricted to using a power of two copies of the probabilistic process then the passive interferometers could be taken to be specified by the unitary matrix \(H^{\otimes k}\) where \(H\) is the 2d Hadamard matrix, and relative phases are all 0 or \(\pi\). In general other uniformly spreading interferometers such as the discrete Fourier transform can be used. From now on we will refer to 'removing the randomness', as it were, by the procedure of Fig. 10 as _passive-multiplexing_.
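The defining property of these spreading interferometers is that every matrix entry has the same magnitude. A small illustrative check, for both the \(H^{\otimes k}\) and discrete Fourier transform choices:

```python
# Both H^{(x)k} and the m-mode DFT have entries of uniform magnitude 1/sqrt(m),
# which is what makes them suitable 'spreading' interferometers.
import numpy as np

def hadamard_k(k: int) -> np.ndarray:
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    U = np.array([[1.0]])
    for _ in range(k):
        U = np.kron(U, H)
    return U

def dft(m: int) -> np.ndarray:
    j = np.arange(m)
    return np.exp(2j * np.pi * np.outer(j, j) / m) / np.sqrt(m)

for U in (hadamard_k(3), dft(8)):
    print(np.allclose(np.abs(U), 1 / np.sqrt(8)))   # True for both
```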
The outputs are passed to sub-nodes (higher level quantum computers) which are built to treat the incoming modes _as if they deterministically contained the desired state_. As with the nodes themselves, the subsequent sub-nodes do not exchange any quantum information and so do not need to be kept phase stable with respect to each other, a feature of practical importance.
Once this generic procedure is implemented within sub-nodes of the DNA it can be incredibly confusing to keep track of levels and modes and nodes and so on. For this reason we turn to a highly idealized example that captures the basic quantum and classical information flow within the DNA.
### Medium Level Summary
To illustrate the rather complicated manner in which the photons are delocalized through several stages, it is helpful to imagine we have a standard photonic architecture with the following (very highly idealized!) properties:
(i) We can produce single photons from stochastic sources with a high enough probability that if we have \(7\) single photon sources it is almost certain that at least one produces a photon.
(ii) Bell pairs can be produced from \(4\) single photons with a high enough probability that we need only make \(3\) attempts to produce a Bell pair and then we will have one with almost certainty.
(iii) We are able to fuse together Bell pairs with a high enough probability that we need only 5 Bell pairs and we will almost certainly produce a large enough entangled state that we can solve the problem we wish to tackle on our quantum computer(!). Solving that problem will necessitate being able to choose between \(X\) and \(Z\) single qubit measurements.
We now consider how the DNA would look built using the same processes, but replacing dynamic with passive multiplexing.
#### iv.2.1 Step 1 - Distribution of single photons
The DNA will have \(7\) nodes, labelled \(A,B,C,\ldots,G\), each of which is a self-contained level 0 computer, i.e. they treat their incoming single modes _as if_ they contained deterministic single photons.
We begin with \(420=7\times 4\times 3\times 5\) single photon sources. Our "effectively deterministic" single photon source is built
Figure 10: The generic “passive multiplexing” procedure replacing dynamic multiplexing in the DNA. Horizontal lines denote modes _not_ qubits. The output modes from all but one successful process are blocked. After the blocking switches the passive interferometers which spread the output do so uniformly. That is, the unitary matrix describing such interferometers has complex entries of equal magnitude, such as a discrete fourier transform or \(H\otimes H\), with \(H\) the \(2\times 2\) Hadamard matrix.
by taking the stochastic sources in groups of \(7\) at a time, and passively multiplexing them as depicted in Fig. 11(a).
The central controller needs to keep a record of which specific stochastic sources fired, since there are important relative phases between the output modes of each deterministic source. However, it is fine for all of the modes which are passed to a given node to receive the same random phase (such as might occur if taken to their destination node along the same multi-core optical fiber, Fig. 11(b)), uncorrelated with the random phases on groups of modes being passed to other nodes. This is explained in more detail later.
#### iv.2.2 Step 2 - Creation of Bell pairs
For concreteness we describe the level 0 computer of node \(A\), however all other nodes are built and function identically. Node \(A\) has \(4\times 3\times 5\) incoming modes, which it treats as if they contain a deterministic single photon (although if actually measured at this stage there is only a \(1/7\) probability of finding any given mode occupied because any given photon has equal likelihood of ultimately being detected at any of the nodes).
The incoming single photon modes are gathered into groups of 4 and each group sent through a probabilistic Bell-state generator (BSG), see Fig. 12. In our toy example, \(3\) such BSG's are required for almost certain generation of a Bell pair. As such, Node \(A\) takes groups of \(3\) BSG's and uses passive multiplexing to produce 'deterministic' Bell pairs. The output from one successful BSG is not blocked, and thus enters the spreading interferometer. As depicted in Fig. 12 the outputs are then sent to sub-nodes of the DNA, labelled \(A_{1},A_{2},A_{3}\). These sub-nodes are level 1 quantum computers - that is, they are built to treat each incoming group of 4 modes as if they
Figure 11: (a) A ‘deterministic’ single photon source built from (heralded) stochastic sources. Based on the heralding pattern the output from one of the successful sources (in this example the third source) is passed through onto a passive ‘spreading’ interferometer. (b) The output from as many single photon sources as required in the computation (in this example 420) are distributed to the network nodes. All modes going to a given node need to be kept phase stable with respect to each other, but stability between the nodes is not necessary. This raises the possibility of well-separated nodes, receiving modes distributed via optical fiber(s), as long as the noise processes within are collective (i.e. act identically on the propagating modes). (c) An alternative to blocking the single photon output modes of a passively multiplexed source. With suitable use of fixed delays (of up to \(6\) clock cycles for this example) we could attempt to produce a photon at the first source and if it succeeded turn off the classical _pump_ laser to all the subsequent sources. If it failed we would then attempt to fire the second source and so on. This has the advantage of only needing a blocking switch on the classical pump and not the quantum system that is eventually part of our computer.
Figure 12: Bell state generation and passive multiplexing within node A, which is a Level 0 quantum computer - that is, it treats incoming single photon modes _as if_ they held a deterministic single photon.
contained a deterministic Bell pair. As such they would implement some kind of fusion gates on these modes. Each of the sub-nodes operates independently (no quantum information is exchanged with any other sub-nodes) and so it would not matter if all the modes entering a given sub-node received the same unknown phase (e.g. during transmission to the sub-node from the higher node).
At this stage one may wonder how it can be that BSG's with potentially no incoming photons can herald successful generation of a Bell pair? This is where the Central Controller (CC) comes in. The detection patterns registered in each BSG are transmitted back to the CC who determines which of the BSG's in each triple of BSGs has succeeded. That is, a BSG typically needs detection of two photons to herald success. Those detections might actually occur in completely different nodes, \(B\) and \(F\) for example - so node \(A\) needs to be told by the CC which BSGs succeeded in order to block/unblock outputs appropriately.
#### iv.2.3 Step 3 - Fusion and single qubit measurement
The architecture iterates. Fusion of the small entangled states into larger ones is a probabilistic process and can be passively multiplexed. This is not always necessary - there are variants of photonic architectures where percolation theory guarantees that we have a state universal for quantum computing straight after the fusion. Either way, the final fate of all modes is to be measured, typically in the \(X\) or \(Z\) basis. At this stage our qubit is highly delocalized and it is a key feature of the architecture that the appropriate single qubit measurement can be performed locally within each sub-node, and results transmitted to the CC who can collate them so as to work out which outcome obtained.
### More details
We now discuss in a bit more detail how one choice of universal probabilistic procedures - entangled state generation, fusion and single logical qubit measurement - works under passive multiplexing. There are more complicated versions of all of these procedures which are more efficient (in the sense of both using fewer resources and having higher success probabilities), but we will focus on the easiest versions for pedagogical clarity.
#### iv.3.1 Logical Qubit encoding
In Paper I the architecture was built on encoding the logical qubits in any one of a large number of orthogonal states. In particular, a logical qubit was defined across \(2m\) modes, with the logical qubit state \(|0\rangle\) corresponding to a single photon in _any one_ of the first \(m\) modes (with vacuum in the second \(m\) modes). Conversely, the logical qubit state \(|1\rangle\) corresponds to a single photon in _any one_ of the second \(m\) modes (with vacuum in the first \(m\) modes). For example, with \(m=4\) we could have:
\[|0\rangle \Leftrightarrow |0,0,1,0\!\succ|0,0,0,0\!\succ=:|1_{3}\!\succ|0\!\succ\] \[|1\rangle \Leftrightarrow |0,0,0,0\!\succ|0,1,0,0\!\succ=:|0\!\succ|1_{2}\!\succ. \tag{8}\]
For simplicity, from now on we take \(m=2^{k}\) and only use interferometers described by \(H^{\otimes k}\). Such interferometers obey a bunch of simple recursive identities (see examples in Fig. 13) that are useful in terms of designing procedures for the DNA which implement desired logic.
The DNA uses logical4 states that are 'uniformly spread' versions of the ones of Paper I. In particular, DNA logical qubits are defined by taking the original multirail logical states and passing the \(2m\) modes through an interferometer described by \(H^{\otimes k}\oplus H^{\otimes k}\):
Footnote 4: Again: “Logical” in this paper never refers to a logical qubit from the theory of fault tolerance. It refers to the ‘bare’ qubit for which a single qubit basis state might correspond to many physically-distinct states. Fault tolerant logical qubits need to be built on top of this architecture, e.g. via FBQC5
\[|\tilde{0}\rangle \Leftrightarrow \sum_{i=1}^{m}h_{i}^{(j)}|1_{i}\!\succ\otimes|0,\ldots,0\!\succ =:|\widetilde{1_{j}}\!\succ\otimes|\widetilde{0}\!\succ\] \[|\tilde{1}\rangle \Leftrightarrow |0,\ldots,0\!\succ\otimes\sum_{i=1}^{m}h_{i}^{(j^{\prime})}|1_{i}\!\succ =:|\widetilde{0}\!\succ\otimes|\widetilde{1_{j^{\prime}}}\!\succ \tag{9}\]
Here \(j,j^{\prime}\in\{1,\ldots m\}\) labels the mode that the single photon was in prior to passing through the interferometer, and the coefficients \(h_{i}^{(j)}\) are the \((i,j)\) entries of \(H^{\otimes k}\), which are all of the form \(\pm\frac{1}{\sqrt{2}^{k}}\).
The fact that in general \(j\neq j^{\prime}\), i.e. that we can use many "microscopically distinct" physical states to encode the same logical state, is at the heart of the DNA. This feature - that
Figure 13: Useful, easily generalized, matrix identities. Unlike previous figures where this notation loosely encompassed a variety of suitable interferometers, from now on we restrict to \(H^{\otimes k}\) type Hadamard interferometers.
there are many orthogonal states corresponding to a single logical qubit state - was also a key feature of Paper I. In the DNA the orthogonality is carried by the relative phases between modes - all logical states have equal probability of finding the photon in a given mode if it is measured.
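The following sketch (illustrative, with the notation of Eq. (9) assumed) makes both points explicit: distinct microstates \(j\neq j'\) of the same logical state are orthogonal, yet each gives a uniform probability of finding the photon in any given mode:

```python
# Spread microstates |~1_j> are the columns of H^{(x)k}: mutually orthogonal,
# all with uniform single-photon detection probabilities 1/m.
import numpy as np

def hadamard_k(k: int) -> np.ndarray:
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    U = np.array([[1.0]])
    for _ in range(k):
        U = np.kron(U, H)
    return U

k = 2
m = 2 ** k
Hk = hadamard_k(k)
microstates = [Hk[:, j] for j in range(m)]                    # |~1_j>, j = 1..m

print(np.allclose(np.abs(microstates[0]) ** 2, 1 / m))        # uniform probabilities
print(abs(np.vdot(microstates[0], microstates[2])) < 1e-12)   # orthogonal for j != j'
```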
#### iv.3.2 Bell state generation
The standard BSG takes four single photons along with four vacuum modes, and heralds generation of a Bell pair when two photons are detected. As discussed in detail in Paper I and depicted in Fig. 14(a), we can take any power-of-2 number of such BSGs and add extra interferometers (blue in the figure) just prior to the detectors. Doing this now allows us to input the first single photon into _any_ of modes \(1,9,17,25\) (numbering top to bottom - i.e. any of the first ports of the separate BSGs); the second single photon into _any_ of modes \(2,10,18,26\), and so on. When the procedure succeeds we may, for example, end up with a state like \(|1_{9}\!\succ|1_{27}\!\succ+|1_{1}\!\succ|1_{20}\!\succ\) (the exact locations of the photons depend on which ports they were input and where the two were detected). In terms of the original multirail encoding (see Eq. (8)) this is a logical Bell state \(|0\rangle|0\rangle+|1\rangle|1\rangle\).
The circuit of Fig. 14(b) is identical to that of (a). Consider first the pink interferometers at the output. Their effect was to "despread" the output photons. They implement the (self inverse) transformation \(H^{\otimes k}\) that maps between the logical \(|0\rangle\) and \(|\tilde{0}\rangle\) and \(|1\rangle\) and \(|\tilde{1}\rangle\) states. Thus if we remove the pink interferometers we obtain an output Bell state of the form \(|\tilde{0}\rangle|\tilde{0}\rangle+|\tilde{1}\rangle|\tilde{1}\rangle\).
We now identify the 4 BSGs as lying in nodes A,B,C,D of the DNA as described in the preceding section. Without the pink interferometers these nodes do not need to exchange quantum information. The orange interferometers can now be taken to be part of the passively multiplexed single photon source of Fig. 11(a). (For comparison, the top BSG in Fig. 14(b) could then be taken as the top BSG in Fig. 12.)
In summary we have seen that we can perform preparation of Bell states with flow of quantum and classical information corresponding to that claimed in Sections IV A,IV B such that we output a logical Bell state in the \(\{|\tilde{0}\rangle,|\tilde{1}\rangle\}\) basis.
As discussed in Paper I a key issue with encodings that allow many distinct orthogonal states to encode a single logical qubit, is that single qubit unitary operations are not automatically available. In particular, on a standard dual rail qubit the logical Hadamard is easy and deterministic (a beamsplitter), and so the ability to produce Bell pairs automatically implies the ability to produce the two-qubit cluster state by doing a Hadamard on one qubit. On a generic multirail encoded qubit the Hadamard can only be done with certainty using coherent switching, which we are trying to avoid. As in Paper I, however, we can use a modified Bell state generator that outputs states with a Hadamard operation "pre-wired" in.
Generalizations of standard state generation procedures for GHZ states and useful multi-qubit cluster states are all possible. It should be emphasized that the success probability of any of these procedures does _not_ depend on the number of nodes of the DNA (i.e. on the "amount" of the passive multiplexing), although depending on details the success probability may be reduced by a constant factor (meaning more copies of the process are required to be passively multiplexed in order to hit a target final success probability).
#### iv.2.3 Fusion
Once we can generate entangled (cluster) states we want to be able to fuse them [4]. Fusion is effected via a parity measurement, which can be achieved by doing a partial Bell measurement. We want to be able to perform an operation that, for example, tells us the logical states of the qubits were "the same" without telling us which of the two options (both 0, both 1) actually pertains.
In the standard dual rail encoding of two qubits, to do type-II fusion we apply beamsplitters between modes (1,4) and (2,3) (i.e. as in Fig. 15(a)). We see immediately that the \(|0\rangle|1\rangle=|1,0,0,1\!\succ\!\) and \(|1\rangle|0\rangle=|0,1,1,0\!\succ\!\) terms both lead to two photons detected in the same interferometer (both at (1,4) or both at (2,3)). However the \(|0\rangle|0\rangle=|1,0,1,0\!\succ\!\) and \(|1\rangle|1\rangle=|0,1,0,1\!\succ\!\) terms will always lead to one photon at modes (1,4) and one at (2,3). Moreover, the beamsplitters erase the information as to which mode the single photons originated from, and so we can only tell from the detection that the logical states of the qubits were the same, and not
Figure 14: Two interferometers that are identical (if vacuum is input on the modes indicated). The labelling on output modes of the form Q1L0 means “Qubit 1, Logical 0” etc.
what they actually were individually.
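This detection-pattern argument can be checked with a few lines of code. The sketch below (an illustration we add here, not code from the paper) propagates the two input photons through the two 50:50 beamsplitters by transforming creation operators and lists which unordered pairs of output modes can click for each logical input. The particular beamsplitter sign convention and the omission of the bosonic \(\sqrt{2}\) normalization for doubly occupied modes are simplifying assumptions; only the pattern of zero versus non-zero amplitudes matters for the parity argument.

```python
import numpy as np
from collections import defaultdict

# Single-photon transfer matrix for the two 50:50 beamsplitters of type-II fusion:
# modes 1..4, one BS couples (1,4), the other couples (2,3).  Sign convention assumed.
s = 1 / np.sqrt(2)
U = np.zeros((5, 5))                      # index 0 unused, so modes read 1..4
U[1, 1], U[1, 4], U[4, 1], U[4, 4] = s, s, s, -s
U[2, 2], U[2, 3], U[3, 2], U[3, 3] = s, s, s, -s

def output_pattern(m1, m2):
    """Amplitudes for input a†_m1 a†_m2 |vac>, grouped by unordered pair of output modes."""
    out = defaultdict(float)
    for i in range(1, 5):
        for j in range(1, 5):
            out[tuple(sorted((i, j)))] += U[i, m1] * U[j, m2]
    return {k: round(v, 3) for k, v in out.items() if abs(v) > 1e-9}

# |0>|0> = |1,0,1,0> and |1>|1> = |0,1,0,1>: one click at (1,4) and one at (2,3)  -> success
print("00:", output_pattern(1, 3))
print("11:", output_pattern(2, 4))
# |0>|1> = |1,0,0,1> and |1>|0> = |0,1,1,0>: both photons bunch in the same beamsplitter
print("01:", output_pattern(1, 4))
print("10:", output_pattern(2, 3))
```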
We want to achieve the same effect, but now with our delocalized logical states, which, in the notation of Eq.(9) take the general form:
\[|\tilde{0}\rangle|\tilde{0}\rangle=|\widetilde{1_{j}}\!\succ|\widetilde{0}\!\succ|\widetilde{1_{k}}\!\succ|\widetilde{0}\!\succ \tag{10}\]
\[|\tilde{1}\rangle|\tilde{1}\rangle=|\widetilde{0}\!\succ|\widetilde{1_{j^{\prime}}}\!\succ|\widetilde{0}\!\succ|\widetilde{1_{k^{\prime}}}\!\succ \tag{11}\]
\[|\tilde{0}\rangle|\tilde{1}\rangle=|\widetilde{1_{j}}\!\succ|\widetilde{0}\!\succ|\widetilde{0}\!\succ|\widetilde{1_{k^{\prime}}}\!\succ \tag{12}\]
\[|\tilde{1}\rangle|\tilde{0}\rangle=|\widetilde{0}\!\succ|\widetilde{1_{j^{\prime}}}\!\succ|\widetilde{1_{k^{\prime}}}\!\succ|\widetilde{0}\!\succ \tag{13}\]
Consider Fig. 15(b), which depicts an example of two DNA qubits, with modes corresponding to different nodes but the same logical state grouped together. That is, normally we imagine all the green modes in one node of the DNA spatially separate from all the orange ones in another node and so on, but have now drawn them gathered together. If the first qubit is in the logical 0 state then it is in some quantum state with uniform amplitude over the four modes labelled Q1L0, and similarly for the other logical states depicted.
Imagine, for example, that after sending the incoming logical qubits through Fig. 15(b) the green node reports one photon in its first mode, and the pink node reports one photon in its fourth mode (the very top and bottom modes of Fig. 15(b)). Then this detection pattern can only have arisen from the logical state in Eq. (12). This would be failure of the fusion gate. If, instead, the green node reported one photon in its first mode, while the orange node reports one photon in its second mode (the first and sixth modes of Fig. 15(b)) then this detection pattern arises with equal likelihood from the logical states in Eqs. (10),(11), and we have succeeded in fusing the logical states with even parity.
Similar types of reasoning can be used to show that Type-I and various boosted fusion protocols can all also be used directly "as is" within the DNA, implying that each node can operate as a standard level 0 quantum computer, the only difference being that the classical processing of measurement results and instructions about what subsequent operations to perform must be made by the CC who, unlike the individual nodes, knows the logical state at any point in time.
#### iv.3.4 Single (logical) qubit measurement
The final element we require to have universal quantum computation with our logical cluster states is the ability to do single (logical) qubit measurement. Measurement in the Z-basis \(\{|\tilde{0}\rangle,|\tilde{1}\rangle\}\) is simple because the photon is found in the first \(m\) modes if the logical state is \(|\tilde{0}\rangle\) and in the second \(m\) modes if it is \(|\tilde{1}\rangle\). Note also that unitary rotations around the logical z-axis are easy, since by rotating the phase of all of the second \(m\) modes of an encoded qubit we can readily perform \(|\widetilde{1_{j}}\!\succ\rightarrow e^{i\alpha}|\widetilde{1_{j}}\!\succ\).
The key thing to show is that we can perform the logical Pauli \(X\) measurement. In particular, if we single out a particular logical qubit of the computer then we can write the total state as
\[|A_{0}\rangle|\widetilde{1_{j}}\!\succ|\widetilde{0}\!\succ+|A_{1}\rangle| \widetilde{0}\!\succ|\widetilde{1_{k}}\!\succ\]
and we want to convince ourselves it is possible to collapse the state to \(|A_{0}\rangle\pm|A_{1}\rangle\). Now if every node performs a beam-splitter on the two modes of the qubit they hold then we obtain this evolution:
\[|A_{0}\rangle\left(\sum_{i}h_{i}^{(j)}|1_{i}\!\succ\right)|\widetilde{0}\!\succ+|A_{1}\rangle|\widetilde{0}\!\succ\left(\sum_{l}h_{l}^{(k)}|1_{l}\!\succ\right)\]
\[\rightarrow|A_{0}\rangle\left(\sum_{i}h_{i}^{(j)}\left(|1_{i}\!\succ\!|0\!\succ+|0\!\succ|1_{i}\!\succ\right)\right)+|A_{1}\rangle\left(\sum_{l}h_{l}^{(k)}\left(|0\!\succ\!|1_{l}\!\succ-|1_{l}\!\succ|0\!\succ\right)\right)\]
If all nodes then measure their pair of modes, and the photon is found in node number \(s\), the state will collapse as desired to
\[h_{s}^{(j)}|A_{0}\rangle\pm h_{s}^{(k)}|A_{1}\rangle,\]
with the \(\pm\) dependent upon whether the photon is detected in a mode \(x\in\{1,\ldots,m\}\) or \(x\in\{m+1,\ldots,2m\}\). This gives us our logical \(X\) measurement.
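To make the collapse explicit, here is a small numerical sketch (added for illustration, not from the paper): the photon amplitudes over the \(2m\) modes are stored jointly with a two-dimensional stand-in for the register spanned by \(|A_{0}\rangle,|A_{1}\rangle\), each node applies a 50:50 beamsplitter to its pair of modes, and reading off the row of the detected mode gives the unnormalized register state. The beamsplitter sign convention (i.e. which detection outcome corresponds to \(+\) and which to \(-\)) and the choice of Hadamard rows for \(h^{(j)},h^{(k)}\) are assumptions made only for this example.

```python
import numpy as np

m = 4                                        # number of nodes (assumption for illustration)

# Two spreading vectors h^{(j)}, h^{(k)}: rows of a 4x4 Hadamard-type interferometer
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hk = np.kron(H, H)
h_j, h_k = Hk[1], Hk[2]

# Joint amplitude table: rows = 2m photon modes, columns = register states |A0>, |A1>
A0, A1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = np.zeros((2 * m, 2))
psi[:m, :] = np.outer(h_j, A0)               # |A0> branch: photon spread over the first m modes
psi[m:, :] = np.outer(h_k, A1)               # |A1> branch: photon spread over the second m modes

# Each node s applies a 50:50 beamsplitter to its own pair of modes (s, m+s)
BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for node in range(m):
    pair = [node, m + node]
    psi[pair, :] = BS @ psi[pair, :]

# Detecting the photon in mode s collapses the register to the (unnormalized) row psi[s]
s_det = 1                                    # suppose the photon is found in node 1, first mode
collapsed = psi[s_det]
print(np.sqrt(2) * collapsed)                # equals (h_j[s_det], h_k[s_det]) -> h_j|A0> + h_k|A1>
print(h_j[s_det], h_k[s_det])
```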
As outlined briefly in Paper I, although the basis the single logical qubits are measured in needs to be adapted on the fly, it is in principle possible to still perform them without using a coherent switch; that is, using only a blocking switch as was used in the rest of the architecture. (But it would be pretty silly to do, since you lose a lot of loss tolerance as you can no longer tell the difference between a qubit lost and a qubit absorbed by the measurement blocking switch!)
### Further observations
#### iv.4.1 Some phases important, others not?
It can be a bit confusing at first to see why relative phases between photons that head off to different nodes (e.g. the \(h_{i}^{(j)}\)
Figure 15: (a) Type-II fusion in standard dual rail encoding. Success is heralded by detecting one photon at one of the top or bottom modes, and the other photon in one of the middle modes. (Detectors not shown). (b) Type-II fusion in a 4-node DNA. In the DNA the incoming logical states are uniformly spread over the four nodes (each node’s interferometer is depicted with a different color). No interference is necessary between different nodes, they each apply a standard Type-II gate independently. Success is heralded by detecting one photon at one of the groups of top or bottom modes, and the other photon in one of the groups of the middle modes. (Detectors not shown)
are important, but relative phases between the nodes themselves are not. (For example in Fig. 11(b) the optical fibers would not need to be kept phase stable with respect to each other). A quick way to see why is to consider that by the end of the computation all photons will have been counted - we will know how many ended up in each node. Thus we will have postselected ourselves into a part of the quantum state with \(n_{A}\) photons at node \(A\), \(n_{B}\) at node \(B\) and so on. In such a state, if there are unknown random phases of the form \(e^{in_{A}\theta_{A}}e^{in_{B}\theta_{B}}\cdots\), they all factor out completely.
#### v.2.2 A mode is a mode is a mode
The DNA has been described in terms of generic modes, and although spatial modes are the most natural implementation to have in mind when trying to envision the structural flow of quantum and classical information, the whole architecture works just as well with any types of modes. As in Paper I, using time/frequency multirail modes is very natural for many of the types of heralded sources currently being built in integrated photonics. In particular, both of these architectures make heavy use of transformations (such as \(H^{\otimes k}\) or a discrete fourier transform) which erase information, and single photon detectors are fast enough these days that they would (implicitly) perform such erasure on photons of different frequencies [16; 17].
#### v.2.3 Bosons are overrated
As described above, the number of nodes can be chosen to ensure near-deterministic, passively multiplexed single photon sources. However we saw that the logical operations did not care about the number of nodes. So, one can imagine instead that we use a very large number of nodes, so that at the end of the day each node has (with exponentially high likelihood) at most one photon. This limit is interesting as a way of understanding [18] the limits of entanglement distribution by single photons. But it also makes clear that ultimately all we need is interference - interpreted as erasure of undesired quantum information - and as such non-bosonic schemes could conceivably make use of the ideas of Paper I and the present paper.
#### v.2.4 A crazy variant - don't ever touch the quanta!
A different and somewhat extreme variant of the DNA shows that it is, in principle, possible to build a photonic quantum computer such that the single photon modes at most only pass through a single switch, namely the switch just prior to the adaptive single logical qubit measurement which chooses the algorithm we are running. That is, unlike the scheme laid out in the main text, the photons do not pass any blocking or coherent switches at all, unless they happen to be part of the final cluster state and undergo an adaptive measurement.
This somewhat extreme variant is to take the possibility of using delays outlined in Fig. 11(c) seriously through all probabilistic elements and levels of the quantum computer. For example, consider building Bell pairs. We inject 4 single photons into the first BSG. At this step we do not even produce the four single photons that may potentially be required for a second attempt at making a Bell pair in the second BSG. That is, we are blocking the pump laser to all the other single photons which may or may not be required for creating this particular Bell pair. [Other passively multiplexed Bell pairs are being created elsewhere simultaneously. For example, referring back to Fig. 12, each grouping of three BSG's would be staggered temporally in a manner similar to Fig. 11(c).]
If the first BSG succeeds then great, we never unblock the single photons for the other BSGs. If it does not, then we unblock the four photons for a second attempt in the second BSG. Note that the single photons already operate on a slow clock rate because they are passively multiplexed as per Fig. 11(c). Note also that even if the first BSG succeeds, its outputs must enter a very long delay to ensure that regardless of which BSG it is that ultimately succeeds, passive multiplexing is able to proceed. As we progress through the different levels of the quantum computer we have to keep increasing our delay lines so that we can always go back to turning the single photon sources off at the pump.
There are methods described in Paper I which allow one to 'coherently erase' time-bin information that perhaps could ameliorate the rapid blow up of delay lines for this variant. (These should not be confused with the frequency/time+detectors possibility mentioned previously, which concerns the much shorter timescales within a single pulse).
###### Acknowledgements.
Eric Johnston (EJ) realized the extreme architectural variant in the very last section was possible. He also wrote the excellent photonic simulation tools used to confirm the results in these papers. It was Damien Bonneau's questions and insights about phase (non)-stability in Paper I which prompted the investigation yielding the results of Paper II. Thanks to Jake Bulmer and EJ for comments on the drafts. As always the whole Quantum Architecture and System Architecture team at PsiQuantum are an endless source of inspiration and motivation.
|
2304.07695 | MASTER OT J055845.55+391533.4: SU UMa star with a dip and long
rebrightening | We analyzed Asteroid Terrestrial-impact Last Alert System (ATLAS), Zwicky
Transient Facility (ZTF) and All-Sky Automated Survey for Supernovae (ASAS-SN)
data of MASTER OT J055845.55+391533.4 and found that this object repeats
superoutburst with a dip in the middle of the outburst followed by long and
sometimes oscillating rebrightening, just like a WZ Sge-type dwarf nova or an
AM CVn-type object. The mean supercycle was 298(8) d, too short for a WZ Sge
star, but with only a few normal outbursts. We also observed the 2023
February-March superoutburst and established the superhump period of 0.05509(2)
d. This period appears to exclude the possibility of an AM CVn star. Although
the 2023 observations could not detect superhumps after the dip, the 2014, 2016
and 2021 data seem to suggest that low-amplitude superhumps were present during
the rebrightening phase. We note that a dip during a superoutburst is a feature
common to the unusual SU UMa-type dwarf nova MASTER OT J172758.09+380021.5
during some of its superoutbursts. These objects may comprise a new class of
rebrightening phenomenon in SU UMa-type dwarf novae. | Taichi Kato, Hiroshi Itoh, Tonny Vanmunster, Seiichiro Kiyota, Katsuaki Kubodera, Pavol A. Dubovsky, Igor Kudzej, Tomas Medulka, Filipp D. Romanov, David J. Lane | 2023-04-16T04:53:54Z | http://arxiv.org/abs/2304.07695v1 | ###### Abstract
###### Abstract
We analyzed Asteroid Terrestrial-impact Last Alert System (ATLAS), Zwicky Transient Facility (ZTF) and All-Sky Automated Survey for Supernovae (ASAS-SN) data of MASTER OT J055845.55+391533.4 and found that this object repeats superoutburst with a dip in the middle of the outburst followed by long and sometimes oscillating rebrightening, just like a WZ Sge-type dwarf nova or an AM CVn-type object. The mean supercycle was 298(8) d, too short for a WZ Sge star, but with only a few normal outbursts. We also observed the 2023 February-March superoutburst and established the superhump period of 0.05509(2) d. This period appears to exclude the possibility of an AM CVn star. Although the 2023 observations could not detect superhumps after the dip, the 2014, 2016 and 2021 data seem to suggest that low-amplitude superhumps were present during the rebrightening phase. We note that a dip during a superoutburst is a feature common to the unusual SU UMa-type dwarf nova MASTER OT J172758.09+380021.5 during some of its superoutbursts. These objects may comprise a new class of rebrightening phenomenon in SU UMa-type dwarf novae.
**MASTER OT J055845.55+391533.4: SU UMa star with a dip and long rebrightening**
Taichi Kato\({}^{1}\), Hiroshi Itoh\({}^{2}\), Tonny Vanmunster\({}^{3}\), Seiichiro Kiyota\({}^{4}\), Katsuaki Kubodera\({}^{5}\),
Pavol A. Dubovsky\({}^{6}\), Igor Kudzej\({}^{6}\), Tomas Medulka\({}^{6}\), Filipp D. Romanov\({}^{7,8}\), David J. Lane\({}^{9,8}\)
\({}^{1}\) Department of Astronomy, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan
[email protected]_
\({}^{2}\) Variable Star Observers League in Japan (VSOLJ), 1001-105 Nishiterakata, Hachioji, Tokyo 192-0153, Japan
[email protected]_
\({}^{3}\) Center for Backyard Astrophysics Belgium, Wallbstraat 1A, B-3401 Landen, Belgium
[email protected]_
\({}^{4}\) VSOLJ, 7-1 Kitahatsutomi, Kamagaya, Chiba 273-0126, Japan
[email protected]_
\({}^{5}\) VSOLJ, 2708-1 Kozu, Odawara, Kanagawa, 256-0812, Japan
[email protected]_
\({}^{6}\) Vihorlat Observatory, Mierova 4, 06601 Humenne, Slovakia
[email protected]_
\({}^{7}\) Pobedy street, house 7, flat 60, Yuzhno-Morskoy, Nakhodka, Primorsky Krai 692954, Russia,
and remote observer of Abbey Ridge Observatory\({}^{9}\),
[email protected]_, [https://orcid.org/0000-0002-5268-7735](https://orcid.org/0000-0002-5268-7735)
\({}^{8}\) American Association of Variable Star Observers (AAVSO)
\({}^{9}\) Abbey Ridge Observatory, 45 Abbey Rd, Stillwater Lake, NS, B3Z1R1, Canada,
[email protected]_, [https://orcid.org/0000-0002-6097-8719](https://orcid.org/0000-0002-6097-8719)
## 1 Introduction
MASTER OT J055845.55+391533.4 was detected as an optical transient on 2014 February 19 at a magnitude of 14.4 (Yecheistov et al., 2014). The object was found to be already at 13.9 mag on 2014 February 13. Two past outbursts had been detected (Yecheistov et al., 2014). This object was confirmed to be an SU UMa star by the detection of superhumps in 2014 (Kato et al., 2015). The superhump period and supercycle were suggested to be 0.0563(4) d and 360-450 d, respectively (Kato et al., 2015) [for general information on cataclysmic variables (CVs) and dwarf novae (DNe), see e.g., Warner (1995)]. Kato et al. (2017) observed this object on three nights in 2016 and obtained a period of 0.0581 d. Kato et al. (2017) already reported that the outburst behavior was rather strange. We have noticed that this object showed a dip and rebrightening in superoutbursts recorded in modern survey data (see below). Furthermore, such superoutbursts with a dip and (sometimes complex) rebrightening occur successively without intervening normal outbursts. Such behavior is very unusual for a hydrogen-rich DN but is frequently seen in helium-rich AM CVn objects [although not very apparent in figure 2 of Levitan et al. (2015), CP Eri is such an object (see section 4)]. We therefore initiated a time-resolved photometric campaign during the 2023 February-March superoutburst to see the development of superhumps and the superoutburst. This object is also known as the variable star ZTF J055845.48+391533.1 (Ofek et al., 2020) and Gaia DR3 3458275544681382912 (=Gaia16bgq, type CV) (Gaia Collaboration et al., 2022).
## 2 Data analysis
We used Asteroid Terrestrial-impact Last Alert System (ATLAS: Tonry et al. 2018) forced photometry (Shingles et al. 2021), Zwicky Transient Facility (ZTF: Masci et al. 2019)1 and All-Sky Automated Survey for Supernovae (ASAS-SN: Shappee et al. 2014) Sky Patrol (Kochanek et al. 2017) data to examine the long-term behavior. The ASAS-SN positive detections fainter than \(V\)=16.3 or \(g\)=16.5 were excluded as noise close to the detection limit. The ASAS-SN data around BJD 2459252 were included even below this limit since the reality of these data was confirmed by a comparison with the ZTF data. Some unfiltered snapshot observations reported to VSOLJ and VSNET (Kato et al. 2004) were also included (hereafter CCD).
Footnote 1: The ZTF data can be obtained from IRSA <[https://irsa.ipac.caltech.edu/Missions/ztf.html](https://irsa.ipac.caltech.edu/Missions/ztf.html)> using the interface <[https://irsa.ipac.caltech.edu/docs/program_interface/ztf_api.html](https://irsa.ipac.caltech.edu/docs/program_interface/ztf_api.html)> or using a wrapper of the above IRSA API <[https://github.com/MickaelRigault/ztfquery](https://github.com/MickaelRigault/ztfquery)>.
Time-resolved photometry during the 2023 February-March superoutburst was obtained by VSOLJ members and the VSNET Collaboration using 30cm-class telescopes. We also obtained observations during the 2021 January-February superoutburst. The log of these observations is listed in table 1. Superhump maxima were determined using the template fitting method introduced in Kato et al. (2009) after correcting zero-point differences between the observers and removing outburst trends by locally-weighted polynomial regression (LOWESS: Cleveland 1979). The superhump period was determined using the phase dispersion minimization (PDM: Stellingwerf 1978) method, whose errors were estimated by the methods of Fernie (1989) and Kato et al. (2010).
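For readers unfamiliar with the PDM method, the following minimal sketch (our illustration, not the authors' pipeline) computes the Stellingwerf \(\theta\) statistic, the ratio of the pooled within-phase-bin variance to the total variance, over a grid of trial periods; the best period minimizes \(\theta\). Synthetic sinusoidal data with the superhump period found in this work (0.05509 d) stand in for the real detrended light curve, and the template fitting and LOWESS detrending steps are not reproduced here.

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """Stellingwerf (1978) theta statistic for one trial period."""
    phase = np.mod(t / period, 1.0)
    total_var = np.var(mag, ddof=1)
    s2_num, s2_den = 0.0, 0.0
    for b in range(n_bins):
        in_bin = (phase >= b / n_bins) & (phase < (b + 1) / n_bins)
        n = in_bin.sum()
        if n > 1:
            s2_num += (n - 1) * np.var(mag[in_bin], ddof=1)
            s2_den += n - 1
    return (s2_num / s2_den) / total_var      # small theta = coherent modulation

# Synthetic stand-in for a detrended superhump light curve (times in days)
t = np.linspace(0.0, 6.0, 2000)
mag = 0.1 * np.sin(2 * np.pi * t / 0.05509) + 0.01 * np.random.randn(t.size)

periods = np.linspace(0.050, 0.060, 500)
thetas = [pdm_theta(t, mag, p) for p in periods]
print("best period (d):", periods[int(np.argmin(thetas))])
```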
## 3 Results
### Long-term behavior
Long-term light curves are shown in figures 1 and 2. The ATLAS data in quiescence were systematically brighter than the ZTF data. This is probably due to the contamination from a red nearby (physically unrelated) companion Gaia DR3 3458275544681383552 (Gaia \(BP\)=18.53 and \(RP\)=16.65, Gaia Collaboration et al. 2022). Eight superoutbursts were recorded in this interval. Four representative well-observed superoutbursts are also shown in figure 3 to show more details.
* 2016 January-February (BJD 2457416-2457434, first panel in figure 1). ASAS-SN recorded a dip in the middle of this outburst.
* 2016 August-September (BJD 2457631-2457646, first panel in figure 1). This outburst was observed by ASAS-SN on three nights. There was fading in the middle of the outburst, although observations before and after it were very sparse. There were also time-resolved observations in the late stage (Kato et al. 2017) and the long duration of the outburst was secure. We consider that this outburst was likely a superoutburst with a dip.
* 2018 April-May (BJD 2458215-2458236, third panel in figure 1 and first panel in figure 3). ZTF and ASAS-SN observations clearly showed the presence of a dip in the middle of this outburst (the ASAS-SN observation during the dip completely overlapped the ZTF point and is invisible in the figures). The duration of the dip appeared to be relatively short.
* 2019 March-April (BJD 2458571-2458596, fourth panel in figure 1 and second panel in figure 3). A relatively long dip was clearly present in ZTF, ATLAS and ASAS-SN observations.
* 2020 May (BJD 2458975-2458981, first panel in figure 2). Only the initial part of the outburst was recorded due to the seasonal gap in observation. The duration of the outburst, however, was sufficient for a superoutburst.
* 2021 January-February (BJD 2459242-2459263, second panel in figure 2 and third panel in figure 3). A relatively long dip was clearly present in ZTF, ATLAS and ASAS-SN observations. This outburst showed some short-term variations (oscillations) after the dip.
* 2021 September-October (BJD 2459467-2459488, third panel in figure 2 and fourth panel in figure 3). A relatively long dip was clearly present in ZTF, ATLAS and CCD observations. An outburst in ZTF and ATLAS on 2021 September 2-5 (BJD 2459460-2459463) was apparently a precursor.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Start\({}^{*}\) & End\({}^{*}\) & Mean mag. & Error & \(N^{\dagger}\) & Observer\({}^{\ddagger}\) & Filter \\ \hline
2459254.9039 & 2459255.1868 & 14.254 & 0.002 & 235 & Kub & C \\
2459256.0870 & 2459256.1791 & 14.223 & 0.006 & 73 & Kub & C \\ \hline
2459998.2890 & 2459998.5102 & 14.368 & 0.004 & 200 & Van & CV \\
2459999.0367 & 2459999.1477 & 14.405 & 0.006 & 236 & Kis & C \\
2460001.4188 & 2460001.5810 & 14.748 & 0.003 & 226 & Van & CV \\
2460001.9571 & 2460002.1224 & 14.833 & 0.004 & 371 & Kis & C \\
2460001.9729 & 2460002.0829 & 14.631 & 0.008 & 95 & Kub & V \\
2460002.0046 & 2460002.1895 & 14.713 & 0.004 & 269 & Ioh & C \\
2460002.9033 & 2460003.1528 & 14.788 & 0.002 & 448 & Ioh & C \\
2460003.0090 & 2460003.1402 & 14.931 & 0.003 & 306 & Kis & C \\
2460003.9171 & 2460004.1300 & 14.962 & 0.004 & 325 & Ioh & C \\
2460004.0440 & 2460004.1377 & 15.151 & 0.006 & 121 & Kis & C \\
2460004.2960 & 2460004.5102 & 15.280 & 0.005 & 225 & Van & CV \\
2460006.9644 & 2460007.1752 & 15.400 & 0.004 & 237 & Ioh & C \\
2460007.2965 & 2460007.4774 & 2.401 & 0.003 & 129 & Vih & C \\
2460007.9093 & 2460008.1102 & 15.400 & 0.034 & 66 & Ioh & C \\
2460010.0027 & 2460010.1063 & 15.976 & 0.033 & 30 & Ioh & C \\
2460010.0181 & 2460010.1208 & 16.142 & 0.015 & 127 & Kis & C \\
2460010.9498 & 2460011.1429 & 15.506 & 0.007 & 178 & Ioh & C \\
2460011.9477 & 2460012.0664 & 15.325 & 0.022 & 31 & Ioh & C \\
2460013.9407 & 2460014.1841 & 15.532 & 0.013 & 206 & Ioh & C \\
2460014.4756 & 2460014.5524 & 15.670 & 0.007 & 77 & Van & CV \\
2460014.9528 & 2460015.1516 & 16.167 & 0.017 & 113 & Ioh & C \\
2460015.3553 & 2460015.5511 & 16.328 & 0.006 & 187 & Van & CV \\
2460016.4865 & 2460016.5698 & 15.912 & 0.003 & 103 & RFD & CV \\
2460017.3520 & 2460017.5518 & 16.972 & 0.006 & 175 & Van & CV \\
2460018.3133 & 2460018.5511 & 17.267 & 0.006 & 194 & Van & CV \\
2460019.3032 & 2460019.3055 & 17.440 & 0.078 & 3 & Van & CV \\
2460020.3226 & 2460020.3339 & 4.305 & 0.006 & 8 & Vih & C \\ \hline \multicolumn{6}{l}{\({}^{*}\)BJD\(-\)2400000.} \\ \multicolumn{6}{l}{\({}^{\dagger}\)Number of observations.} \\ \multicolumn{6}{l}{\({}^{\ddagger}\)Itoh (Ioh), Kiyota (Kis), Kubodera (Kub), Romanov (RFD), Vanmunster (Van), Vihorlat team (Vih).} \\ \end{tabular}
\end{table}
Table 1: Log of time-resolved photometry.
* 2023 February-March (BJD 2459997-2460020, fourth panel in figure 2). There was a dip in the middle of the outburst. This is the superoutburst for which we carried out the time-resolved photometric campaign reported in this paper (see figure 5).
The shortest interval between superoutbursts was 215 d, with 225 d being the second shortest. Assuming that two superoutbursts were missed during the seasonal gaps in 2017 and 2022, the mean interval of the superoutbursts was 298(8) d.
Only three normal outbursts were detected: 2017 February 13 (BJD 2457798, second panel in figure 1), 2018 September 16 (BJD 2458378, fourth panel in figure 1) and 2022 February 8 (BJD 2459619, third panel in figure 2). These results indicate that normal outbursts are essentially rare in this object.
### 2023 February-March superoutburst
A PDM analysis and the mean profile of superhumps before the dip are shown in figure 4. The mean period in this interval was 0.05509(2) d. Superhumps were below the detection limit during the rebrightening phase after the dip. The times of superhump maxima are listed in table 2. The period derivative (\(P_{\rm dot}=\dot{P}/P\)) was +7(2) \(\times\) 10\({}^{-5}\) (see Kato et al. 2009). The \(O-C\) diagram, variation of the superhump amplitudes and the light curve are shown in figure 5.
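As an illustration of how a quoted \(P_{\rm dot}\) follows from the superhump maxima in table 2, the sketch below fits a quadratic ephemeris to the \((E,\,\mathrm{max})\) pairs and evaluates \(2c_{2}/c_{1}\). This assumes the convention \(P_{\rm dot}=\dot{P}/P\) with \(\dot{P}=dP/dE\) taken per cycle, and it is only a simplified cross-check; it does not reproduce the full error treatment of Kato et al. (2009).

```python
import numpy as np

# Cycle counts E and times of superhump maxima (BJD-2400000), copied from table 2
E = np.array([0, 1, 2, 3, 13, 14, 57, 58, 59, 67, 68, 69, 70,
              84, 85, 86, 87, 103, 104, 105, 110, 111])
tmax = np.array([59998.3208, 59998.3778, 59998.4322, 59998.4864, 59999.0432,
                 59999.0911, 60001.4559, 60001.5058, 60001.5686, 60002.0086,
                 60002.0608, 60002.1227, 60002.1726, 60002.9464, 60003.0000,
                 60003.0619, 60003.1125, 60004.0014, 60004.0509, 60004.1005,
                 60004.3823, 60004.4368])

# O-C residuals against the linear ephemeris quoted in table 2
P0, T0 = 0.055079, 59998.3205
oc = tmax - (T0 + P0 * E)
print("O-C range (d):", oc.min(), oc.max())

# Quadratic ephemeris fit: tmax ~ c0 + c1*E + c2*E^2, so dP/dE = 2*c2
c2, c1, c0 = np.polyfit(E, tmax, 2)
print("Pdot = (dP/dE)/P ~", 2 * c2 / c1)
```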
### 2021 January-February superoutburst
We also observed this object on two nights during the 2021 January-February superoutburst (second panel in figure 2). The observations (BJD 2459254.907-2459256.183) were after the dip. A PDM analysis detected superhumps with a period of 0.05454(12) d (figure 6).
## 4 Discussion
The recorded superhumps in 2023 were identified as stage B (for superhump stages, see Kato et al. 2009) based on the profile, decreasing amplitude and the epoch when superhumps were observed. The observed \(P_{\rm dot}\) = +7(2) \(\times\) 10\({}^{-5}\) was not unusual for this short superhump period (Kato et al. 2009).
In Kato et al. (2015), the 2014 superoutburst was observed already at least 15 d after the start of the outburst and the recorded superhumps were suggested to be stage C. The 2023 data, however, showed no sign of stage C nor evidence for superhumps in the late stage of the superoutburst. Although the period 0.0563(4) d was slightly different from the current measurement, it can be consistent with the present result considering that the amplitude of superhumps was very small in the 2014 data and that the period was based on single-night
\begin{table}
\begin{tabular}{r r r r r|r r r r} \hline \(E\) & Max\({}^{*}\) & Error & \(O-C^{\dagger}\) & \(N^{\ddagger}\) & \(E\) & Max\({}^{*}\) & Error & \(O-C^{\dagger}\) & \(N^{\ddagger}\) \\ \hline
0 & 59998.3208 & 0.0007 & 0.0003 & 50 & 69 & 60002.1227 & 0.0031 & 0.0017 & 120 \\
1 & 59998.3778 & 0.0008 & 0.0022 & 39 & 70 & 60002.1726 & 0.0020 & \(-\)0.0035 & 39 \\
2 & 59998.4322 & 0.0007 & 0.0015 & 33 & 84 & 60002.9464 & 0.0010 & \(-\)0.0008 & 80 \\
3 & 59998.4864 & 0.0010 & 0.0006 & 37 & 85 & 60003.0000 & 0.0017 & \(-\)0.0023 & 102 \\
13 & 59999.0432 & 0.0013 & 0.0067 & 58 & 86 & 60003.0619 & 0.0012 & 0.0046 & 181 \\
14 & 59999.0911 & 0.0012 & \(-\)0.0006 & 103 & 87 & 60003.1125 & 0.0022 & 0.0001 & 183 \\
57 & 60001.4559 & 0.0004 & \(-\)0.0042 & 63 & 103 & 60004.0014 & 0.0031 & 0.0077 & 72 \\
58 & 60001.5058 & 0.0009 & \(-\)0.0093 & 65 & 104 & 60004.0509 & 0.0028 & 0.0021 & 107 \\
59 & 60001.5686 & 0.0006 & \(-\)0.0016 & 45 & 105 & 60004.1005 & 0.0034 & \(-\)0.0034 & 122 \\
67 & 60002.0086 & 0.0019 & \(-\)0.0023 & 167 & 110 & 60004.3823 & 0.0022 & 0.0031 & 54 \\
68 & 60002.0608 & 0.0009 & \(-\)0.0052 & 215 & 111 & 60004.4368 & 0.0045 & 0.0025 & 51 \\ \hline \multicolumn{10}{l}{\({}^{*}\)BJD\(-\)2400000.} \\ \multicolumn{10}{l}{\({}^{\dagger}\)Against max = 2459998.3205+0.055079\(E\).} \\ \multicolumn{10}{l}{\({}^{\ddagger}\)Number of points used to determine the maximum.} \\ \end{tabular}
\end{table}
Table 2: Superhump maxima of MASTER OT J055845.55+391533.4 (2023).
Figure 1: Light curve of MASTER OT J055845.55+391533.4 in 2015–2019. ASN refers to ASAS-SN and CCD refers to unfiltered snapshot observations.
Figure 2: Light curve of MASTER OT J055845.55+391533.4 in 2019–2023. The symbols are the same as in figure 1.
Figure 3: Well-observed superoutbursts of MASTER OT J055845.55+391533.4. The symbols are the same as in figure 1. All superoutbursts had a dip in the middle and subsequent long rebrightening.
Figure 4: Mean superhump profile of MASTER OT J055845.55+391533.4 during the 2023 February–March superoutburst. The data before the dip (before BJD 2460005) were used. (Upper): PDM analysis. The bootstrap result using randomly chosen 50% of the observations is shown as a form of 90% confidence intervals in the resultant \(\theta\) statistics. (Lower): Phase plot.
Figure 5: 2023 February–March superoutburst of MASTER OT J055845.55+391533.4. (Upper:) \(O-C\) variation. The ephemeris of BJD(max) = 2459998.3205+0.055079\(E\) was used. The data are in table 2. (Middle:) Superhump amplitude. (Lower:) Light curve. The data were binned to 0.018 d (black filled circles). Other symbols are the same as in figure 1.
Figure 6: Mean superhump profile of MASTER OT J055845.55+391533.4 during the 2021 January–February superoutburst. The two-night data were obtained after the dip. (Upper): PDM analysis. (Lower): Phase plot.
data by a single observer. The 2016 observations (Kato et al., 2017) were also obtained near the end of the superoutburst. The period (0.0581 d) appears to be a one-day alias of the present result. Although the 2021 observations (subsection 3.3) were limited, superhumps after the dip appeared to have been detected. These results suggest that low-amplitude superhumps were present during the rebrightening phase, which were below the detection limit of the 2023 observations. The superhump period during the rebrightening phase, however, has not yet been well-determined.
The presence of a dip in the middle of a superoutburst is a strange feature in this object. As stated in section 1, AM CVn stars generally show this feature. We present a light curve of CP Eri in figure 7 for a comparison. All well-observed superoutbursts have a dip in the middle and the number of normal outbursts is relatively low despite the short supercycle. The observed superhump period of MASTER OT J055845.55+391533.4, however, is strongly against the possibility of an AM CVn-type object.
There is another known example of very complex superoutbursts in the very unusual object MASTER OT J172758.09+380021.5, which has a short superhump period of 0.05829 d and a short supercycle of 50-100 d (Pavlenko et al., 2021). The complex superoutburst recorded in 2022 is shown in figure 8 (see also figure 5 in Pavlenko et al., 2021 for the past ones). After a relatively long dip and a short rebrightening, this object entered a long rebrightening. The behavior is similar to that of MASTER OT J055845.55+391533.4 in that a dip (though more structured) appeared in the middle of the superoutburst. The short orbital period and supercycle are also similar to those of MASTER OT J055845.55+391533.4. In the case of MASTER OT J172758.09+380021.5, optical spectra confirmed its hydrogen-rich nature (Thorstensen et al., 2016; Pavlenko et al., 2021). The differences between MASTER OT J055845.55+391533.4 and MASTER OT J172758.09+380021.5 are that the dip in superoutburst is highly reproducible in the former while it is not in the latter, and that the supercycle is a few times shorter in the latter.
Such a dip during a superoutburst and long rebrightening are usually seen in WZ Sge stars (Kato, 2015), but not in ordinary SU UMa stars. In WZ Sge stars, a cooling wave in the accretion disk somehow occurs during a superoutburst and the remaining matter in the disk is considered to be responsible for long or repeated rebrightening (Meyer and Meyer-Hofmeister, 2015). Low mass-ratios (\(q\)) such as in WZ Sge stars (and some ER UMa stars such as RZ LMi) have been proposed to be a cause of such premature quenching of a superoutburst (Hellier, 2001; Osaki, 1995). The \(q\) value of MASTER OT J172758.09+380021.5 was estimated to be 0.08 (Pavlenko et al., 2021), which seems to be consistent with this interpretation. There has been no indication of WZ Sge-type behavior in either MASTER OT J055845.55+391533.4 or MASTER OT J172758.09+380021.5, and these objects may comprise a new class of rebrightening phenomenon in SU UMa-type dwarf novae. Further observations of MASTER OT J055845.55+391533.4 are needed to establish the orbital period, and hopefully the period of stage A superhumps to estimate \(q\) (Kato and Osaki, 2013; Kato, 2022), and spectroscopy is needed to estimate the chemical composition. Since superoutbursts of MASTER OT J055845.55+391533.4 are not rare, planned observations for a future superoutburst are requested.
## Acknowledgements
This work was supported by JSPS KAKENHI Grant Number 21K03616. The authors are grateful to the ATLAS, ZTF and ASAS-SN teams for making their data available to the public. We are grateful to VSOLJ and VSNET observers (particularly Yutaka Maeda, Mitsutaka Hiraga and Masayuki Moriyama) who reported snapshot CCD photometry of MASTER OT J055845.55+391533.4. We are also grateful to Naoto Kojiguchi for help with downloading the ZTF data and to Junpei Ito and Katsuki Muraoka for converting the data reported to the VSNET Collaboration.
This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The ATLAS project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S006109/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen's University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.
Based on observations obtained with the Samuel Oschin 48-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grant No. AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar
Figure 7: Light curve of CP Eri. The presence of a dip in the middle of a superoutburst is the feature common to MASTER OT J055845.55+391533.4.
Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
The ztfquery code was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n\({}^{\circ}\)759194 - USNAC, PI: Rigault).
## List of objects in this paper
AM CVn, CP Eri, RZ LMi, WZ Sge, SU UMa, ER UMa, Gaia DR3 3458275544681382912,
Gaia DR3 3458275544681383552, Gaia16bsq, MASTER OT J055845.55+391533.4,
MASTER OT J172758.09+380021.5, ZTF J055845.48+391533.1
## References
We provide two forms of the references section (for ADS and as published) so that the references can be easily incorporated into ADS.
## References (for ADS)
* Cleveland, W. S. 1979, J. Amer. Statist. Assoc., 74, 829 ([https://doi.org/10.2307/2286407](https://doi.org/10.2307/2286407))
* Fernie, J. D. 1989, PASP, 101, 225 ([https://doi.org/10.1086/132426](https://doi.org/10.1086/132426))
* Gaia Collaboration, et al. 2022, A&A (arXiv:2208.00211)
* Hellier, C. 2001, PASP, 113, 469 (arXiv:astro-ph/0101102)
* Kato, T. 2015, PASJ, 67, 108 (arXiv:1507.07659)
* Kato, T. 2022, VSOLJ Variable Star Bull., 89 (arXiv:2201.02945)
* Kato, T., et al. 2015, PASJ, 67, 105 (arXiv:1507.05610)
Figure 8: Complex superoutburst in MASTER OT J172758.09+380021.5 recorded in 2022. The symbols are the same as in figure 7.
* Kato, T., et al. 2009, PASJ, 61, S395 (arXiv:0905.1757)
* Kato, T., et al. 2017, PASJ, 69, 75 (arXiv:1706.03870)
* Kato, T., et al. 2010, PASJ, 62, 1525 (arXiv:1009.5444)
* Kato, T., & Osaki, Y. 2013, PASJ, 65, 115 (arXiv:1307.5588)
* Kato, T., Uemura, M., Ishioka, R., Nogami, D., Kunjaya, C., Baba, H., & Yamaoka, H. 2004, PASJ, 56, S1 (arXiv:astro-ph/0310209)
* Kochanek, C. S., et al. 2017, PASP, 129, 104502 (arXiv:1706.07060)
* Levitan, D., Groot, P. J., Prince, T. A., Kulkarni, S. R., Laher, R., Ofek, E. O., Sesar, B., & Surace, J. 2015, MNRAS, 446, 391 (arXiv:1410.6987)
* Masci, F. J., et al. 2019, PASP, 131, 018003 (arXiv:1902.01872)
* Meyer, F., & Meyer-Hofmeister, E. 2015, PASJ, 67, 52 (arXiv:1504.00469)
* Ofek, E. O., Soumagnac, M., Nir, G., Gal-Yam, A., Nugent, P., Masci, F., & Kulkarni, S. R. 2020, MNRAS, 499, 5782 (arXiv:2007.01537)
* Osaki, Y. 1995, PASJ, 47, L25
* Pavlenko, E., et al. 2021, Contr. of the Astron. Obs. Skalnate Pleso, 51, 138 (arXiv:2103.14369)
* Shappee, B. J., et al. 2014, ApJ, 788, 48 (arXiv:1310.2241)
* Shingles, L., et al. 2021, Transient Name Server AstroNote, 7, 1
* Stellingwerf, R. F. 1978, ApJ, 224, 953 ([https://doi.org/10.1086/156444](https://doi.org/10.1086/156444))
* Thorstensen, J. R., Alper, E. H., & Weil, K. E. 2016, AJ, 152, 226 (arXiv:1609.02215)
* Tonry, J. L., et al. 2018, PASP, 130, 064505 (arXiv:1802.00879)
* Warner, B. 1995, Cataclysmic Variable Stars (Cambridge: Cambridge University Press)
* Yecheistov, V., et al. 2014, Astron. Telegram, 5905
|
2303.14128 | The crime of being poor | The criminalization of poverty has been widely denounced as a collective bias
against the most vulnerable. NGOs and international organizations claim that
the poor are blamed for their situation, are more often associated with
criminal offenses than the wealthy strata of society and even incur criminal
offenses simply as a result of being poor. While no evidence has been found in
the literature that correlates poverty and overall criminality rates, this
paper offers evidence of a collective belief that associates both concepts.
This brief report measures the societal bias that correlates criminality with
the poor, as compared to the rich, by using Natural Language Processing (NLP)
techniques in Twitter. The paper quantifies the level of crime-poverty bias in
a panel of eight different English-speaking countries. The regional differences
in the association between crime and poverty cannot be justified based on
different levels of inequality or unemployment, which the literature correlates
to property crimes. The variation in the observed rates of crime-poverty bias
for different geographic locations could be influenced by cultural factors and
the tendency to overestimate the equality of opportunities and social mobility
in specific countries. These results have consequences for policy-making and
open a new path of research for poverty mitigation with the focus not only on
the poor but on society as a whole. Acting on the collective bias against the
poor would facilitate the approval of poverty reduction policies, as well as
the restoration of the dignity of the persons affected. | Georgina Curto, Svetlana Kiritchenko, Isar Nejadgholi, Kathleen C. Fraser | 2023-03-24T16:35:42Z | http://arxiv.org/abs/2303.14128v1 | # The crime of being poor
###### Abstract
The criminalization of poverty has been widely denounced as a collective bias against the most vulnerable. NGOs and international organizations claim that the poor are blamed for their situation, are more often associated with criminal offenses than the wealthy strata of society and even incur criminal offenses simply as a result of being poor. While no evidence has been found in the literature that correlates poverty and overall criminality rates, this paper offers evidence of a collective belief that associates both concepts. This brief report measures the societal bias that correlates criminality with the poor, as compared to the rich, by using Natural Language Processing (NLP) techniques in Twitter. The paper quantifies the level of crime-poverty bias in a panel of eight different English-speaking countries. The regional differences in the association between crime and poverty cannot be justified based on different levels of inequality or unemployment, which the literature correlates to property crimes. The variation in the observed rates of crime-poverty bias for different geographic locations could be influenced by cultural factors and the tendency to overestimate the equality of opportunities and social mobility in specific countries. These results have consequences for policy-making and open a new path of research for poverty mitigation with the focus not only on the poor but on society as a whole. Acting on the collective bias against the poor would facilitate the approval of poverty reduction policies, as well as the restoration of the dignity of the persons affected.
Artificial Intelligence (AI) provides insights that can trigger innovative interventions towards the UN Sustainable Development Goals (SDGs) (Vinuesa et al., 2020). The #1 UN SDG is the "end of poverty in all its forms everywhere", and there is an urgent call to find alternative paths to fight against poverty. The trend of global poverty reduction has been decreasing in the last decades and the Covid-19 pandemic has erased the last four years of poverty mitigation. Rising inflation and the impact of the war in Ukraine derail the process of poverty mitigation even further (The World Bank, 2023). Worldwide, the World Bank estimates that 685 M people are living below the US$2.15 a day poverty line (The World Bank, 2023). Poverty is a worldwide problem that affects not only the population in developing regions but also a significant percentage of people living in countries with thriving economies: in the United States, 11.6% of the population (US 37.9 M people) are in a situation of poverty and 18.5 M live in extreme poverty (United Nations, 2018).
The term _undeserving poor_ describes the difficulty for policy makers to approve and implement poverty reduction policies when the poor are blamed for their situation, since these policies are not popular (Nunn and Biressi, 2009). Therefore, the blamefulness of the poor could have an impact on the actual poverty levels. Curto et al. (Curto et al., 2022) provided evidence of bias against the poor, or aporophobia (Cortina, 2017), in Google, Twitter and Wikipedia word embeddings. This Natural Language Processing approach in Machine Learning has been widely used to identify biases within AI by representing words as vectors and measuring meaningful semantic relationships among them. It also allows us to reflect on societal biases, since the historical data used in AI has been produced by humans in real world scenarios.
Caricatured narratives of the rich as being industrious and entrepreneurial while the poor are seen as waters and scammers are at the root of this bias, which has been described as particularly prevalent in the United States (United Nations, 2018). The _criminalization of poverty_ refers to a discriminatory phenomenon where the poor are both associated with criminality and, at the same time, are being punished for being poor, generating a vicious circle.
The criminal offenses devised for sleeping rough in many cities of the so-called developed countries are an example of a legalized punishment for being poor, since they affect the homeless directly.
It must be highlighted that, to date, no evidence has been found that sustains the high level association between poverty and overall criminality rates. This correlation is complex to establish due to the multidimensional nature of poverty, the diversity of crimes (including violent and not violent), the method to identify them (self-reported and official reports), the potential police bias in the official crime rates reports, the different indicators of poverty used to study the potential correlation, the ages of the population in the sample, and geographic scope (Thornberry and Farnworth, 1982). However, recent studies report that factors such as a high poverty headcount ratio, high income inequality, and unemployment could have an impact on specific types of crimes, namely related to property (Imran et al., 2018; Anser et al., 2020).
While the criminalization of poverty has been widely denounced as an instance of bias against the poor, or aporophobia, empirical evidence of this collective prejudice was missing in the literature. This brief report aims to fill in this gap. By investigating how poor people are viewed in society through the analysis of social media texts, namely tweets, we discover that poor and homeless people are often discussed in association with criminality. We devise a metric, crime-poverty-bias (CPB), as the difference between the percentage of utterances mentioning criminality and the poor and the percentage of utterances mentioning criminality and the rich, and compare CPB in Twitter for eight English-speaking counties. We present the CPB results per country together with the factors which could influence the increase of property criminality rate according to the literature, namely poverty, unemployment and inequality rates. The purpose of the study is to provide empirical evidence on the criminalization of poverty and shed some light on the reasons behind it. We aim to answer the following questions: Can the differences in the association between poverty and criminality in different countries be justified based on the respective indicators of poverty, inequality, unemployment and criminality? Or are these differences due to a crime-poverty bias? The results have an impact on policy-making since CPB can hinder the acceptance of poverty mitigation policies by the public opinion.
## Method
We used the Twitter API to collect tweets in English from 25 August 2022 to 23 November 2022, pertaining to two groups: 'poor' and 'rich'. Since tweets can be up to 280 characters and include several sentences, we split each tweet into individual sentences. The corpus 'poor' comprises tweet sentences that contain the terms _the poor_ (used as a noun as opposed to an adjective, as in 'the poor performance'), _poor people_, _poor ppl_, _poor folks_, _poor families_, _homeless_, _on welfare_, _welfare recipients_, _low-income_, _underprivileged_, _disadvantaged_, _lower class_. We excluded explicitly offensive terms that tend to be used in personal insults, such as _trailer trash_ or _scrounger_. We also collected tweets related to the group 'rich', using query terms _the rich_ (used as a noun), _rich people_, _rich ppl_, _rich kids_, _rich men_, _rich folks_, _rich guys_, _rich elites_, _rich families_, _wealthy_, _well-off_, _upper-class_, _upper class_, _millionaires_, _billionaires_, _elite class_, _privileged_, _executives_. The single words _poor_ and _rich_ were not included as query terms because of their polysemy (they can apply to people, but can also be used to describe other things, e.g., 'poor results', 'rich dessert'). In total, there are over 1.3 million sentences in the corpus 'poor' and over 1.9 million sentences in the corpus 'rich'.
We were also interested in the geographical locations from which tweets originated. Unfortunately, only about 2% of tweets included the exact geographical information. Therefore, in addition to the tweet location we relied on user location that tweeters voluntarily provide in their Twitter accounts, which was available for about 60% of tweets. We automatically parsed user location descriptions to extract country information for the most frequently mentioned countries. Table 1 shows the number of sentences in both corpora per geographical location. In the following analysis, we focus on eight geographically-diverse English-speaking countries, for which both corpora contain at least 1,000 sentences each: the United States of America, the United Kingdom, Canada, India, Nigeria, Australia, South Africa, and Kenya.
To explore the themes commonly discussed with regard to poor people, we analyzed the content of sentences within the corpus 'poor' using an unsupervised topic modeling algorithm, BERTopic (Grootendorst, 2022). Topic modeling is a Machine
Learning technique that aims to group texts semantically. As a first step, BERTopic converts texts to 384-dimensional vector representations so that semantically similar texts have similar representations. This conversion is performed using a sentence transformer, a large language model trained on over one billion sentences scraped from the web. Then, the vectors are clustered together using a density-based clustering technique HDBSCAN Campello et al. (2013). The clustering algorithm identifies dense groups of semantically similar texts and leaves texts that do not fit any clusters as outliers. For computational efficiency, BERTopic was applied on a random sample of 600,000 sentences from the corpus 'poor'.
## Results and Discussion
The topic modeling on sentences related to the group 'poor' resulted in 142 extracted topics. Among the top topics in terms of frequency we find expected discussions such as the lack of affordable housing and (un)fair distribution of taxes among the socio-economic classes. We also discover themes relating to drug use, alcohol addiction and mental health issues associated with poverty. In this paper, we focus on the prominent topic related to criminality, which includes about 6,000 tweets. This topic is characterized by the presence of words such as _crime_, _police_, _cops_, _criminals_, and _jail_. Some utterances explicitly associate poverty with crime, while others oppose such positions and criticize the systemic discrimination of the poor, including over-policing and disproportionate incereration of poor and homeless people. However, the negation of stereotypes through counter-speech (e.g., "not all homeless people are criminals") is also a proof that these stereotypes exist in society Beukeboom and Burgers (2019). Table 2 shows some example tweets blaming the poor or denouncing social bias against the group.
Since topic modelling techniques are typically used for qualitative studies, a quantitative analysis was carried out to complement the results obtained through BERTopic. We quantified the percentage of sentences from the 'poor' corpus (1.3M sentences) that contain the terms related to criminality, per country (Table 3, row 1). The terms related to criminality include _crime_, _crimes_, _criminal_, _criminals_, _criminalizing_, _jail_, _prison_, _arrest_. For comparison, we include the percentage of sentences from the 'rich' corpus (1.9M sentences) that contain terms related to criminality (row 2). We refer to the difference between rows 1 and 2 as the _Crime-Poverty-Bias (CPB)_, since it measures the rate at which the poor are related to criminality as compared to the rich.
The results shown in Table 3 indicate that CPB is highest in the United States, followed by Canada and South Africa. Although the literature finds no correlation in general between overall crime rates and poverty, several factors have been identified that may potentially lead to an increase in property crime, such as income inequality and unemployment rate Anser et al. (2020); Imran et al. (2018). If the association between crime and poverty measured by the CPB reflects reality, we would expect higher CPB rates in countries rated higher on these measures. Therefore, to contextualize these outcomes, Figure 1 offers the CPB results together with each country's overall criminality rate, poverty
\begin{table}
\begin{tabular}{l r r} Location & ‘Poor’ corpus & ‘Rich’ corpus \\ \hline United States & 326,993 & 460,848 \\ United Kingdom & 80,947 & 135,211 \\ Canada & 32,978 & 43,686 \\ India & 14,029 & 16,296 \\ Nigeria & 10,529 & 26,693 \\ Australia & 9,698 & 14,654 \\ South Africa & 7,729 & 8,600 \\ Kenya & 3,378 & 6,478 \\ Other locations & 337,252 & 461,437 \\ No location information & 539,365 & 748,740 \\ \hline Total & 1,362,898 & 1,922,643 \\ \hline \end{tabular}
\end{table}
Table 1: The number of tweet sentences in the ‘poor’ and ‘rich’ corpora per geographical location.
\begin{table}
\begin{tabular}{l} \hline
_Explicit statements associating the poor with criminality:_ \\
Put the homeless in jail and make work camps. \\
More and more homeless people are doing crime. \\
Homeless people in that area, criminals on the streets!! \\ \hline
_Statements opposing the criminalization of poverty, which elicit the underlying stereotype:_ \\
It's quite bold of you to claim that all homeless people are criminals. \\
So if you are in poverty you commit violent crimes and murder because you are disadvantaged? \\
Law enforcement and prisons are routinely used against poor people not for the reasons of safety, but to protect the wealthy and privileged. \\ \hline
\end{tabular}
\end{table}
Table 2: Example tweets blaming the poor or denouncing social bias against the group.
To put the CPB results in context, figure 1 presents them together with each country's overall criminality rate, poverty headcount ratio at $2.5 a day (purchasing power adjusted prices), inequality indicators (Gini Index and 10% income share) and unemployment rate. However, we do not observe a correlation between these indicators and the CPB for the eight countries. The United States has the highest CPB rates despite having lower poverty, criminality, inequality and unemployment rates than South Africa. Therefore, we must look to other causes for the overestimated correlation between poverty and criminality in the public opinion in the United States. The relatively low CPB results obtained for other countries such as Kenya, with higher poverty headcount and similar levels of inequality and unemployment rates to the United States, would support this hypothesis.
A potential explanation could be found in the narrative shared by the United States and Canada of being the "land of opportunity", where the rich and the poor are thought to have an equal chance of success and an illusory emphasis on employment influences the discussion on public social spending (United Nations, 2018). However, the principle of equal opportunity can be considered an oxymoron since every person is exposed to different opportunities in life from the moment of birth (Sandel, 2020), and the job market for individuals with low educational qualifications, disabilities, or no assistance in finding employment is very limited. The indicators of social mobility and inequality support the claim from the United Nations that the poor in the United States are overwhelmingly those born into poverty (United Nations, 2018): intergenerational social mobility in the United States from the bottom to the top income quintile is as low as 7.8%, below European countries such as the UK, France, Italy, or Sweden (Alesina et al., 2018). In fact, intergenerational mobility has declined substantially over the last 150 years in the United States (Song et al., 2020) and income inequality has been growing since the 1980s (The World Bank, 2023).
Although the use of Twitter is not representative of the total population within the countries in scope, the data obtained provides a first approach to measuring the phenomenon of the criminalization of the poor, which constitutes an instance of aporophobia. These preliminary results have an impact for poverty reduction policy making, because when the poor are considered "undeserving of help" it is more difficult for governments to approve laws to mitigate poverty. It is also harder for the people in need to overcome their situation when they are blamed for it and lack support from their community.
This brief report aims to initiate a new path of research for poverty mitigation, in which the focus is not only on the redistribution of wealth but also on the mitigation of social bias against the poor. While bias in terms of gender and race has been extensively analysed, bias against the poor has not received the attention it deserves in either the AI or the social sciences literature, despite the potential impact on the first global challenge identified by
Figure 1: For the countries in scope, overview of: the results obtained for Crime-Poverty-Bias (CPB), poverty headcount at $2.5 a day (purchasing power adjusted prices), overall criminality rate, indicators of inequality (Gini index and 10% income share) and unemployment rates. Sources: the CPB is obtained through the authors’ analysis of a corpus of tweets using Natural Language Processing techniques; poverty headcount ratio, Gini index, and 10% income share (World Bank 2017 or nearest year); unemployment rate (World Bank 2021); overall criminality rate (worldpopulationreview.com).
\begin{table}
\begin{tabular}{l c c c c c c c c} Percentage of sentences & USA & Canada & South Africa & Kenya & UK & Nigeria & Australia & India \\ \hline
1. in the ‘poor’ corpus & 3.4 & 2.1 & 2.1 & 1.6 & 1.1 & 0.8 & 1.0 & 0.7 \\
2. in the ‘rich’ corpus & 1.2 & 0.9 & 1.0 & 0.9 & 0.6 & 0.4 & 0.7 & 0.6 \\
3. Crime-Poverty-Bias (CPB) & 2.2 & 1.2 & 1.1 & 0.7 & 0.5 & 0.4 & 0.3 & 0.1 \\ \hline \end{tabular}
\end{table}
Table 3: Percentage of sentences that include the terms related to criminality in the ‘poor’ and ‘rich’ corpora and the difference in frequency, which constitutes the Crime-Poverty-Bias (CPB).
2305.17977 | A unified framework for Simplicial Kuramoto models | Simplicial Kuramoto models have emerged as a diverse and intriguing class of
models describing oscillators on simplices rather than nodes. In this paper, we
present a unified framework to describe different variants of these models,
categorized into three main groups: "simple" models, "Hodge-coupled" models,
and "order-coupled" (Dirac) models. Our framework is based on topology,
discrete differential geometry as well as gradient flows and frustrations, and
permits a systematic analysis of their properties. We establish an equivalence
between the simple simplicial Kuramoto model and the standard Kuramoto model on
pairwise networks under the condition of manifoldness of the simplicial
complex. Then, starting from simple models, we describe the notion of
simplicial synchronization and derive bounds on the coupling strength necessary
or sufficient for achieving it. For some variants, we generalize these results
and provide new ones, such as the controllability of equilibrium solutions.
Finally, we explore a potential application in the reconstruction of brain
functional connectivity from structural connectomes and find that simple
edge-based Kuramoto models perform competitively or even outperform complex
extensions of node-based models. | Marco Nurisso, Alexis Arnaudon, Maxime Lucas, Robert L. Peach, Paul Expert, Francesco Vaccarino, Giovanni Petri | 2023-05-29T09:37:03Z | http://arxiv.org/abs/2305.17977v1 | # A unified framework for Simplicial Kuramoto models
###### Abstract
Simplicial Kuramoto models have emerged as a diverse and intriguing class of models describing oscillators on simplices rather than nodes. In this paper, we present a unified framework to describe different variants of these models, categorized into three main groups: "simple" models, "Hodge-coupled" models, and "order-coupled" (Dirac) models. Our framework is based on topology, discrete differential geometry as well as gradient flows and frustrations, and permits a systematic analysis of their properties. We establish an equivalence between the simple simplicial Kuramoto model and the standard Kuramoto model on pairwise networks under the condition of manifoldness of the simplicial complex. Then, starting from simple models, we describe the notion of simplicial synchronization and derive bounds on the coupling strength necessary or sufficient for achieving it. For some variants, we generalize these results and provide new ones, such as the controllability of equilibrium solutions. Finally, we explore a potential application in the reconstruction of brain functional connectivity from structural connectomes and find that simple edge-based Kuramoto models perform competitively or even outperform complex extensions of node-based models.
## I Introduction
Synchronization is defined as the emergence of order from the interactions among many parts. It is a ubiquitous phenomenon that occurs in both natural and human-engineered systems [1; 2; 3] and can be observed in a wide range of systems, including the firing of neurons [4], the twinkling of fireflies [5], power grids [6; 7] or audience applause [8]. Despite the complexity and differences of these systems, the canonical Kuramoto model [9] provides a unified framework for describing the onset of synchronization in systems of oscillators that interact in a pairwise fashion. While the original version of the model included interactions between all pairs of oscillators, later extensions of the model allowed the specification of arbitrary network topologies [10]. This, in turn, revealed interesting relationships between the dynamical properties of the model and the structure of the underlying network [11; 12].
Traditional networks, however, provide a limited perspective on complex systems as they only consider pairwise interactions. To overcome this limitation, a new paradigm has recently emerged: networks with group (or higher order) interactions, i.e., interactions between any number of units [13; 14; 15]. Group interactions have been recognized to play an important role in a rapidly growing list of systems, including brain networks [16], social [17; 18; 19] and biological communities [20; 21; 22] among many others [13; 14]. Group interactions can be represented by two main mathematical frameworks: hypergraphs or simplicial complexes. Although hypergraphs are more general, simplicial complexes have more structure because of the additional inclusion (or closure) condition: all sub-simplices of a simplex must be contained in a simplicial complex. Consequently, simplicial complexes--like pairwise networks--possess a rich theory rooted in the mathematical field of discrete differential geometry and topology. Their expressive power is also greatly increased by the possibility of including weights [23], which naturally become embedded in their topological [24; 25] and spectral structure [26]. The effect of a simplicial complex structure has been shown to induce new dynamical phenomena, such as explosive transitions [27] and multistability [28], across a variety of dynamical processes, including random walks [29], diffusion [30; 31; 32], consensus [33; 34; 35], spreading [36; 37; 38; 39; 40; 41; 42], percolation [43; 44; 45], and evolutionary game theory [46].
Naturally, this process of _simplicialization_ has also reached synchronization. One way to approach the modeling of synchronization in higher-order systems is to extend the family of possible interactions to include groups. From a network of interacting oscillators, we pass to a simplicial complex where node oscillators can also interact through triangles, tetrahedra, or higher order structures [47; 48; 49; 50; 51; 52]. Another approach is to consider the simplicial Kuramoto [53; 54] as a model of synchronizing dynamics of higher-order topological signals. With it, we are not constrained to consider the evolution of oscillators placed on nodes, but we can place them on simplices
of any order. This change, which at first may appear arbitrary, allows us to consider higher-order interactions in a novel and powerful way: if an edge can connect only two nodes at a time, a triangle connects three edges, a tetrahedron four triangular faces, and so on. More generally, simplicial oscillators of order \(k\) will interact through \((k+1)\)-simplices, resulting in interactions of order \(k+2\). In line with the guiding principles of higher-order network theory, the essential difference between agents and _carriers of interactions_ fades away, leaving us with wider modeling freedom. Its evolution equation, moreover, can be elegantly written by borrowing some of the concepts of discrete exterior calculus [55], the discrete analogue of differential geometry on manifolds. This geometrical structure gives us precious insights into the dynamics of the model and how it relates to the topological properties of the simplicial complex. This fruitful relation with topology has also recently put the simplicial Kuramoto at the center of attention, resulting in different variants and extensions of the original model [56; 57; 58; 54].
In this work, we aim to lay down the mathematical foundations for the study and derivation of Kuramoto models on simplicial complexes. Our approach relies on consistent geometrical and dynamical structures such as discrete differential geometry and gradient flows to express the simplicial Kuramoto models in a strict mathematical form while allowing for several extensions able to couple the dynamics across Hodge subspaces or simplicial orders.
### Structure of the paper
The work is structured as follows. We first state the Kuramoto model in Section II.1 and review the needed concepts of discrete differential geometry in Section II.2.
In Section III, we introduce the standard simplicial Kuramoto model and interpret its interactions in terms of the geometry and topology of the underlying simplicial complex. With this approach, we find that the model is locally equivalent to the standard Kuramoto model when the complex is locally _manifold-like_.
Furthermore, in Section IV, we define a natural notion of simplicial phase-locking, which we then relate to the projections of the dynamics on higher and lower dimensional simplices, allowing us to give a geometrical picture of its meaning. Taking inspiration from classic works on the node Kuramoto [11], we discuss the phase-locked configurations and derive necessary and sufficient conditions on the coupling strength for their existence.
Then, in Section V, we review and generalize some variants of the simplicial Kuramoto model that couple the dynamics across Hodge subspaces, such as the explosive model [53] or the simplicial Kuramoto-Sakaguchi [54]. Then, in Section VI we expand on the Dirac formulation of [56] that couples oscillators across orders of interactions, and Hodge subspaces when coupled with the models of Section V.
Finally, in Section VII, we apply some of the models studied here to real-world brain data and show how simple, edge-based simplicial Kuramoto models can achieve better correlations with functional connectivity than the standard node-based Kuramoto model.
## II Preliminaries
### Kuramoto model
We begin by briefly introducing the classical Kuramoto model. Let us consider a system of \(n\)_phase_ oscillators, characterized solely by their phase \(\theta_{i}\) and _natural frequency_\(\omega_{i}\), the frequency at which they oscillate when isolated from any interactions. The evolution of the uncoupled system can be described by a set of differential equations: \(\dot{\theta}_{i}=\omega_{i}\) for each oscillator \(i\). To account for the interaction among oscillators, various approaches can be employed, depending on the underlying physics of the phenomenon under investigation. A particularly elegant and widely studied model, renowned for its simplicity, analytical tractability, and rich behavior, was introduced by Kuramoto [9]. Known as the Kuramoto model, it is described by the following system of first-order differential equations:
\[\dot{\theta}_{i}=\omega_{i}-\sigma\sum_{j=1}^{n}\sin(\theta_{i}-\theta_{j}). \tag{1}\]
In this formulation, an additional term captures the effect of interactions between oscillator \(i\) and every other oscillator \(j\), modulated by a positive _coupling_ or _interaction strength_ parameter, denoted as \(\sigma\). By the properties of the sine function, we observe that the interaction force between oscillators \(i\) and \(j\) becomes zero when \(\theta_{i}-\theta_{j}=k\pi\) i.e. when \(\theta_{i}\) and \(\theta_{j}\) represent the same or opposite angles modulo \(2\pi\). Conversely, the interaction is strongest when the phase difference between the oscillators corresponds to odd multiples of \(\frac{\pi}{2}\), implying orthogonal states on the unit circle. This simple interaction mechanism forms the basis of the Kuramoto model, which can be further generalized, enabling the study of diverse synchronization phenomena and their intricate dynamics.
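To make the model concrete, a minimal numerical sketch of Eq. (1) is given below. It assumes an all-to-all coupled population and a simple forward-Euler integrator; the population size, coupling strength, time step and random natural frequencies are illustrative choices, not values taken from the text.

```python
import numpy as np

def kuramoto_rhs(theta, omega, sigma):
    # Eq. (1): dtheta_i/dt = omega_i - sigma * sum_j sin(theta_i - theta_j)
    return omega - sigma * np.sin(theta[:, None] - theta[None, :]).sum(axis=1)

rng = np.random.default_rng(0)
n, sigma, dt = 10, 0.5, 0.01
theta = rng.uniform(0, 2 * np.pi, n)     # initial phases
omega = rng.normal(0, 0.1, n)            # natural frequencies

for _ in range(5000):                    # forward-Euler integration
    theta = theta + dt * kuramoto_rhs(theta, omega, sigma)

# degree of synchronization, |1/N sum_j exp(i theta_j)|
print("order parameter:", np.abs(np.exp(1j * theta).mean()))
```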
### Discrete differential geometry
A simplicial complex [55; 59] is a generalization of a graph that, along nodes and edges, can include triangles, tetrahedra, and their higher-dimensional analogs. Given a set of \(N\) vertices \(\mathcal{V}=\{v_{0},\ldots,v_{N-1}\}\) we call \(k\)_-simplex_ any subset of \(\mathcal{V}\) with \(k+1\) elements. The dimension of a \(k\)-simplex \(\sigma\) is \(k\) and \(\dim\sigma=k\). Geometrically, we think of \(0\)-simplices as nodes, \(1\)-simplices as
edges, 2-simplices as triangles, and so on. A _simplicial complex_\(\mathcal{X}\) is a set of simplices closed by inclusion, that is, every subset of a simplex is itself a simplex belonging to the complex (Fig. 1a). We call \(n_{k}\) the number of \(k\)-simplices in \(\mathcal{X}\) and consider _oriented_ simplicial complexes, where each \(k\)-simplex is given an ordering of its vertices \([v_{0},\ldots,v_{k}]\), and two orderings are considered equivalent if they are related by an even number of swaps. This means that each simplex can have only two possible orientations (Fig. 1b).
Simplices in a simplicial complex can be related in two different ways. A _subface_ of a \(k\)-simplex \(\sigma\in\mathcal{X}\) is any \((k-1)\)-simplex \(\tau\) contained in \(\sigma\). The subfaces of a triangle \([a,b,c]\) are, for example, all its edges \([a,b],[b,c],[a,c]\). We write \(\tau<\sigma\) when \(\tau\) is a subface of \(\sigma\). A _superface_ of a \(k\)-simplex \(\sigma\in\mathcal{X}\) is any \((k+1)\)-simplex \(\tau\) which contains \(\sigma\). In this case, we write \(\tau>\sigma\). An oriented \(k\)-simplex \(\sigma=[v_{0},\ldots,v_{k}]\) is said to be _coherently oriented_ with its subface \(\tau\), with nodes \(\{v_{0},\ldots,v_{i-1},v_{i+1},\ldots,v_{k}\}\), if the orientation given to \(\tau\) is equivalent to
\[[v_{0},\ldots,v_{i-1},v_{i+1},\ldots,v_{k}]\,. \tag{2}\]
We write \(\sigma\sim\tau\) when \(\sigma>\tau\) and they are coherently oriented, while \(\sigma\nsim\tau\) when \(\sigma>\tau\) and they are incoherently oriented. In addition, we say that two \(k\)-simplices \(\sigma,\tau\) are _lower adjacent_ if they share a common subface (we write \(\sigma\smile\tau\)) while they are said to be _upper-adjacent_ if there exists a \((k+1)\)-simplex which contains both of them (we write \(\sigma\frown\tau\)).
For this work, it is important to highlight two special types of subfaces. We call _free_, a subface which belongs only to a single simplex, and _manifold-like_ a subface \(\tau\) which belongs to exactly two simplices \(\sigma_{1},\sigma_{2}\), one coherently oriented with \(\tau\) (\(\sigma_{1}\sim\tau\)) and the other incoherently (\(\sigma_{2}\nsim\tau\)). This last definition comes from the fact that a simplicial complex where all the \((n-1)\)-simplices are manifold-like (which we call _simplicial n-manifold_), when embedded into a Euclidean space, is an oriented topological \(n\)-manifold, in the sense that it locally looks like \(\mathbb{R}^{n}\). If the simplicial complex is not a simplicial manifold, we can still have manifold-like subfaces which, when the complex is embedded, correspond to subspaces that are manifolds. In dimension 1, for example, a manifold-like subface is a node incident to only two edges so that the complex is locally a line (Fig. 4a).
From a geometric point of view, simplicial complexes take the role of geometrical domains upon which we define _cochains_, algebraic objects which correspond to differential forms. A \(k\)-cochain is simply a function associating a real number to every \(k\)-simplex. The vector space of \(k\)-cochains is named \(C^{k}(\mathcal{X})\) with a natural basis given by the functions associating 1 to a particular simplex, and 0 to all the others. Any \(k\)-cochain can therefore be written as
\[C^{k}(\mathcal{X})\ni x=\sum_{i=1}^{n_{k}}x_{i}\sigma^{i}\,, \tag{3}\]
with basis cochains \(\sigma^{i}(\sigma_{j})=\delta_{j}^{i}\) associated to every \(k\)-simplex \(\sigma_{i}\). Moreover, it is conventional to algebraically impose that a change of sign corresponds to a change of orientation
\[[v_{0},v_{1},\ldots,v_{k}]=-[v_{1},v_{0},\ldots,v_{k}]\,.\]
If we assign positive weights to the \(k\)-simplices \(w_{1}^{k},\ldots,w_{n_{k}}^{k}>0\), then we can endow the cochain space with an inner product given by the inverse of the diagonal matrix \(W_{k}\)
\[W_{k}=\operatorname{diag}\left(w_{1}^{k},\ldots,w_{n_{k}}^{k}\right)\,. \tag{4}\]
We denote the inner product of cochains by \(\left\langle v,w\right\rangle_{w^{k}}\overset{\text{\tiny def}}{=}v^{T}W_{k} ^{-1}w\), and its induced norm by \(\left\|v\right\|_{w^{k}}\overset{\text{\tiny def}}{=}\sqrt{v^{T}W_{k}^{-1}v}\), explicitly given as
\[\left\|v\right\|_{w^{k}}=\sqrt{\sum_{i=1}^{n_{k}}\frac{1}{w_{i}^{k}}v_{i}^{2}}\,. \tag{5}\]
The inner product and the norm reduce to the standard Euclidean inner product and 2-norm when the complex is _unweighted_, i.e. \(W_{k}=I_{n_{k}}\) for all \(k=1,\ldots,K\). In the rest of this work, we will always consider cochain spaces endowed with weights, meaning that inner products and norms will be weighted, and transposes will become adjoints. While this approach requires some care, it allows us to avoid carrying weight matrices along in every formula, resulting in more elegant and concise expressions that do not sacrifice generality.
The adjacency structure of the simplicial complex, and thus the complex itself, can be encoded in a family of linear operators acting on cochains. We define the \(k\)-th order _incidence matrix_\(B_{k}\in\mathbb{R}^{n_{k-1}\times n_{k}}\) describing the adjacency relations between \(k\)-simplices and \((k-1)\)-simplices,
Figure 1: **a.** Geometrical representation of a small oriented simplicial complex. **b.** Oriented simplices of orders 0 (nodes), 1 (edges), 2 (triangles) and 3 (tetrahedra).
as
\[B_{k}(i,j)=\begin{cases}+1&\text{ if }\dim\sigma_{i}=k-1,\,\sigma_{j}>\sigma_{i} \text{ and }\sigma_{j}\sim\sigma_{i}\,,\\ -1&\text{ if }\dim\sigma_{i}=k-1,\,\sigma_{j}>\sigma_{i}\text{ and }\sigma_{j}\nsim \sigma_{i}\,,\\ 0&\text{ otherwise}\,.\end{cases} \tag{6}\]
We then define the _coboundary operator_
\[D^{k}=B_{k+1}^{\top}\,, \tag{7}\]
sending \(k\)-cochains to \((k+1)\)-cochains. Its adjoint with respect to the inner product, which we name _weighted boundary operator_, is
\[B^{k}=(D^{k-1})^{*}=W_{k-1}B_{k}W_{k}^{-1}\,. \tag{8}\]
Indeed, by definition of adjointness, for a \((k-1)\)-cochain \(x\) and a \(k\)-cochain \(y\), \(\langle D^{k-1}x,y\rangle_{w^{k}}=\langle D^{k-1}x,W_{k}^{-1}y\rangle_{2}=\langle x,B_{k}W_{k}^{-1}y\rangle_{2}=\langle x,W_{k-1}^{-1}W_{k-1}B_{k}W_{k}^{-1}y\rangle_{2}=\langle x,B^{k}y\rangle_{w^{k-1}}\). The coboundary and boundary operators should be thought of as the discrete analogs of the divergence and curl operators of differential calculus. They satisfy what is known as the "fundamental theorem of topology"
\[B^{k}B^{k+1}=0,\,D^{k}D^{k-1}=0\,\,\,\,\forall k\,, \tag{9}\]
which is a linear-algebraic formalization of the topological fact that a boundary has no boundary. We call \(k\)_-cocycle_ a \(k\)-cochain \(x\) such that
\[D^{k}x=0\,, \tag{10}\]
and a _weighted \(k\)-cycle_ a \(k\)-cochain \(x\) such that
\[B^{k}x=0\,. \tag{11}\]
With these two operators, we can define the _discrete Hodge Laplacians_[60], which generalize the well-known graph Laplacian to act on higher order cochains
\[L^{k}=L^{k}_{\downarrow}+L^{k}_{\uparrow}=D^{k-1}B^{k}+B^{k+1}D^{k}\,. \tag{12}\]
It can be easily proven that the kernel of the discrete \(k\)-Hodge Laplacian is isomorphic to the \(k\)-th (co)_homology group_ of the simplicial complex
\[\ker L^{k}=\ker B^{k}\cap\ker D^{k}\cong\mathcal{H}^{k}(\mathcal{X};\mathbb{ R})=\ker B^{k}\big{/}\operatorname{Im}B^{k+1}\,,\]
meaning that its dimension \(\dim\ker L^{k}\) is equal to the \(k\)-th _Betti number_ of \(\mathcal{X}\), i.e. the number of \(k\)-dimensional holes of the simplicial complex [61]. Intuitively, the \(0\)-dimensional holes are the connected components, \(1\)-dimensional holes are empty regions bounded by \(1\)-simplices, whereas the \(2\)-dimensional holes are cavities bounded by \(2\)-simplices.
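As an illustration of these definitions, the sketch below builds the incidence matrices, (co)boundary operators and Hodge Laplacians of a small unweighted complex (a filled triangle \([1,2,3]\) glued along the edge \([1,3]\) to an empty triangle \([1,3,4]\)) and recovers its Betti numbers from the Laplacian kernels. The complex and its orientations are assumptions made purely for this example.

```python
import numpy as np

# incidence matrices, Eq. (6): rows = (k-1)-simplices, columns = k-simplices
B1 = np.array([[-1, -1,  0,  0, -1],    # node 1
               [ 1,  0, -1,  0,  0],    # node 2
               [ 0,  1,  1, -1,  0],    # node 3
               [ 0,  0,  0,  1,  1]])   # node 4; edges [12],[13],[23],[34],[14]
B2 = np.array([[1], [-1], [1], [0], [0]])   # triangle [123]

# unweighted case: coboundary D^k = B_{k+1}^T and boundary B^k = B_k, Eqs. (7)-(8)
D0, D1 = B1.T, B2.T
assert np.all(B1 @ B2 == 0)             # "a boundary has no boundary", Eq. (9)

# Hodge Laplacians, Eq. (12)
L0 = B1 @ D0
L1 = D0 @ B1 + B2 @ D1
L2 = D1 @ B2

betti = [L.shape[0] - np.linalg.matrix_rank(L) for L in (L0, L1, L2)]
print("Betti numbers (b0, b1, b2):", betti)   # expected (1, 1, 0): one component, one 1-hole
```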
## III The simplicial Kuramoto model
In this section, we formulate and study the Kuramoto model for interacting simplicial oscillators proposed in [53]. The rest of this section is organized as follows:
* In Section III.1, we formulate the simplicial Kuramoto model using the tools of discrete differential geometry introduced in Section II.2.
* In Section III.2, we describe the local form of the two types of interactions in the model: from below and from above. In the case of interactions from below, we identify the presence of self-interactions resulting from free subfaces.
* In Section III.3, we show that the \(k\)-th order simplicial Kuramoto model and the standard node Kuramoto model are equivalent when the simplicial complex is a simplicial \(k\)-manifold.
* In Section III.4, we describe how the dynamics naturally split into three independent subdynamics using the combinatorial Hodge decomposition theorem.
* In Section III.5, we recall the definition of simplicial order parameter proposed in [54], discuss its implications on the meaning of synchronization in the simplicial model, and its differences with the standard Kuramoto order parameter.
### Simplicial Kuramoto model
Given a simplicial complex \(\mathcal{X}\), the \(k\)-th order simplicial Kuramoto model [53; 54] describes a system where the \(k\)-simplices are oscillators interacting through common subfaces and superfaces. For example, one can consider oscillating edges that interact through common nodes and triangles (see Fig. 3a). The model can be elegantly formulated with the boundary and coboundary operators as
\[\dot{\theta}_{(k)}=\omega-\sigma^{\uparrow}B^{k+1}\sin\left(D^{k}\theta_{(k) }\right)-\sigma^{\downarrow}D^{k-1}\sin\left(B^{k}\theta_{(k)}\right)\,. \tag{13}\]
Here, the phases of the oscillating \(k\)-simplices are gathered in the \(n_{k}\)-dimensional vector \(\theta_{(k)}\), formalized as a \(k\)-cochain \(\theta_{(k)}\in C^{k}(\mathcal{X})\), while \(\omega\in C^{k}(\mathcal{X})\) represents the natural frequencies, i.e. \(\omega_{i}\) is the frequency at which oscillator \(i\) oscillates when no interactions are present. The parameters \(\sigma^{\uparrow},\sigma^{\downarrow}>0\) represent respectively the strength of the coupling through superfaces and subfaces. As shown in [62], the two interaction terms \(B^{k+1}\sin(D^{k}\theta)\) and \(D^{k-1}\sin(B^{k}\theta)\) describe, respectively, interactions from _above_ and _below_, i.e. each oscillating \(k\)-simplex interacts with its adjacent simplices through both higher
\((k+1)\) and lower dimensional \((k-1)\)-simplices (Fig. 3). In Section III.2, we unpack the matrix formulation and see the explicit form of these interaction terms. For ease of notation, from now on we will drop the subscript from \(\theta_{(k)}\), as the order of oscillation can be easily inferred by the indices of the boundary and coboundary matrices in Eq. (13).
The form of Eq. (13) is not arbitrary but comes from the fact that, for \(k=0\), it reduces to the standard Kuramoto model (from now on referred to as "node Kuramoto") on a network
\[\dot{\theta}_{i}=\omega_{i}-\sigma\sum_{j}A_{ij}\sin(\theta_{i}-\theta_{j})\,, \tag{14}\]
where \(A\) is the graph adjacency matrix. To see why, notice that Eq. (14) can be rewritten in matrix form using the boundary and coboundary matrices (see Appendix A) as
\[\dot{\theta}=\omega-\sigma B^{1}\sin(D^{0}\theta)\,. \tag{15}\]
One can think of \(D^{0}\) as projecting the node phases on the edges by associating to each edge, which describes an interaction, the difference of its endpoints' phases. The boundary operator \(B^{1}\) then projects the interactions back to the nodes, so that each node receives contributions from all edges that are incident to it. The extension of this term to higher-order oscillators is straightforward once one sees the model as a nonlinear extension of the graph Laplacian \(L^{0}\theta=B^{1}D^{0}\theta\;\to B^{1}\sin(D^{0}\theta)\), which can be naturally generalized with the discrete Hodge Laplacian defined in Eq. (12).
Notice that in the case of the node Kuramoto, no simplices with order lower than the nodes exist, hence the dynamics results from interactions from _above_, as the left term of Eq. (13). The interaction term from _below_ is naturally introduced to account for the lower adjacency structure present in simplicial complexes, but absent in graphs. Simply put, two triangles can be adjacent through a common edge and a common tetrahedron, but two nodes can only be adjacent through an edge, i.e. a higher-order simplex. This dynamics belongs to the wider class of dynamical systems on simplicial complexes, whose stability properties have been studied when the sine is replaced with a general nonlinearity (e.g. [62; 63]).
**Example 1**.: _Let us consider, as an example, the simplicial Kuramoto dynamics on the edges of the unweighted small simplicial complex shown in Figure 2a. The simplicial complex of interest is \(\mathcal{X}=\{[1],[2],[3],[4],[12],[13],[23],[34],[123]\}\) and the incidence matrices of order \(1\) and \(2\), accounting for the orientations, are_
\[B_{1}=\begin{pmatrix}-1&-1&0&0\\ 1&0&-1&0\\ 0&1&1&-1\\ 0&0&0&1\end{pmatrix},\qquad B_{2}=\begin{pmatrix}1\\ -1\\ 1\\ 0\end{pmatrix},\]
_Being \(\mathcal{X}\) unweighted, we also have from Eq. (8) that \(B^{1}=B_{1}\), \(D^{0}=B_{1}^{\top}\), \(B^{2}=B_{2}\), \(D^{1}=B_{2}^{\top}\). If we consider the vector of phases on the edges \(\theta=(\theta_{[12]},\theta_{[13]},\theta_{[23]},\theta_{[34]})\) and their natural frequencies \(\omega=0\), then, after some algebra, we see that Eq. (13) becomes_
\[\begin{split}\dot{\theta}_{[12]}=&-\sigma^{\downarrow}\left(\sin(\theta_{[12]}+\theta_{[13]})+\sin(\theta_{[12]}-\theta_{[23]})\right)\\ &-\sigma^{\uparrow}\sin(\theta_{[12]}-\theta_{[13]}+\theta_{[23]})\\ \dot{\theta}_{[13]}=&-\sigma^{\downarrow}\left(\sin(\theta_{[13]}+\theta_{[12]})+\sin(\theta_{[13]}+\theta_{[23]}-\theta_{[34]})\right)\\ &+\sigma^{\uparrow}\sin(\theta_{[12]}-\theta_{[13]}+\theta_{[23]})\\ \dot{\theta}_{[23]}=&-\sigma^{\downarrow}\left(\sin(\theta_{[23]}-\theta_{[12]})+\sin(\theta_{[23]}+\theta_{[13]}-\theta_{[34]})\right)\\ &-\sigma^{\uparrow}\sin(\theta_{[12]}-\theta_{[13]}+\theta_{[23]})\\ \dot{\theta}_{[34]}=&-\sigma^{\downarrow}\left(\sin(\theta_{[34]}-\theta_{[13]}-\theta_{[23]})+\sin(\theta_{[34]})\right)\end{split} \tag{16}\]
_where the interaction terms from below and from above are clearly identifiable by their coupling strengths. Notice that some of the interactions from below are pairwise, others are higher-order (they involve three oscillators) and one, \(\sin(\theta_{[34]})\), is of order \(0\) as it depends on
Figure 2: **a.** Edge simplicial Kuramoto on the simplicial complex described in Example 1. **b.** The effective hypergraph of the dynamics describing the actual interactions taking place between the oscillators. The interaction hyperedges are labeled by the name of the simplex in the original complex which generates them. Note how the hyperedge [4], representing the term \(\sin(\theta_{[34]})\) in Eq. (16), is interpreted here as a self-interaction.
the value of a single oscillator. Moreover, the interaction from above through the triangle, as we expected, is higher-order and involves three oscillators._
While it is natural to define the dynamics on the edges and formulate the model using the incidence matrices of the simplicial complex, it is interesting to look at Equation (16) from another point of view. If we forget about the underlying simplicial complex and that \(\theta_{[ij]}\) is a phase associated with an edge oscillator, what we are left with is a dynamical system where the phases of 4 different oscillators evolve by interacting with each other in a way specified by the functional form of the equations. It is natural, therefore, to consider these oscillators as nodes and represent their interactions with _hyperedges_, i.e. arbitrary groups of nodes. What we get, by neglecting the signs inherited by the orientations, is an _effective hypergraph_ (Fig. 2b) which does not resemble the original simplicial complex but has the advantage of clearly representing the actual interaction structure underlying the dynamics. Thus, the simplicial Kuramoto model can be seen as a particular kind of hypergraph oscillator dynamics where the coupling functions depend on the orientations of the original simplices.
Notice how the coupling functions in Eq. (16) are different from the ones classically used in hypergraph oscillator models [47; 48; 49; 50; 51; 52] as, in general, they do not vanish when the phases of the interacting oscillators are all equal. Moreover, this hypergraph formulation, despite being more expressive, is harder to treat analytically as, with no knowledge of the underlying simplicial complex, one cannot resort to the powerful tools of topology and discrete geometry.
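For completeness, a short sketch of Example 1 in the matrix form of Eq. (13) is given below (unweighted complex, \(k=1\)). The coupling strengths, time step and initial phases are arbitrary illustrative values, and the assert simply checks the matrix form against the explicit component form of Eq. (16) for the edge \([34]\).

```python
import numpy as np

B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]])        # nodes 1..4  x  edges [12],[13],[23],[34]
B2 = np.array([[1], [-1], [1], [0]])     # edges x triangle [123]
D0, D1 = B1.T, B2.T

def rhs(theta, omega, sig_up, sig_down):
    # Eq. (13) for k = 1 on an unweighted complex
    return omega - sig_up * B2 @ np.sin(D1 @ theta) - sig_down * D0 @ np.sin(B1 @ theta)

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 4)
omega = np.zeros(4)
sig_up, sig_down, dt = 1.0, 1.0, 0.01

# sanity check of the component form in Eq. (16) for edge [34]
t12, t13, t23, t34 = theta
expected_34 = -sig_down * (np.sin(t34 - t13 - t23) + np.sin(t34))
assert np.isclose(rhs(theta, omega, sig_up, sig_down)[3], expected_34)

for _ in range(10000):                   # forward-Euler integration
    theta = theta + dt * rhs(theta, omega, sig_up, sig_down)
print("final phases on edges [12],[13],[23],[34]:", np.round(theta, 3))
```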
### As above, _not_ so below: the two types of interactions
The introduction of Eq. (13) was motivated by purely formal and symmetry arguments. It is then important to study the _local_ form of the different interaction terms to understand what kind of system is being described. Following a similar procedure to the one proposed in [62], we treat the two types of interactions separately.
Let us start with the interaction from _above_
\[I_{\uparrow}(\theta)\stackrel{{\text{\tiny def}}}{{=}}-B^{k+1} \sin(D^{k}\theta)\,, \tag{17}\]
which is a direct generalization of the standard node Kuramoto interaction term \(-B^{1}\sin(D^{0}\theta)\). To understand its behavior, we look at the simplest possible interaction of its kind, where we have a single \((k+1)\)-simplex regulating the interaction between its \(k+2\) oscillating subfaces (Fig. 3b). In this case, the incidence matrix is a column vector of the form
\[B_{k+1}=\xi^{\uparrow}\in\{-1,1\}^{k+2}\, \tag{18}\]
where \(\xi^{\uparrow}_{i}\) is 1 if the subface \(i\) is coherently oriented with the \((k+1)\)-simplex, and \(-1\) if it is incoherently oriented. It follows that \(D^{k}=(B_{k+1})^{\top}=(\xi^{\uparrow})^{\top}\). Equation (17) then becomes
\[I_{\uparrow}(\theta)=-\xi^{\uparrow}\sin\left((\xi^{\uparrow})^{\top}\theta \right)\,, \tag{19}\]
which means that each oscillator will be influenced by the same scalar value given by the oriented sum of the phases \((\xi^{\uparrow})^{\top}\theta\), with a sign depending on the coherence or incoherence of the orientations (see Fig. 3c). In the nodes case, this simply reduces to \(I_{\uparrow}(\theta)=(-\sin(\theta_{1}-\theta_{2}),-\sin(\theta_{2}-\theta_{1} ))^{\top}\). Given that a \((k+1)\)-simplex always has \(k+2\) subfaces, this kind of interaction involves \(k+2\) oscillators and thus, for \(k>0\), is genuinely higher order, in the sense that it does not result from the composition of multiple pairwise interaction terms.
The interaction from _below_
\[I_{\downarrow}(\theta)\stackrel{{\text{\tiny def}}}{{=}}-D^{k-1 }\sin(B^{k}\theta) \tag{20}\]
describes the interactions of simplicial oscillators through lower-order simplices. This interaction from below is absent in the node Kuramoto, and it represents the true novelty of the simplicial model. First, while only \(k+2\) simplices of order \(k\) can interact through a \((k+1)\)-simplex, an arbitrary number of \(k\)-simplices can have a common subface and interact from below. This allows us to consider arbitrary higher-order interactions, not restricted by the order of the oscillating simplices. It is then natural to ask if the interactions from below are locally of a similar form to the interactions from above, as in the case of Eq. (17). For this, let us consider again the simplest possible interaction, i.e. the general case of \(N\)\(k\)-simplices lower adjacent through a common subface with arbitrary orientations, as illustrated in Fig. 3d. By considering an appropriate ordering of the simplices, this configuration is described by the incidence matrix
\[B_{k}=\begin{pmatrix}\xi^{\downarrow}_{1}&\xi^{\downarrow}_{2}&\cdots&\xi^{ \downarrow}_{N}\\ o_{1}&0&\cdots&0\\ 0&o_{2}&\cdots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&o_{N}\end{pmatrix}\,, \tag{21}\]
where the entries \(\xi^{\downarrow}_{i}\in\{-1,1\}\) describe the relative orientation between simplex \(i\) and the common subface, while \(o_{i}\in\{-1,1\}^{k}\) contains the relative orientations between simplex \(i\) and its other subfaces not involved in this interaction from below. Given that \(D^{k-1}=(B_{k})^{\top}\), Eq. (20) becomes
\[I_{\downarrow}(\theta)=-\xi^{\downarrow}\sin\left((\xi^{\downarrow})^{\top} \theta\right)-k\sin(\theta)\,, \tag{22}\]
where \(\xi^{\downarrow}=(\xi^{\downarrow}_{1},\ldots,\xi^{\downarrow}_{N})^{\top}\). Notice that the first term is formally the same as in Eq. (19) for the interaction from above. Each oscillator receives a contribution depending on the phases of all oscillators involved in the interaction. This means that the higher order interaction given by a \((k+1)\)-simplex shares the same structure as the
higher order interaction given by \(k+2\) oscillators sharing a common subface.
However, an extra term \(-k\sin(\theta)\) appears, but by carrying out the computations which lead from Eq. (20) to Eq. (22), it appears that this extra term is a sum of contributions coming from the subfaces not involved in the interaction, which, in this case, are free i.e. they are subfaces of only one simplex. In fact, here each oscillator has \(k\) free subfaces, hence the multiplication factor \(k\) in front of \(\sin(\theta)\). In general, due to this term, each oscillator modulates its own frequency based on its own phase, which is akin to a self-interaction through its free subfaces (Fig. 3e). Formally, this term also appears in the Adler equation [2] describing the phase difference of a system with one oscillator driven by another one. From that point of view, the self-interaction terms can be seen as the driving of each oscillator by another non-existent oscillator that has a constant phase set to zero.
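The closed form of Eq. (22) can be checked numerically. The sketch below assembles the incidence matrix of Eq. (21) for \(N\) randomly oriented \(k\)-simplices sharing a single subface (the values of \(N\), \(k\) and the orientations are arbitrary assumptions) and compares the resulting interaction from below with the closed form, including the self-interaction term coming from the free subfaces.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 5, 2                                   # five 2-simplices sharing one edge
xi = rng.choice([-1, 1], size=N)              # orientations w.r.t. the shared subface
O = rng.choice([-1, 1], size=(N, k))          # orientations w.r.t. the free subfaces

# assemble B_k as in Eq. (21): one row for the shared subface, k rows per simplex
Bk = np.zeros((1 + N * k, N))
Bk[0, :] = xi
for i in range(N):
    Bk[1 + i * k: 1 + (i + 1) * k, i] = O[i]

theta = rng.uniform(0, 2 * np.pi, N)
I_below = -Bk.T @ np.sin(Bk @ theta)          # Eq. (20) with D^{k-1} = B_k^T (unweighted)
closed_form = -xi * np.sin(xi @ theta) - k * np.sin(theta)   # Eq. (22)
assert np.allclose(I_below, closed_form)
print("Eq. (22) reproduced for N =", N, "oscillators of order k =", k)
```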
### Manifold-like simplicial complexes
Interestingly, if the interaction from below involves exactly two oscillating simplices, one coherent and the other incoherent with respect to the common subface so that the complex at that subface is _manifold-like_ (see Section II.2), the interaction term will be the same as the standard node Kuramoto, i.e. of the form \(\sin(\theta_{1}-\theta_{2})\).
Different kinds of interactions occur at non-manifold-like subfaces (Fig. 4a), that is:
1. at subfaces that are free, resulting in self-interactions;
2. at subfaces that are adjacent to more than two simplices of order \(k\), i.e. genuinely high-order interactions;
3. at subfaces adjacent to two simplices that are both coherently or incoherently oriented, resulting in interactions of the form \(\sin(\theta_{1}+\theta_{2})\).
If every \((k-1)\)-simplex which has at least a \(k\)-simplex incident to it is manifold-like, so that the simplicial complex is a simplicial manifold, we have the following equivalence result.
**Theorem 1** (Simplicial Kuramoto on a manifold).: _Let \(\mathcal{X}\) be a \(k\)-dimensional oriented simplicial manifold. Then it follows that the simplicial Kuramoto dynamics of order \(k\) is equivalent to the standard node Kuramoto taking place on the \(1\)-skeleton of the dual cell complex to \(\mathcal{X}\), that is, the graph with a node for each \(k\)-simplex and an edge for each \((k-1)\)-simplex._
Proof.: Under the assumption that \(\mathcal{X}\) is manifold-like, we can apply the discrete analogue of Poincaré duality (see [55] p.50) to obtain \(B^{k}=\widetilde{D}^{0},\quad D^{k-1}=\widetilde{B}^{1}\), where \(\widetilde{B}\) and \(\widetilde{D}\) are, respectively, the weighted boundary and coboundary operators of the dual cell complex to \(\mathcal{X}\). The interaction term from below in the primal complex becomes the interaction term from above in the \(1\)-skeleton of the dual, i.e.
\[\dot{\theta}=\omega-\sigma^{\downarrow}D^{k-1}\sin(B^{k}\theta)=\omega-\sigma ^{\downarrow}\widetilde{B}^{1}\sin(\widetilde{D}^{0}\theta)\,,\]
Figure 3: **a.** The simplicial Kuramoto model allows us to consider oscillators, shown here as clocks, on the edges of a simplicial complex, interacting through nodes and triangles. **b.** The interaction from _above_ Eq. (17) happens between \(k+2\) oscillating \(k\)-simplices through a single \((k+1)\)-simplex, here highlighted in red. **c.** In the interaction from above, each oscillator involved is influenced by a term depending on the oriented sum of the phases. The phase \(\theta_{2}\) appears with a minus sign because its edge is oriented in the opposite direction relative to the triangle. **d.** The interaction from _below_ Eq. (20) happens between an arbitrary number of oscillating \(k\)-simplices through a single \((k-1)\)-simplex, highlighted here in red. **e.** Unlike interactions from above, interactions from below through free subfaces are akin to self-interactions.
which has the same form as the standard node Kuramoto model in Eq. (15).
An illustration of this result can be seen in Fig. 4b with a triangulated sphere. Notice also that the dual graph to a simplicial \(k\)-manifold will necessarily be a \((k+1)\)-regular graph, as every oscillating \(k\)-simplex has exactly \(k+1\) subfaces.
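A quick numerical illustration of Theorem 1, assuming the simplest oriented simplicial 1-manifold (a closed chain of \(n\) consistently oriented edges, with \(n\) chosen arbitrarily): the interaction from below on the edge oscillators coincides with a standard node Kuramoto interaction on the dual cycle graph, whose nodes are the edges of the primal complex.

```python
import numpy as np

n = 6
B1 = np.zeros((n, n))                         # nodes x edges; edge i goes from node i to node i+1
for i in range(n):
    B1[i, i] = -1.0
    B1[(i + 1) % n, i] = 1.0

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, n)          # phases of the edge oscillators

# interaction from below on the primal complex, Eq. (20) with k = 1
I_primal = -B1.T @ np.sin(B1 @ theta)

# node Kuramoto interaction on the dual cycle: each edge talks to the previous and next edge
I_dual = -(np.sin(theta - np.roll(theta, 1)) + np.sin(theta - np.roll(theta, -1)))

assert np.allclose(I_primal, I_dual)
print("interaction from below == node Kuramoto on the dual graph")
```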
### Hodge decomposition of the dynamics
Thanks to the particular form of the two interaction terms, one can use a well-known result in combinatorial topology to decompose the dynamics into three independent subdynamics. To show this, let us consider a simplicial complex \(\mathcal{X}\), weighted or unweighted, which describes the interactions between \(k\)-th order oscillators. Then, the simplicial Hodge decomposition theorem [64] states that every cochain can be decomposed into three orthogonal components
\[C^{k}(\mathcal{X})\cong\mathbb{R}^{n_{k}}=\operatorname{Im}B^{k+1}\oplus \ker L^{k}\oplus\operatorname{Im}D^{k-1}\,, \tag{23}\]
which can be interpreted as analogous to divergence-free, harmonic, and curl-free vector fields. We use the theorem to decompose both the phases cochain \(\theta\) and the natural frequencies \(\omega\)
\[\theta=\theta_{\mathrm{df}}+\theta_{\mathrm{H}}+\theta_{\mathrm{cf}},\ \omega=\omega_{\mathrm{df}}+\omega_{\mathrm{H}}+\omega_{\mathrm{cf}}\,, \tag{24}\]
where cf stands for _curl-free_, H for _harmonic_, and df for _divergence-free_. Rewriting the simplicial Kuramoto dynamics leveraging the orthogonality of the components, Eq. (13) is equivalent to the following system
\[\begin{cases}\dot{\theta}_{\mathrm{df}}=\omega_{\mathrm{df}}-\sigma^{\uparrow}B^{k+1}\sin(D^{k}\theta_{\mathrm{df}})\\ \dot{\theta}_{\mathrm{H}}=\omega_{\mathrm{H}}\\ \dot{\theta}_{\mathrm{cf}}=\omega_{\mathrm{cf}}-\sigma^{\downarrow}D^{k-1}\sin(B^{k}\theta_{\mathrm{cf}})\,.\end{cases} \tag{25}\]
These three equations are of crucial importance. They tell us that under the simplicial Kuramoto dynamics: **i)** the curl-free, the harmonic, and the divergence-free components evolve independently of one another, and **ii)** the harmonic component is not affected by the interaction terms. Notice also that the interaction from above affects only the divergence-free component, while the one from below affects only the curl-free component.
Moreover, if \(\omega_{\mathrm{H}}\neq 0\), there can be no equilibrium of the system as each component of \(\theta_{\mathrm{H}}\) will always evolve with a fixed angular speed. It follows that it is always possible to pass to a frame of reference where the harmonic component is constant in time, simply by performing the change of variables \(\theta\to\theta-\omega_{\mathrm{H}}t\). In the case of the node Kuramoto, \(\omega_{\mathrm{H}}=\bar{\omega}\mathbb{1}\), i.e. the constant vector of the average natural frequency. This is part of a more general observation that the addition of a harmonic cochain \(x\in\ker L^{k}\) to the phases has no effect on the dynamics. In fact, it can be proven that \(\ker L^{k}=\ker B^{k}\cap\ker D^{k}\) and thus both \(B^{k}x\) and \(D^{k}x\) are zero. Any change of variable \(\gamma=\theta+x\) will thus leave Eq. (13) formally unchanged. In this sense, we can say that the harmonic space is the _gauge_ of the simplicial Kuramoto.
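The decomposition of Eqs. (23)-(24) is straightforward to compute with orthogonal projectors. The sketch below does so for a random 1-cochain on an illustrative unweighted complex (a filled triangle \([1,2,3]\) plus the empty triangle \([1,3,4]\), an assumption made for the example), whose single 1-dimensional hole gives a one-dimensional harmonic space.

```python
import numpy as np

B1 = np.array([[-1, -1,  0,  0, -1],
               [ 1,  0, -1,  0,  0],
               [ 0,  1,  1, -1,  0],
               [ 0,  0,  0,  1,  1]])        # edges [12],[13],[23],[34],[14]
B2 = np.array([[1], [-1], [1], [0], [0]])    # triangle [123]
D0, D1 = B1.T, B2.T

rng = np.random.default_rng(4)
omega = rng.normal(size=5)                   # a random 1-cochain (e.g. natural frequencies)

P_df = B2 @ np.linalg.pinv(B2)               # projector onto Im B^2   (divergence-free part)
P_cf = D0 @ np.linalg.pinv(D0)               # projector onto Im D^0   (curl-free part)
P_h = np.eye(5) - P_df - P_cf                # projector onto ker L^1  (harmonic part)

w_df, w_cf, w_h = P_df @ omega, P_cf @ omega, P_h @ omega
assert np.allclose(w_df + w_cf + w_h, omega)
assert np.allclose(B1 @ w_df, 0) and np.allclose(D1 @ w_cf, 0)   # df is a cycle, cf a cocycle
print("harmonic component (one-dimensional, associated with the 1-hole [1,3,4]):",
      np.round(w_h, 3))
```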
### Simplicial order parameters and gradient flow
To measure the degree of synchronization of a phase configuration, it is common to employ the _order parameter_, which, for an unweighted network of \(N\) oscillators, is defined as
\[\widetilde{R}(\theta)=\frac{1}{N}\left|\sum_{\alpha=1}^{N}e^{i\theta_{\alpha} }\right|\,. \tag{26}\]
By definition, it is non-negative, and it reaches its maximum value of \(1\) when the oscillators are _fully synchronized_, i.e. when they all have the same phase \(\theta\propto\mathbb{1}\).
Figure 4: The form of the interactions from below of the simplicial Kuramoto is equivalent to ones of the standard node Kuramoto on manifold-like subfaces of the simplicial complex. **a.** From left to right: a 1-dimensional simplicial manifold, where every node is manifold-like as it is incident to exactly two edges with different orientations. In the middle is a simplicial complex where the 1-dimensional manifold-like regions are highlighted with different colors. On the right, the different ways in which a subface can produce an interaction different from a standard Kuramoto interaction: 1. the subface is free, 2. there are more than two oscillators incident to it, 3. there are two oscillators incident to it which are both coherently or incoherently oriented. **b.** If the complex is an oriented simplicial manifold, then the interaction term from below is equivalent to a node Kuramoto taking place on the 1-skeleton of the dual cell complex.
The order parameter defined in Eq. (26), however, is independent of the network structure underlying the dynamics. As first proposed in [11], we can generalize it in the following way
\[R(\theta)=\frac{N^{2}-2e+2\mathbb{1}^{\top}\cos(B_{1}^{\top}\theta)}{N^{2}}\,, \tag{27}\]
where \(e\) is the number of edges. While Eq. (27) reduces to the square of Eq. (26) in the case of a fully connected network, it is, in fact, a more natural measure of synchronization in the general case, as
\[\nabla_{\theta}R(\theta)\propto-B_{1}\sin(B_{1}^{\top}\theta)=I_{\uparrow}( \theta)\,, \tag{28}\]
i.e. \(R(\theta)\) is the potential function of the node Kuramoto dynamics, viewed as a gradient flow, i.e. \(\dot{\theta}=\nabla_{\theta}R(\theta)\). As proposed in [54], we can extend this intuition to the simplicial case and, neglecting constants that do not appear in the gradient, define the _simplicial order parameter_
\[R_{k}(\theta)=\frac{1}{C_{k}}\left(\mathbb{1}^{\top}W_{k-1}^{-1 }\cos(B^{k}\theta)+\mathbb{1}^{\top}W_{k+1}^{-1}\cos(D^{k}\theta)\right)\,, \tag{29}\]
with the normalization constant
\[C_{k}=\mathbb{1}^{\top}W_{k-1}^{-1}\mathbb{1}+\mathbb{1}^{\top}W_{k+1}^{-1} \mathbb{1}\,. \tag{30}\]
The weight matrices are added to further generalize the construction to weighted simplicial complexes, and generate the weighted simplicial Kuramoto model as the gradient flow
\[W_{k}\nabla_{\theta}R_{k}(\theta)\propto I_{\uparrow}(\theta)+I_{\downarrow} (\theta)\,. \tag{31}\]
This order parameter reaches a maximum value of \(1\) if \(\theta\in\ker B^{k}\cap\ker D^{k}\) i.e. when the phases cochain belongs to the harmonic space. This is a direct generalization of synchronization in the node Kuramoto model: in a connected network, \(\ker L^{0}=\operatorname{span}\left\{\mathbb{1}\,\right\}\), and the full synchronization condition is \(\theta\propto\mathbb{1}\), which is equivalent to the phase cochains being harmonic.
Hence, under this definition, full synchronization in the simplicial model does not mean that the phases are all equal, but that \(\theta\) is harmonic [54]. Moreover, as the \(k\)-th harmonic space of a simplicial complex is isomorphic to the \(k\)-th homology group, we can think of fully synchronized configurations as, intuitively, localized around the \(k\)-dimensional holes.
**Definition 1** (Full synchronization).: _A configuration \(\theta\) is said to be fully synchronized under the \(k\)-th order simplicial Kuramoto dynamics if \(\theta\in\ker L^{k}\)._
From the simplicial order parameter Eq. (29), we can extract two _partial order parameters_
\[R_{k}^{-}(\theta)\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{C_{k}^{-}}\mathbb{1}^{\top}W_{k-1}^{-1}\cos\left(B^{k}\theta\right) \tag{32a}\] \[R_{k}^{+}(\theta)\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{C_{k}^{+}}\mathbb{1}^{\top}W_{k+1}^{-1}\cos\left(D^{k}\theta\right)\,, \tag{32b}\]
where the normalization constants \(C_{k}^{\pm}=\mathbb{1}^{\top}W_{k\pm 1}^{-1}\mathbb{1}\) ensure that they take values in \([-1,1]\). In this way, it holds that
\[C_{k}R_{k}(\theta)=C_{k}^{+}R_{k}^{+}(\theta)+C_{k}^{-}R_{k}^{- }(\theta)\,, \tag{33}\]
and thus, aside from normalization, the order of a configuration is computed by measuring separately the local order induced respectively on \((k-1)\) and \((k+1)\)-simplices.
Notice that, by neglecting the constants in passing from Eq. (27) to Eq. (29), we have an order parameter that has values in the interval \([-1,1]\). This allows us to meaningfully distinguish two different types of synchronized configurations. We call a configuration of phases _phase synchronized_ when its order is close to \(1\) and _anti-phase synchronized_ when it is close to \(-1\). Phase synchronization generalizes to simplicial complexes the situation where close oscillators have similar phases (Fig. 5a), while in anti-phase synchronization close oscillators have opposite phases forming "checkerboard" patterns, resembling an antiferromagnetic Ising model (Fig. 5b).
Notice also how, in this work, with "phase synchronization" and "full synchronization" we refer to the static properties of a configuration of phases \(\theta\), with no information on how it evolves under the dynamics. The notion of a configuration that "stays synchronized" under the dynamics will be tackled with the concept of phase-locking in Section IV.1.
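A sketch of how the order parameters of Eqs. (29)-(33) can be evaluated in practice is given below, for edge phases on the unweighted complex of Example 1 (so that all weight matrices are identities and the weights drop out); the phase configuration is an arbitrary illustrative choice.

```python
import numpy as np

B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]])            # nodes x edges [12],[13],[23],[34]
B2 = np.array([[1], [-1], [1], [0]])         # edges x triangle [123]
D1 = B2.T
n0, n2 = 4, 1

def order_parameters(theta):
    # unweighted complex: C_k^- = n_{k-1}, C_k^+ = n_{k+1}
    C_minus, C_plus = n0, n2
    R_minus = np.cos(B1 @ theta).sum() / C_minus       # order induced on the nodes, Eq. (32a)
    R_plus = np.cos(D1 @ theta).sum() / C_plus         # order induced on the triangle, Eq. (32b)
    R = (C_minus * R_minus + C_plus * R_plus) / (C_minus + C_plus)   # Eq. (33)
    return R, R_minus, R_plus

theta = np.array([0.1, -0.05, 0.2, 0.0])               # an arbitrary edge configuration
print("R_1, R_1^-, R_1^+ =", np.round(order_parameters(theta), 3))
# a harmonic (fully synchronized) configuration would give R_1 = 1, cf. Definition 1
```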
## IV Equilibrium analysis
We now study the equilibrium properties of the simplicial Kuramoto model, extending to the simplicial cases concepts and results known in the node case.
* In Section IV.1, we extend the notion of phase-locking to simplicial complexes, we look at its geometric meaning and see how it reduces to standard node synchronization on manifold-like regions of the complex.
* In Section IV.2, we develop the necessary framework to discuss the equilibrium properties of the simplicial Kuramoto model, define reachable equilibria (Def. 6) and relate their existence to the presence of simplicial phase-locked configurations.
* In Section IV.3, we derive two bounds on the coupling strength providing necessary conditions for the existence of equilibria. We define the critical coupling (Def. 7) and characterize it as the solution to a linear optimization problem.
* In Section IV.4, we prove a simple lower bound on the coupling strength which gives a sufficient condition for the existence of reachable equilibria.
### Simplicial phase-locking
It directly follows from the Hodge decomposition of the simplicial Kuramoto model (see Eq. (25)) that studying its equilibrium properties is equivalent to separately studying the equilibria of the curl-free and divergence-free components. If these two converge to equilibrium, then the complete system will converge to a configuration evolving with constant harmonic angular speed, given by \(\omega_{\rm H}\) (\(\dot{\theta}=\dot{\theta}_{\rm H}=\omega_{\rm H}\)).
**Definition 2** (Simplicial phase-locking).: _We say that the \(k\)-th order simplicial Kuramoto dynamics is phase-locked from above if \(\dot{\theta}_{\rm df}=0\) and phase-locked from below when \(\dot{\theta}_{\rm cf}=0\)._
To motivate this definition, we consider the projections of the dynamics on lower and upper order simplices, defined as
\[\theta^{(+)}\stackrel{{\text{\tiny def}}}{{=}}D^{k}\theta=D^{k}\theta_{\rm df} \tag{34}\] \[\theta^{(-)}\stackrel{{\text{\tiny def}}}{{=}}B^{k}\theta=B^{k}\theta_{\rm cf}\,. \tag{35}\]
We can think of \(\theta^{(+)}\) and \(\theta^{(-)}\) as the discrete versions of, respectively, the curl and divergence of the vector field \(\theta\) and, we prove, they can equivalently capture simplicial phase-locking.
**Proposition 1** (Phase-locking equivalence).: _A configuration of phases \(\theta\) is phase-locked from above (from below) if and only if its projection onto higher (lower) dimensional simplices is in equilibrium._
\[\dot{\theta}_{\rm df}=0\iff\dot{\theta}^{(+)}=0,\quad\dot{\theta}_{\rm cf}=0\iff\dot{\theta}^{(-)}=0\,. \tag{36}\]
Proof.: If \(\dot{\theta}_{\rm df}=0\), then
\[\dot{\theta}^{(+)}=D^{k}\dot{\theta}_{\rm df}=0\,,\]
because of Eq. (34). If instead \(\dot{\theta}^{(+)}=0\), then
\[\dot{\theta}_{\rm df}=(D^{k})^{\dagger}D^{k}\dot{\theta}=(D^{k})^{\dagger} \dot{\theta}^{(+)}=0\,,\]
where \((D^{k})^{\dagger}\) is the weighted Moore-Penrose pseudoinverse [65] and \((D^{k})^{\dagger}D^{k}\) is the orthogonal projection operator onto \(\operatorname{Im}(D^{k})^{*}=\operatorname{Im}B^{k+1}\).
This result allows us to include in Definition 2 the standard concept of phase-locking for the node Kuramoto. In fact, the node Kuramoto on a heterogeneous network is classically said to be phase-locked when the phase difference of connected oscillators stays constant in time. This means that \(\dot{\theta}^{(+)}_{e}=(D^{0}\dot{\theta})_{e}=\dot{\theta}_{i}-\dot{\theta}_ {j}=0\) for every edge \(e=(i,j)\) in the network. According to Proposition 1, the divergence-free component of the dynamics is in equilibrium and the system is, by Definition 2, _simplicially_ phase-locked from above. If the network is connected, moreover, \(\dot{\theta}\) must be harmonic, i.e. \(\dot{\theta}\propto\mathbb{1}\), a situation which is usually named _frequency-synchronized_ as the frequencies of all oscillators coincide.
While phase-locking in the standard Kuramoto model means that all oscillators evolve with the same angular frequency, it is not clear how this extends to the simplicial case. In the case of phase-locking from below, we see that
\[\dot{\theta}^{(-)}=0\iff\frac{d}{dt}(B^{k}\theta)=B^{k}\dot{\theta}=0\,,\]
or equivalently that \(\dot{\theta}(t)\), the cochain containing the angular frequencies, is a weighted cycle (see Eq. (11)). This, in turn, means that for each \((k-1)\)-simplex \(\alpha\)
\[\sum_{i>\alpha}\xi_{\alpha,i}\dot{\theta}_{i}=0\,, \tag{37}\]
where \(\xi_{\alpha,i}\in\{-1,1\}\) is the relative orientation of \(k\)-simplex \(i\) with respect to its subface \(\alpha\). Interestingly, Eq. (37) corresponds to a flow conservation condition. Indeed, if we consider graphs with oscillating edges, at each node, the total phase flow of the incoming edges is the same as the total flow of the outgoing ones. In general, we can apply Eq. (37) to understand phase-locking from below in some particular situations.
Figure 5: Configurations of phases \(\theta\in C^{1}\), whose sine is shown here in color, which are phase (\(R_{1}(\theta)\approx 1\)) and anti-phase (\(R_{1}(\theta)\approx-1\)) synchronized in the case of a chain of edges, which is “quasi”-manifold as all subfaces except the endpoints are manifold-like, and a more general 1-dimensional simplicial complex. **a**. In the case of phase synchronization, close oscillators on manifold-like regions have similar phases. **b**. Anti-phase synchronized configurations, instead, are such that, on manifold-like regions, adjacent oscillators have opposite phases.
**Proposition 2** (Phase-locking on manifold-like regions).: _If \(\theta\) is phase-locked from below, i.e. \(\dot{\theta}^{(-)}=0\),_
1. _the connected manifold-like regions (see Section_ II.2_) of the complex (with respect to order_ \(k\)_) evolve with the same angular frequency and are thus frequency-synchronized;_
2. _oscillators with free subfaces (see Section_ II.2_) are frozen, i.e._ \(\dot{\theta}_{i}=0\)_._
Proof.: 1. At a manifold-like subface we have only two incident simplices, one coherently oriented and one incoherent, which we name respectively \(i\) and \(j\). Condition Eq. (37) gives us \(\dot{\theta}_{i}-\dot{\theta}_{j}=0\iff\dot{\theta}_{i}=\dot{\theta}_{j}\), so the incident oscillators are frequency-synchronized. 2. If oscillator \(i\) has a free subface then, by definition, that particular subface will be incident only to oscillator \(i\). Phase-locking at that subface implies that \(\dot{\theta}_{i}=0\), hence the thesis.
A simple application of this result is shown in Fig. 6a where the behavior of a phase-locked configuration can be inferred a priori by looking at the geometry of the graph. It can also be empirically seen that frequency-synchronized manifold regions exhibit phenomena akin to traveling waves localized around the \(k\)-holes of the complex when the homology is not trivial.
### Existence of equilibria
To study the equilibrium of the simplicial Kuramoto model, it is convenient to work with the \(\theta^{(\pm)}\) defined in Eqs. (34) and (35) as _projections_ of the phases onto upper and lower simplices. Their evolution equations [53] are readily obtained by multiplying Eq. (13) by \(D^{k}\) and \(B^{k}\) to get
\[\dot{\theta}^{(+)}=\omega^{(+)}-\sigma^{\uparrow}L^{k+1}_{\downarrow}\sin(\theta^{(+)}) \tag{38}\] \[\dot{\theta}^{(-)}=\omega^{(-)}-\sigma^{\downarrow}L^{k-1}_{\uparrow}\sin(\theta^{(-)})\,,\]
where \(L^{k+1}_{\downarrow},L^{k-1}_{\uparrow}\) are the half Laplacian matrices from Eq. (12), and we defined the projected natural frequencies as
\[\omega^{(+)}\stackrel{{\text{\tiny def}}}{{=}}D^{k}\omega, \qquad\omega^{(-)}\stackrel{{\text{\tiny def}}}{{=}}B^{k}\omega\,. \tag{39}\]
We then have two independent conditions for the existence of equilibrium solutions for each projection
\[\begin{cases}\dot{\theta}^{(+)}=0\iff L^{k+1}_{\downarrow}\sin(\theta^{(+)})=\frac{\omega^{(+)}}{\sigma^{\uparrow}}\\ \dot{\theta}^{(-)}=0\iff L^{k-1}_{\uparrow}\sin(\theta^{(-)})=\frac{\omega^{(-)}}{\sigma^{\downarrow}}\,.\end{cases} \tag{40}\]
Since \(\omega^{(+)}\in\operatorname{Im}D^{k}=\operatorname{Im}L^{k+1}_{\downarrow}\) and \(\omega^{(-)}\in\operatorname{Im}B^{k}=\operatorname{Im}L^{k-1}_{\uparrow}\), the equilibrium equations can be solved using the pseudoinverse
\[\begin{cases}\sin(\theta^{(+)})=(L^{k+1}_{\downarrow})^{\dagger}\frac{\omega^{(+)}}{\sigma^{\uparrow}}+x^{(+)}\\ \sin(\theta^{(-)})=(L^{k-1}_{\uparrow})^{\dagger}\frac{\omega^{(-)}}{\sigma^{\downarrow}}+x^{(-)}\end{cases}\quad, \tag{41}\]
for any weighted \((k+1)\)-cycle \(x^{(+)}\in\ker B^{k+1}\) and \((k-1)\)-cocycle \(x^{(-)}\in\ker D^{k-1}\). Moreover, by applying well-known properties of the Moore-Penrose pseudoinverse, we can simplify these expressions.
**Lemma 1**.: _We have the following equalities_
\[(L^{k+1}_{\downarrow})^{\dagger}\omega^{(+)}=(B^{k+1})^{\dagger}\omega,\ \ (L^{k-1}_{\uparrow})^{\dagger}\omega^{(-)}=(D^{k-1})^{\dagger}\omega\,. \tag{42}\]
Proof.: We have
\[(L^{k+1}_{\downarrow})^{\dagger}\omega^{(+)} =(D^{k}B^{k+1})^{\dagger}D^{k}\omega=(B^{k+1})^{\dagger}(D^{k})^ {\dagger}D^{k}\omega\] \[=(D^{k})^{*\dagger}(D^{k})^{\dagger}D^{k}\omega=(B^{k+1})^{ \dagger}\omega,\]
and, analogously, \((L^{k-1}_{\uparrow})^{\dagger}\omega^{(-)}=(D^{k-1})^{\dagger}\omega\).
**Definition 3** (Natural potentials).: _We call natural potentials of order \(k\) the quantities_
\[\beta^{(+)}=(B^{k+1})^{\dagger}\omega\in\mathbb{R}^{n_{k+1}},\ \beta^{(-)}=(D^{k-1})^{ \dagger}\omega\in\mathbb{R}^{n_{k-1}}\,. \tag{43}\]
The name _potential_ comes from the fact that, for \(k=1\), \(\beta^{(-)}\) is an assignment of potentials to the nodes such that for each edge the difference of potential between its endpoints (i.e. the voltage) is equal to \(\omega\). Moreover, it can be easily proven that they correspond to the higher and lower order signals that appear in the Hodge components of the natural frequency vector \(\omega\) as
\[\omega=B^{k+1}\beta^{(+)}+\omega_{\text{H}}+D^{k-1}\beta^{(-)}\,.\]
The values of the natural potentials are expressed in terms of the weighted Moore-Penrose pseudoinverse, which in Eq. (43) is computed with respect to the inner products \(W^{-1}_{k}\), \(k=1,\dots,K\), on the cochain spaces (Eq. (4)). To compute the natural potentials of a weighted simplicial complex, the weights thus have to be included correctly in the pseudoinverse. The explicit formula, written in terms of the standard unweighted pseudoinverse, is the following ([65, Remark 2])
\[\beta^{(+)} =W^{\frac{1}{2}}_{k+1}(W^{-\frac{1}{2}}_{k}B^{k+1}W^{\frac{1}{2}}_ {k+1})^{\dagger}W^{-\frac{1}{2}}_{k}\omega \tag{44}\] \[\beta^{(-)} =W^{\frac{1}{2}}_{k-1}(W^{-\frac{1}{2}}_{k}D^{k-1}W^{\frac{1}{2}}_ {k-1})^{\dagger}W^{-\frac{1}{2}}_{k}\omega\,. \tag{45}\]
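To make the construction concrete, here is a short Python sketch on a hypothetical unweighted toy complex; with unit weights, Eqs. (44)-(45) reduce to the plain Moore-Penrose pseudoinverse of Eq. (43), and the sketch also checks Lemma 1 and the Hodge decomposition of \(\omega\):

```python
# Natural potentials (Eq. (43)) on a hypothetical unweighted toy complex:
# 4 nodes, 5 edges, 2 filled triangles; W_k = I so the plain pseudoinverse applies.
import numpy as np

B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, 0], [-1, 0], [1, 1], [0, -1], [0, 1]], dtype=float)
D0, D1 = B1.T, B2.T

omega = np.random.default_rng(1).normal(size=5)        # k = 1 signal on the edges

beta_plus = np.linalg.pinv(B2) @ omega                 # beta^(+) = (B^2)^+ omega
beta_minus = np.linalg.pinv(D0) @ omega                # beta^(-) = (D^0)^+ omega

# Lemma 1: (L^2_down)^+ omega^(+) = (B^2)^+ omega
print(np.allclose(np.linalg.pinv(D1 @ B2) @ (D1 @ omega), beta_plus))

# Hodge decomposition of omega: the remainder is the harmonic component
omega_H = omega - B2 @ beta_plus - D0 @ beta_minus
L1 = B2 @ B2.T + D0 @ B1                               # L^1 = B^2 D^1 + D^0 B^1
print(np.allclose(L1 @ omega_H, 0.0))                  # True: omega_H lies in ker L^1
```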
Using the definition of natural potentials, we can rewrite the equilibrium conditions of Eq. (41) as
\[\begin{cases}\sin(\theta^{(+)})=\frac{\beta^{(+)}}{\sigma^{\uparrow}}+x^{(+)}\\ \sin(\theta^{(-)})=\frac{\beta^{(-)}}{\sigma^{\downarrow}}+x^{(-)}\end{cases}\quad, \tag{46}\]
where we see that a _necessary_ condition for the existence of a solution is for the right-hand sides to be bounded in \([-1,1]\) or, equivalently, for \(x^{(+)}\in\ker B^{k+1}\) and \(x^{(-)}\in\ker D^{k-1}\) to satisfy the following condition of _admissibility_.
**Definition 4** (Admissible cycles).: _We call a (weighted) cycle \(x^{(+)}\in\ker B^{k+1}\) admissible if_
\[\left\|\frac{\beta^{(+)}}{\sigma^{\uparrow}}+x^{(+)}\right\|_{\infty}\leq 1\,. \tag{47}\]
_We call a cocycle \(x^{(-)}\in\ker D^{k-1}\) admissible if_
\[\left\|\frac{\beta^{(-)}}{\sigma^{\downarrow}}+x^{(-)}\right\|_{\infty}\leq 1\,. \tag{48}\]
_With a slight abuse of notation, we call them both admissible cycles, and we name their sets \(\mathcal{A}^{(+)}\) and \(\mathcal{A}^{(-)}\)._
**Proposition 3** (Necessary condition from admissible cycles).: _A necessary condition for the existence of equilibrium solutions of the \((\pm)\) dynamics is that \(\mathcal{A}^{(\pm)}\neq\emptyset\)._
Intuitively, each cochain \(\beta^{(\pm)}/\sigma^{\updownarrow}\) should be _close_ to, respectively, the vector space of weighted \((k+1)\)-cycles (for \((+)\)) and \((k-1)\)-cocycles (for \((-)\)).
When both \(x^{(+)}\) and \(x^{(-)}\) are admissible, we can invert the sine function in Eq. (41) and get an explicit expression for the set of equilibrium configurations of the projections.
**Definition 5** (Equilibrium sets).: _We define the equilibrium sets of the projections as_
\[\begin{split}\mathcal{E}^{(+)}=\left\{(-1)^{s_{+}}\odot \arcsin\left(\frac{\beta^{(+)}}{\sigma^{\uparrow}}+x^{(+)}\right)+s_{+}\pi: \\ s_{+}\in\left\{0,1\right\}^{n_{k+1}},\ x^{(+)}\in\mathcal{A}^{(+ )}\right\}\\ \mathcal{E}^{(-)}=\left\{(-1)^{s_{-}}\odot\arcsin\left(\frac{ \beta^{(-)}}{\sigma^{\downarrow}}+x^{(-)}\right)+s_{-}\pi:\right.\\ \left.s_{-}\in\left\{0,1\right\}^{n_{k-1}},\ x^{(-)}\in\mathcal{ A}^{(-)}\right\},\end{split} \tag{49}\]
_where \(\odot\) is the component-wise Hadamard product._
Any \(\theta^{(\pm)}\in\mathcal{E}^{(\pm)}\) will thus be a fixed point of the projected dynamics from Eq. (38).
From a geometrical point of view, the equilibrium set \(\mathcal{E}^{(\pm)}\) is a subset of \(\mathbb{R}^{n_{k\pm 1}}\) and, for each given \(s_{\pm}\), is a manifold of dimension given by \(\dim\ker D^{k-1}\) for \((-)\) and \(\dim\ker B^{k+1}\) for \((+)\). For example, for the projection on the nodes of edge dynamics, we have that \(\dim\ker D^{0}\) is the number of connected components. If the simplicial complex is connected then \(\mathcal{E}^{(-)}\) is a collection of curves in an \(n_{0}\)-dimensional space (see Fig. 7b).
**Proposition 4**.: \(\dot{\theta}^{(\pm)}=0\) _if \(\theta^{(\pm)}\in\mathcal{E}^{(\pm)}\) modulo \(2\pi\)._
This condition, however, is only necessary and does not fully characterize the equilibrium configurations of the projected dynamics. To see why, notice that the dynamics for the \((-)\) component (the same holds for \((+)\))
Figure 6: Simplicial Kuramoto dynamics on a simple graph with two holes (drawn here as a continuous space), where the edges are identical oscillators (\(\omega=\mathbb{1}\)) with starting phase \(\theta(0)=0\mathbb{1}\). **a.** The panel shows a diagram of the graph, highlighting the non-manifold points responsible for the non-triviality of the dynamics. The different branches of the graph are colored according to their frequency in a phase-locked (Definition 2) state. In particular, the oscillation is frequency-synchronized on the two holes and exhibits traveling waves, while it is frozen (\(\dot{\theta}=0\)) on the branch connecting them. **b.** A few snapshots of the dynamics on the graph are shown, with edges colored according to the sine of their phases. In the last frame, the effect of the non-manifold points is evident. **c.** The dynamics is run for different values of the coupling strength, and the absolute value of the frequency (\(|\dot{\theta}|\)) at the final integration time is shown with the edges’ widths. The last frame shows how the system reaches the same phase-locked configuration predicted with Eq. (37) and depicted in panel **a**.
in Eq. (38) states that the time derivative of \(\theta^{(-)}\) will be the vector \(\omega^{(-)}-\sigma^{\downarrow}L_{\uparrow}^{k-1}\sin(\theta^{(-)})\), which always belongs to \(\operatorname{Im}B^{k}\). This, together with the initial configuration \(\theta_{0}^{(-)}=B^{k}\theta_{0}\in\operatorname{Im}B^{k}\), means that the trajectories of the dynamics live in the subspace \(\operatorname{Im}B^{k}\). Only the equilibria of \(\mathcal{E}^{(-)}\) which also lie in \(\operatorname{Im}B^{k}\) are therefore _reachable_ by the dynamics.
**Definition 6** (Reachable equilibria).: _We define the sets of reachable equilibria as_
\[\mathcal{R}^{(-)} =\mathcal{E}^{(-)}\cap\operatorname{Im}B^{k} \tag{51}\] \[\mathcal{R}^{(+)} =\mathcal{E}^{(+)}\cap\operatorname{Im}D^{k}\,. \tag{52}\]
We then have our final result.
**Proposition 5** (Equivalence phase-locking reachability).: _The curl-free (divergence-free) component admits equilibria if and only if \(\theta^{(-)}\) (resp. \(\theta^{(+)}\)) admits reachable equilibria._
The framework we developed in this section can be fruitfully exploited to independently discuss the existence of equilibrium configurations of the divergence-free and of the curl-free components of the simplicial Kuramoto dynamics (Eq. (25)). In particular, Proposition 5 tells us that such configurations will exist if and only if there are reachable equilibria for the projections. These, in turn, are subsets of the larger sets of fixed points (Definition 5), whose explicit expression is known and whose non-emptiness can thus be controlled more easily, giving us necessary conditions for equilibrium.
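As a sanity check of Proposition 4 and of the role of reachability, the following sketch (hypothetical unweighted toy complex, edge dynamics) builds one element of \(\mathcal{E}^{(-)}\) with \(s_{-}=0\) and \(x^{(-)}=0\), verifies that it is a fixed point of the projected dynamics, and measures its distance from the reachable subspace \(\operatorname{Im}B^{1}\):

```python
# One element of E^(-) (Definition 5, s_- = 0, x^(-) = 0) for the edge dynamics on a
# hypothetical unweighted toy complex, with a numerical check of Proposition 4.
import numpy as np

B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
D0 = B1.T

omega = np.random.default_rng(2).normal(size=5)
beta_minus = np.linalg.pinv(D0) @ omega
sigma = 2.0 * np.linalg.norm(beta_minus, np.inf)       # makes x^(-) = 0 admissible

theta_minus = np.arcsin(beta_minus / sigma)            # candidate equilibrium in E^(-)
rhs = B1 @ omega - sigma * (B1 @ B1.T) @ np.sin(theta_minus)
print("||theta_dot^(-)||:", np.linalg.norm(rhs))       # ~0: a fixed point (Prop. 4)

# Reachability (Definition 6): distance of theta_minus from Im B^1
proj = B1 @ np.linalg.pinv(B1) @ theta_minus           # orthogonal projection onto Im B^1
print("distance from Im B^1:", np.linalg.norm(theta_minus - proj))  # generally nonzero
```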
### Necessary conditions for phase-locking
In this section, we investigate the relation between the equilibrium properties of the simplicial Kuramoto model and the value of the coupling strength. It is natural to think that having a stronger interaction would make it easier for the system to reach a synchronized configuration, as the intrinsic differences among the oscillators, encoded by their natural frequencies, become secondary. This intuition is extensively confirmed by numerous results proven for the node Kuramoto (see [66, 11, 67]), some of which we extend to the simplicial case. We will thus derive bounds on the coupling strength which give us necessary and sufficient conditions for the existence of reachable equilibria, i.e. for phase-locking from below and from above. Note that all the results below refer to the _existence_ of phase-locked configurations and provide no information about whether the dynamics will actually converge to them.
Let us consider a simplicial complex whose \(m\)-simplices have weights \(w_{1}^{m},\dots,w_{n_{m}}^{m}\) for any order \(m\), and focus on the \(k\)-th order simplicial Kuramoto dynamics. The easiest conditions to derive are those ensuring that there are no admissible cycles, \(\mathcal{A}^{(\pm)}=\emptyset\). If this holds, then the equilibrium sets are empty, \(\mathcal{E}^{(\pm)}=\emptyset\) (Definition 5), and, by inclusion, so are the reachable sets, \(\mathcal{R}^{(\pm)}=\emptyset\), i.e. there are no reachable equilibria and no phase-locked configurations.
**Proposition 6** (Sufficient condition for no phase-locking).: _If_
\[\sigma^{\updownarrow}<\sigma_{s}^{(\pm)}\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{\sqrt{n_{k\pm 1}^{w}}}\left\|\beta^{(\pm)}\right\|_{w^{k\pm 1}}\,, \tag{53}\]
_where_
\[n_{k\pm 1}^{w}\stackrel{{\text{\tiny def}}}{{=}}\sum_{i=1}^{n_{k \pm 1}}\frac{1}{w_{i}^{k\pm 1}}\,, \tag{54}\]
_then \(\mathcal{E}^{(\pm)}=\emptyset\) and the \((\pm)\) projection admits no equilibria._
Proof.: First, see that we can bound the weighted \(w^{k\pm 1}\) norm [Eq. (5)] with the \(\infty\)-norm:
\[\left\|v\right\|_{w^{k\pm 1}} =\sqrt{\sum_{i=1}^{n_{k\pm 1}}\frac{1}{w_{i}^{k\pm 1}}v_{i}^{2}} \leq\sqrt{n_{k\pm 1}^{w}\left(\max_{i}v_{i}^{2}\right)}\] \[\leq\sqrt{n_{k\pm 1}^{w}}\left\|v\right\|_{\infty}.\]
With this in mind, we can write
\[\left\|\frac{\beta^{(\pm)}}{\sigma^{\updownarrow}}+x^{(\pm)}\right\|_{\infty} \geq\frac{1}{\sqrt{n_{k\pm 1}^{w}}}\left\|\frac{\beta^{(\pm)}}{\sigma^{\updownarrow}}+x^{(\pm)}\right\|_{w^{k\pm 1}}.\]
The two terms inside the norm are orthogonal with respect to the inner product \(W_{k\pm 1}^{-1}\) because, in the \((-)\) case, \(x^{(-)}\in\ker D^{k-1}\) and
\[\beta^{(-)}\in\operatorname{Im}(D^{k-1})^{\dagger}=\operatorname{Im}(D^{k-1} )^{*}=(\ker D^{k-1})^{\perp}\,,\]
thus
\[\frac{1}{\sqrt{n_{k\pm 1}^{w}}}\left\|\frac{\beta^{(\pm)}}{\sigma^{\updownarrow}}+x^{(\pm)}\right\|_{w^{k\pm 1}}\] \[=\frac{1}{\sqrt{n_{k\pm 1}^{w}}}\sqrt{\left\|\frac{\beta^{(\pm)}}{\sigma^{\updownarrow}}\right\|_{w^{k\pm 1}}^{2}+\left\|x^{(\pm)}\right\|_{w^{k\pm 1}}^{2}}\] \[\geq\frac{1}{\sqrt{n_{k\pm 1}^{w}}}\left\|\frac{\beta^{(\pm)}}{\sigma^{\updownarrow}}\right\|_{w^{k\pm 1}}\,.\]
If this last term is strictly greater than \(1\) then there will be no admissible cycles and, therefore, no equilibria.
The condition in Proposition 6 is easy to check and provides a way to tune the coupling constants to make the set of admissible cycles empty, and thus phase-locking (from above or from below) impossible. It is now natural to ask what is the minimum value of \(\sigma\) such that there are admissible cycles, to get a sharper necessary condition for the existence of phase-locked configurations.
**Definition 7** (Critical coupling).: _We call critical coupling \(\sigma_{*}^{(\pm)}\) for the \((\pm)\) projection the minimum value of \(\sigma\) such that there are admissible cycles \((\mathcal{A}^{(\pm)}\neq\emptyset)\)._
It follows directly from the definition that \(\sigma_{s}^{(\pm)}<\sigma_{*}^{(\pm)}\). To find its value, notice first that there can be admissible cycles \(x^{(\pm)}\) (Def. 4) if and only if
\[\min_{x^{(+)}\in\ker B^{k+1}}\left\|\frac{\beta^{(+)}}{\sigma^{\uparrow}}+x^{(+)}\right\|_{\infty}\leq 1\,,\qquad\min_{x^{(-)}\in\ker D^{k-1}}\left\|\frac{\beta^{(-)}}{\sigma^{\downarrow}}+x^{(-)}\right\|_{\infty}\leq 1\,. \tag{55}\]
By manipulating this expression, we can get the exact value of the critical coupling as a solution to a linear optimization problem.
**Theorem 2** (Value of the critical coupling).: _The critical coupling \(\sigma_{*}^{(\pm)}\) can be found in the solution of a linear optimization problem_
\[\sigma_{*}^{(+)} =\min_{x\in\ker B^{k+1}}\left\|\beta^{(+)}+x\right\|_{\infty} \tag{56}\] \[\sigma_{*}^{(-)} =\min_{x\in\ker D^{k-1}}\left\|\beta^{(-)}+x\right\|_{\infty}\,, \tag{57}\]
_which corresponds to the \(\infty\)-distance of \(\beta^{(\pm)}\) from the space of weighted \((k+1)\)-cycles (resp. \((k-1)\)-cocycles)._
Proof.: Using Eq. (55), we first show that the critical couplings \(\sigma_{*}^{(-)},\sigma_{*}^{(+)}\) satisfy, respectively.
\[\min_{x\in\ker D^{k-1}}\left\|\frac{\beta^{(-)}}{\sigma_{*}^{(-)} }+x\right\|_{\infty} =1, \tag{58}\] \[\min_{x\in\ker B^{k+1}}\left\|\frac{\beta^{(+)}}{\sigma_{*}^{(+)} }+x\right\|_{\infty} =1\,. \tag{59}\]
If the statement were false and
\[\min_{x\in\ker D^{k-1}}\left\|\frac{\beta^{(-)}}{\sigma_{*}^{(-)}}+x\right\|_ {\infty}=a\,,\]
with \(0<a<1\), then we could divide both sides by \(a\) and get
\[\min_{x\in\ker D^{k-1}}\left\|\frac{\beta^{(-)}}{a\sigma_{*}^{(-)}}+\frac{1}{ a}x\right\|_{\infty}=1\,,\]
which means that for \(\sigma=a\sigma_{*}^{(-)}<\sigma_{*}^{(-)}\) there is an admissible cycle \(\frac{x}{a}\), which is impossible because we assumed that \(\sigma_{*}^{(-)}\) is the smallest coupling with that property.
Then, multiplying both sides of Eq. (58) by \(\sigma_{*}^{(-)}\), we have
\[\min_{x\in\ker D^{k-1}}\left\|\beta^{(-)}+\sigma_{*}^{(-)}x\right\|_{\infty}= \sigma_{*}^{(-)}\,.\]
It is now possible to perform a linear change of variable in the optimization problem, \(\sigma_{*}^{(-)}x\to\widetilde{x}\), which changes the position of the minimizer but not the optimal value. In this way, \(\sigma_{*}^{(-)}\) disappears from the left-hand side and is obtained as the value of the optimization problem above.
In some special cases, the critical coupling admits a closed formula. For example, for the \((-)\) projection of the simplicial Kuramoto dynamics on the edges of a connected simplicial complex, the set of admissible vectors and the critical coupling can both be found explicitly.
**Theorem 3** (Critical coupling in the edge simplicial Kuramoto).: _For the \((-)\) component of the edge dynamics on a connected simplicial complex, it holds that_
\[x^{(-)}\in\mathcal{A}^{(-)}\iff x^{(-)}=x\mathbb{1}\,, \tag{60}\]
_where_
\[-\min\left(\frac{\beta^{(-)}}{\sigma^{\downarrow}}\right)-1\leq x\leq-\max \left(\frac{\beta^{(-)}}{\sigma^{\downarrow}}\right)+1\,, \tag{61}\]
_and_
\[\sigma_{*}^{(-)}=\frac{\max\left(\beta^{(-)}\right)-\min\left(\beta^{(-)} \right)}{2}\,. \tag{62}\]
Proof.: If the complex is connected we have that \(D^{0}\) has a \(1\)-dimensional kernel given by \(\operatorname{span}\{\mathbb{1}\}\). This means that there are admissible vectors if and only if
\[\left\|\frac{\beta^{(-)}}{\sigma^{\downarrow}}+x\mathbb{1}\right\|_{\infty} \leq 1\iff-1\leq\frac{\beta_{i}^{(-)}}{\sigma^{\downarrow}}+x\leq 1\,,\]
\(\forall i=1,\ldots,n_{0}\), which holds if and only if Eq. (61) holds, and has solutions only when
\[\max\left(-\frac{\beta^{(-)}}{\sigma^{\downarrow}}\right)-1\leq \min\left(-\frac{\beta^{(-)}}{\sigma^{\downarrow}}\right)+1\] \[\iff\sigma^{\downarrow}\geq\frac{\max(\beta^{(-)})-\min(\beta^{(- )})}{2}\,.\]
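The critical coupling of Theorem 2 is easy to compute in practice. The following sketch, on a hypothetical connected toy complex, casts the \(\infty\)-norm minimization as a linear program and compares the result with the closed formula of Theorem 3:

```python
# sigma_*^(-) for the edge dynamics on a hypothetical connected toy complex,
# computed both as the linear program of Theorem 2 and via Theorem 3.
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
D0 = B1.T
omega = np.random.default_rng(3).normal(size=5)
beta_minus = np.linalg.pinv(D0) @ omega

# min_{x in ker D^0} ||beta^(-) + x||_inf as an LP in the variables (c, t), with x = N c
N = null_space(D0)                                  # here span{1}: the complex is connected
n, m = N.shape
cost = np.zeros(m + 1); cost[-1] = 1.0              # minimize t
A_ub = np.block([[ N, -np.ones((n, 1))],            #  beta + N c <= t
                 [-N, -np.ones((n, 1))]])           # -(beta + N c) <= t
b_ub = np.concatenate([-beta_minus, beta_minus])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + 1))

sigma_star_lp = res.fun
sigma_star_thm3 = (beta_minus.max() - beta_minus.min()) / 2
print(sigma_star_lp, sigma_star_thm3)               # the two values coincide
```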
It is now worth noting that the properties of the projections \(\theta^{(+)}=D^{k}\theta\) and \(\theta^{(-)}=B^{k}\theta\) are not entirely symmetrical, as the space of \((k+1)\)-cycles can be trivial (\(\ker B^{k+1}=\{0\}\)) and thus there are situations in which the space of admissible cycles is simply \(\mathcal{A}^{(+)}=\{0\}\). The same cannot be said for \(\ker D^{k-1}\) resulting in the down projection \((-)\) being generally harder to treat. To shed more light on this, let us consider the \(k\)-th order dynamics on a simplicial complex \(\mathcal{X}\) which has at least one \(k\)-simplex. From Definition 4, the existence of equilibria for both of them depends on the presence or absence of admissible cycles which, respectively, must belong to \(\ker B^{k+1}\) and \(\ker D^{k-1}\). The asymmetry stems from the fact that \(D^{k-1}\) cannot have a trivial kernel because
\[\ker D^{k-1}=(\operatorname{Im}B^{k})^{\perp}\underset{\text{Hodge}}{=}\operatorname{Im}D^{k-2}\oplus\ker L^{k-1}\,, \tag{63}\]
and thus
* if \(k=1\) then \(\ker D^{0}=\ker L^{0}\), which is non-trivial as there is at least one connected component;
* if \(k>1\) then \(\dim\ker D^{k-1}\geq\dim\operatorname{Im}D^{k-2}\) which is nonzero because, by inclusion, there is a nonzero number of \((k-1)\)-simplices and \(D^{k-2}\) is not an all-zero matrix.
The same cannot be said for \(B^{k+1}\) as, in general, there is no restriction on the number of \((k+1)\)-cycles. In fact, along the same lines as Eq. (63),
\[\ker B^{k+1}=(\operatorname{Im}D^{k})^{\perp}=\operatorname{Im}B^{k+2}\oplus \ker L^{k+1}\,,\]
which is trivial when there are no \((k+2)\)-simplices and no \((k+1)\)-holes. Therefore, the case of \(\ker B^{k+1}=\{0\}\) deserves a special treatment.
**Theorem 4** (No higher-order cycles).: _If there are no \((k+1)\)-cycles ( \(\ker B^{k+1}=\{0\}\)) then the following properties hold:_
1. _if_ \(\mathcal{A}^{(+)}\neq\emptyset\) _then_ \(\mathcal{A}^{(+)}=\{0\}\)_;_
2. _if_ \(\mathcal{A}^{(+)}\neq\emptyset\) _then the equilibrium set is a discrete set of points given by_ \[\mathcal{E}^{(+)}=\left\{(-1)^{s_{+}}\odot\arcsin\left(\frac{\beta^{(+)}}{ \sigma^{\uparrow}}\right)+s_{+}\pi:s\in\{0,1\}^{n_{k+1}}\right\}\,;\] (64)
3. \(\sigma_{*}^{(+)}=\left\|\beta^{(+)}\right\|_{\infty}\)_;_
4. _All equilibria are reachable_ \(\mathcal{E}^{(+)}=\mathcal{R}^{(+)}\)_._
Proof.: We prove each statement below.
1. It is trivial because \(0\) is the only vector in \(\ker B^{k+1}\).
2. Directly follows from Eq. (46) with \(x^{(+)}=0\).
3. \(0\) is the only vector in \(\ker B^{k+1}\) so it will be admissible if and only if
\[\left\|\frac{\beta^{(+)}}{\sigma^{\uparrow}}\right\|_{\infty}\leq 1\,.\]
The smallest value of \(\sigma^{\uparrow}\) for which this holds is \(\sigma^{\uparrow}=\left\|\beta^{(+)}\right\|_{\infty}\).
According to Definition 6, an equilibrium is reachable for the \((+)\) projection if it belongs to \(\operatorname{Im}D^{k}\). In this case,
\[\operatorname{Im}D^{k} =(\ker(D^{k})^{*})^{\perp}\] \[=(\ker B^{k+1})^{\perp}=\{0\}^{\perp}=\mathbb{R}^{n_{k+1}}\,,\]
hence the thesis.
Notice how, in this case, the critical coupling \(\sigma_{*}^{(+)}\) is the transition value both for the existence of admissible cycles (by Definition 4) and for the existence of equilibria of the divergence-free component (by Proposition 5). This means that \(\sigma^{\uparrow}\geq\sigma_{*}^{(+)}\) is a necessary and sufficient condition for the existence of phase-locked configurations.
Figure 7: **a.** Fixing the natural frequencies \(\omega\), we simulate the edge simplicial Kuramoto model on a small simplicial complex with \(20\) different initial phase configurations, for values of \(\sigma\in[0,1]\), and compute the time-averaged partial order parameters \(R_{1}^{-}\) (left), \(R_{1}^{+}\) (right) from \(t=0\) to \(t=1000\). The vertical lines correspond to the values of \(\sigma_{s}\) (Proposition 6), \(\sigma_{*}\) (Theorem 2), \(\sigma_{\infty}\) (Eq. (70)) and \(\sigma_{\mathrm{fp}}\) (Theorem 5). If we identify the last “jump” in the order with the emergence of reachable equilibria, then we see how the special values of \(\sigma\) we derived actually bound its value from below and from above. As predicted by Theorem 4, the equilibrium transition value for the \((+)\) projection is exactly \(\sigma_{*}^{(+)}=\sigma_{\infty}^{(+)}\). **b.** The meaning of the different values of \(\sigma\) is depicted by numerically computing the equilibrium set \(\mathcal{E}^{(-)}\subset\mathbb{R}^{3}\) (Definition 5) for the edge dynamics on a \(2\)-simplex. We see how the equilibrium set \(\mathcal{E}^{(-)}\) is empty for \(\sigma=\sigma_{s}^{(-)}\), it first appears as a discrete set of points for \(\sigma=\sigma_{*}^{(-)}\) and grows, intersecting the plane \(\operatorname{Im}B^{1}\) for \(\sigma\geq\sigma_{\mathrm{fp}}^{(-)}\) giving rise to a reachable equilibrium (Definition 6), marked here as a black dot.
From this general result, we can obtain _for free_ the well-known [11] exact equilibrium transition for the node Kuramoto on trees as, by definition, they have no 1-cycles.
### Sufficient condition for phase-locking
Necessary conditions for equilibrium are useful in a setting where we are interested in pushing the system to a _non-equilibrium_ state. If, in fact, we are able to tune the coupling strength below one of the bounds derived above (\(\sigma_{s}\) or \(\sigma_{*}\)), we are guaranteed that the system will not reach a phase-locked configuration. If, however, we want the system to be phase-locked, we need _sufficient_ conditions that can ensure the existence of such equilibria.
An elegant bound on \(\sigma^{\updownarrow}\), which both ensures the existence of equilibria and is easy to compute, can be found by generalizing one of the results proven in [68, Theorem 4.7] with the proof technique first introduced in [11] for the node Kuramoto.
**Theorem 5** (Sufficient condition for the existence of stable reachable equilibria).: _For any \(\gamma\in(0,\pi/2)\), if_
\[\sigma^{\updownarrow}\geq\sigma^{(\pm)}_{\mathrm{fp}}(\gamma)\stackrel{{\text{\tiny def}}}{{=}}\frac{\sqrt{\max_{i}w_{i}^{k\pm 1}}}{\sin(\gamma)}\left\|\beta^{(\pm)}\right\|_{w^{k\pm 1}}\,, \tag{65}\]
_there exists an asymptotically stable reachable equilibrium for the \((\pm)\) dynamics such that_
\[\left\|\theta^{(\pm)}\right\|_{\infty}\leq\gamma\,. \tag{66}\]
Proof.: The proof directly follows the constructions in [11, Theorem 2] by rewriting the equilibrium equation for the projection dynamics as a fixed point equation \(x=f(x)\) (hence the subscript _fp_ in \(\sigma\)) and finding \(\sigma^{\updownarrow}\) such that \(f\) is a continuous function from a convex compact set to itself. Brouwer's fixed point theorem then provides the existence of a fixed point (a reachable equilibrium) in this set. The full proof can be found in Appendix B.
Four important observations should be highlighted from this result:
1. it is _always_ possible to tune the coupling strengths in order for the curl-free and divergence-free components to independently reach equilibrium;
2. after a certain value of the coupling strength, these equilibrium configurations always exist and at least one of them is close to the origin;
3. increasing the coupling will also increase the closeness of the equilibrium to the origin;
4. we see from the definition of the simplicial order parameter Eq. (29) that, if each component of the projection is close to 0, then the configuration will be such that \(R_{k}\approx 1\) i.e. phase synchronized (Section III.5).
We also highlight that, when the complex is unweighted, the expression of the bound becomes
\[\sigma^{(\pm)}_{\mathrm{fp}}(\gamma)=\frac{1}{\sin(\gamma)}\left\|\beta^{(\pm )}\right\|_{2}\,. \tag{67}\]
For the node Kuramoto and for \(\gamma=\frac{\pi}{2}\), it reduces to
\[\sigma^{(+)}_{\mathrm{fp}}=\left\|B_{1}^{\dagger}\omega\right\|_{2}=\left\|B_{1}^{\top}(L^{0})^{\dagger}\omega\right\|_{2}\,, \tag{68}\]
which, when approximated, gives the well-known bound
\[\sigma\geq\frac{1}{\lambda_{2}(L^{0})}\left\|B_{1}^{\top}\omega\right\|_{2}\,, \tag{69}\]
where \(\lambda_{2}(L^{0})\) is the Fiedler eigenvalue of the network. Another interesting observation is that
\[\left\|\beta^{(+)}\right\|_{w^{k+1}}^{2} =\left\langle(B^{k+1})^{\dagger}\omega,(B^{k+1})^{\dagger}\omega \right\rangle_{w^{k+1}}\] \[=\left\langle\omega,(B^{k+1}D^{k})^{\dagger}\omega\right\rangle_{ w^{k}}\] \[=\left\langle\omega,(L_{\uparrow}^{k})^{\dagger}\omega\right\rangle _{w^{k}}\,,\]
which is exactly the effective resistance of \(\omega\) as defined in [69]. In other words, to have equilibrium, the coupling must overcome the "structural" resistance of the simplicial complex, encoded in both the incidence structure (\(L_{\ddagger}^{k}\)) and the natural frequencies. This is a powerful observation because it means that it might be possible to define pairs of structures and frequencies to reach particular types of dynamics or control the frequencies to move across regimes.
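A quick numerical check of this identity, on a hypothetical unweighted toy complex:

```python
# Check that ||beta^(+)||^2 equals the effective resistance <omega, (L^1_up)^+ omega>
# on a hypothetical unweighted toy complex with two filled triangles.
import numpy as np

B2 = np.array([[1, 0], [-1, 0], [1, 1], [0, -1], [0, 1]], dtype=float)
omega = np.random.default_rng(4).normal(size=5)

beta_plus = np.linalg.pinv(B2) @ omega
L1_up = B2 @ B2.T                                    # L^1_up = B^2 D^1 (unweighted)
print(beta_plus @ beta_plus, omega @ np.linalg.pinv(L1_up) @ omega)   # equal
```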
Finally, while Theorem 5 ensures the existence of reachable equilibria, in practice its value \(\sigma^{(\pm)}_{\mathrm{fp}}\) tends to be conservative and to overestimate the minimum value of \(\sigma\) for which stable reachable equilibria exist. In perfect analogy with the node Kuramoto literature [67], it is often seen in practice that
\[\sigma^{(\pm)}_{\infty}\stackrel{{\text{\tiny def}}}{{=}}\left\| \beta^{(\pm)}\right\|_{\infty} \tag{70}\]
is closer to the true reachability threshold, and thus provides a sharper bound. This value, moreover, exactly coincides with the reachability transition in some special cases, such as in Thm. 4. The different bounds on \(\sigma\) found in Sections IV.3 and IV.4 are shown in Fig. 7, where they are related to the partial order parameters on a small simplicial complex. We see how \(\sigma_{*}\) and \(\sigma_{\mathrm{fp}}\) actually bound the point of the last jump, corresponding to the transition value after which the dynamics admits reachable equilibria.
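For an unweighted complex these thresholds have simple closed forms and can be compared directly; a minimal sketch, on a hypothetical toy complex and for the \((-)\) projection of the edge dynamics:

```python
# The thresholds sigma_s (Prop. 6), sigma_inf (Eq. (70)) and sigma_fp (Eq. (67)) for
# the (-) projection of the edge dynamics on a hypothetical unweighted toy complex.
import numpy as np

B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
omega = np.random.default_rng(5).normal(size=5)
beta_minus = np.linalg.pinv(B1.T) @ omega

sigma_s = np.linalg.norm(beta_minus) / np.sqrt(B1.shape[0])        # Prop. 6 (unweighted)
sigma_inf = np.linalg.norm(beta_minus, np.inf)                     # Eq. (70)
sigma_fp = np.linalg.norm(beta_minus) / np.sin(np.pi / 2 - 0.01)   # Eq. (67), gamma ~ pi/2
print(sigma_s, sigma_inf, sigma_fp)   # sigma_s <= sigma_* <= sigma_fp, cf. Fig. 7
```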
## V Coupling the Hodge Components
The simplicial Kuramoto model of Eq. (13) provides a natural way to formulate synchronization dynamics
of topological signals interacting on a simplicial complex [70]. Building upon its form, many different variants with interesting behaviors can be formulated. The first models we consider are those for which the Hodge decomposition of the dynamics does not lead to decoupled equations.
* In Section V.1, we review the explosive model, proposed in [53], which couples the Hodge components through the order parameters. We state the model and propose a similar variant, obtained as a gradient flow, which lends itself to an easier analytical treatment.
* In Section V.2, we consider Sakaguchi-Kuramoto type models, where the dynamics is frustrated by an external parameter. This classical variant is extended to the simplicial case in two ways: the first directly follows from the node Kuramoto and the second, proposed in [54], adds frustration in an orientation-independent fashion.
### Explosive simplicial Kuramoto
One of the early works on the simplicial Kuramoto model proposed to couple the different Hodge components of the dynamics with factors depending on the partial order parameters [53]. In that work, with the partial order parameters
\[\begin{split} R_{k}^{[+]}(\theta)&=\frac{1}{n_{k+1 }}\left|\sum_{\alpha=1}^{n_{k+1}}e^{i(D^{k}\theta)_{\alpha}}\right|\\ R_{k}^{[-]}(\theta)&=\frac{1}{n_{k-1}}\left|\sum_{ \alpha=1}^{n_{k-1}}e^{i(B^{k}\theta)_{\alpha}}\right|\,,\end{split} \tag{71}\]
for a \(k\)-cochain \(\theta\), the following dynamical system is defined
\[\dot{\theta}=\omega -\sigma^{\downarrow}R_{k}^{[+]}(\theta)D^{k-1}\sin\left(B^{k} \theta\right)\] \[-\sigma^{\uparrow}R_{k}^{[-]}(\theta)B^{k+1}\sin\left(D^{k} \theta\right)\,, \tag{72}\]
which was shown to display explosive transitions in the order parameters \(R_{k}^{[\pm]}\) when varying \(\sigma\).
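For reference, a Python sketch of Eq. (72) with the order parameters of Eq. (71), run on a small hypothetical complex (two filled triangles) and probing a few coupling strengths; this is only an illustration, not the setup used in the figures:

```python
# The explosive model of Eq. (72) with the order parameters of Eq. (71), on a
# hypothetical toy complex (4 nodes, 5 edges, 2 filled triangles), k = 1.
import numpy as np
from scipy.integrate import solve_ivp

B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, 0], [-1, 0], [1, 1], [0, -1], [0, 1]], dtype=float)
D0, D1 = B1.T, B2.T

rng = np.random.default_rng(6)
omega = rng.normal(size=5)
theta0 = rng.uniform(-np.pi, np.pi, size=5)

R_up = lambda th: np.abs(np.mean(np.exp(1j * (D1 @ th))))    # R_1^[+], Eq. (71)
R_down = lambda th: np.abs(np.mean(np.exp(1j * (B1 @ th))))  # R_1^[-], Eq. (71)

def rhs(sigma):
    return lambda _, th: (omega
                          - sigma * R_up(th) * D0 @ np.sin(B1 @ th)
                          - sigma * R_down(th) * B2 @ np.sin(D1 @ th))

for sigma in (0.2, 1.0, 5.0):
    th = solve_ivp(rhs(sigma), (0.0, 300.0), theta0, rtol=1e-8).y[:, -1]
    print(f"sigma={sigma:>4}:  R^[-]={R_down(th):.3f}  R^[+]={R_up(th):.3f}")
```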
The partial order parameters used by [53] (Eq. (71)) are different from the ones defined here in Eqs. (32a) and (32b). Indeed, our formulation allows for negative values (Section III.5) as well as a derivation of the simplicial Kuramoto dynamics as a gradient flow. We show here that a nonlinearity introduced into the potential allows us to formulate an explosive model analogous to [53]. From the two partial order parameters defined in Eqs. (32a) and (32b), we can consider their product to define the _explosive simplicial Kuramoto_ model as
\[\dot{\theta}=C_{k}^{+}C_{k}^{-}W_{k}\nabla_{\theta}(R_{k}^{+}R_{k}^{-})\,, \tag{73}\]
whose explicit dynamics is
\[\dot{\theta}=\omega-\sigma^{\downarrow}R_{k}^{+}(\theta)D^{k-1} \sin\left(B^{k}\theta\right) \tag{74}\] \[-\sigma^{\uparrow}R_{k}^{-}(\theta)B^{k+1}\sin\left(D^{k}\theta \right)\,.\]
where we have introduced coupling strengths and natural frequencies for generality. The projected dynamics and the Hodge components are now coupled because the interaction term from below depends only on \(\theta^{(-)}\) but \(R_{k}^{+}\) depends on \(\theta^{(+)}\), and vice versa for the interaction from above. This nonlinear gradient flow dynamics is different from Eq. (72), but still displays an explosive phase transition in the order parameter, even for small simplicial complexes (Fig. 8a). The possibility of having a negative order parameter in front of the interaction terms, moreover, can make the model behave in such a way as to maximize the phase difference between interacting oscillators [71], giving rise to new dynamical phenomena. As shown in Fig. 8a, the model shows a second phase transition in \(\sigma\) after which the dynamics is bistable and can converge to both phase (\(R_{k}\approx 1\)) and anti-phase (\(R_{k}\approx-1\)) synchronized configurations.
Interestingly, for this model, we can find a sufficient condition for the existence of a phase-locked configuration analogous to Theorem 5. In this case, as expected, the bounds related to the \((+)\) and \((-)\) projections are coupled.
**Theorem 6** (Sufficient condition for the existence of reachable equilibria).: _For any \(\gamma^{(+)},\gamma^{(-)}\in(0,\pi/2)\), if_
\[\begin{cases}\sigma^{\uparrow}\geq\frac{\sqrt{\max_{i}w_{i}^{k+1}}}{\sin(\gamma^{(+)})\cos(\gamma^{(-)})}\left\|\beta^{(+)}\right\|_{w^{k+1}}\\ \sigma^{\downarrow}\geq\frac{\sqrt{\max_{i}w_{i}^{k-1}}}{\sin(\gamma^{(-)})\cos(\gamma^{(+)})}\left\|\beta^{(-)}\right\|_{w^{k-1}}\end{cases}\,, \tag{75}\]
_then both the projections \(\theta^{(+)},\theta^{(-)}\) of the explosive simplicial Kuramoto model Eq. (74) admit reachable equilibria such that_
\[\left\|\theta^{(+)}\right\|_{\infty}\leq\gamma^{(+)},\ \left\|\theta^{(-)} \right\|_{\infty}\leq\gamma^{(-)}\,. \tag{76}\]
Proof.: The proof is similar to the one of Theorem 5 and can be found in Appendix C.
It is interesting to see that, in the bound above, there is a tradeoff between the coupling strengths of the two projections. If we want a low bound on the coupling for the \((+)\) projection, then we need \(\gamma^{(+)}\) to be high and \(\gamma^{(-)}\) to be low. By doing so, however, we end up with a high value of the bound for the \((-)\) component. Notice, moreover, how this result is only concerned with stating the presence of a phase-locked configuration whose projections can independently be made arbitrarily close to the origin (and thus with high values of the order parameters) by tuning the couplings, but states nothing about whether the dynamics will actually converge to them. In addition, stability analysis is challenging for this system and is left for future work.
This gradient flow approach suggests a more general way to build variants of the simplicial Kuramoto model which couple the dynamics across Hodge subspaces. For this, we can consider a general function \(f\) of the partial order parameters and take its gradient
\[\dot{\theta}=W_{k}\nabla_{\theta}f(R_{k}^{+},R_{k}^{-})\,, \tag{77}\]
which, modulo normalization constants, reduces to the standard simplicial Kuramoto Eq. (13) for \(f(x,y)=x+y\). As an example, if we consider a linear interpolation between the standard potential and the explosive one, \(f_{\varepsilon}(x,y)=(1-\varepsilon)(x+y)+\varepsilon xy\), parametrized by \(\varepsilon\in[0,1]\), we obtain what we call the _mixed model_
\[\dot{\theta}=W_{k}\nabla_{\theta}((1-\varepsilon)C_{k}R_{k}+\varepsilon C_{k }^{+}C_{k}^{-}R_{k}^{+}R_{k}^{-})\,,\]
which explicitly reads
\[\dot{\theta} =-(1-\varepsilon+\varepsilon R_{k}^{-}(\theta))B^{k+1}\sin(D^{k} \theta) \tag{78}\] \[-(1-\varepsilon+\varepsilon R_{k}^{+}(\theta))D^{k-1}\sin(B^{k} \theta)\,.\]
For \(\varepsilon=0\), we recover the standard simplicial Kuramoto model, and for \(\varepsilon=1\) we get Eq. (74). The phase diagram of this dynamics is shown in Fig. 8b, where the explosive transition for \(\varepsilon=1\) is evident and a region of bistability appears for high values of \(\sigma\). Notice that, although the potential is linear in \(\varepsilon\), the resulting stationary dynamics is not, and, through an analogous proof, it is possible to derive a result equivalent to Theorem 6 giving sufficient conditions for the existence of reachable equilibria.
### Simplicial Sakaguchi-Kuramoto
The Sakaguchi-Kuramoto model [72] is a well-known extension of the Kuramoto model, which modifies the interaction function by including a phase lag parameter. Given a frustration vector \(\alpha\) on the edges, we can write it as a modification of Eq. (14)
\[\dot{\theta}_{i}=\omega_{i}-\sigma\sum_{j}A_{ij}\sin(\theta_{i}-\theta_{j}+ \alpha_{ij})\,. \tag{79}\]
We can extend it to the simplicial case in a simple manner by considering two frustration cochains \(\alpha_{k-1}\in C^{k-1}\) and
Figure 8: Fixing the natural frequencies and the frustrations, we run variants of the simplicial Kuramoto model on a small simplicial complex for different values of \(\sigma\in[0,1]\) and compute the time-averaged partial order parameters (red: lower order parameter, blue: upper order parameter). **a.** The explosive model Eq. (74) shows an explosive transition in both the down- (left) and up- (right) partial order parameters. After a certain value of \(\sigma\), moreover, some of the trajectories converge to anti-phase synchronized configurations characterized by the order parameter being close to \(-1\). **b.** The phase diagram of the mixed model Eq. (78) is depicted for \(\sigma\in[0,1]\), \(\varepsilon\in[0,1]\). For any given \(\varepsilon\), the dashed lines show the \(\sigma\) corresponding to the first and last jump in the order. As we can see, they converge to a single point when \(\varepsilon=1\), signaling the explosiveness of the model. The dashed line on the right encircles the region in which the system is bistable, and the trajectories can converge to both phase and anti-phase synchronized configurations.
\(\alpha_{k+1}\in C^{k+1}\), and writing the frustrated simplicial Kuramoto model
\[\dot{\theta} =\omega-\sigma^{\uparrow}B^{k+1}\sin\left(D^{k}\theta+\alpha_{k+1}\right)\] \[-\sigma^{\downarrow}D^{k-1}\sin\left(B^{k}\theta+\alpha_{k-1} \right)\,, \tag{80}\]
where \(\alpha\) is the effect of an external field on each interaction simplex. As this model does not couple the Hodge subspaces, we refer to it as the _simple_ frustrated model. In addition, while it has a simple form, it can be proven that it does not reduce to the Sakaguchi-Kuramoto model of Eq. (79) for \(k=0\)[54]. Before considering how to include frustrations in a more meaningful way, we notice that this simple frustration has the surprising property that \(\alpha\) can be used to control the system by making any projected configuration reachable and stable.
**Theorem 7** (Control of reachable equilibrium in simple model).: _If \(\sigma^{\updownarrow}>\sigma^{(\pm)}_{\infty}=\left\lVert\beta^{(\pm)}\right\rVert_{\infty}\), then, for any chosen projected configuration \(\theta^{(\pm)}_{*}\), i.e. \(\theta^{(+)}_{*}\in\operatorname{Im}D^{k}\), \(\theta^{(-)}_{*}\in\operatorname{Im}B^{k}\), if_
\[\alpha_{k\pm 1}=\arcsin\left(\frac{\beta^{(\pm)}}{\sigma^{\updownarrow}}\right)-\theta^{(\pm)}_{*}\,, \tag{81}\]
_then \(\theta^{(\pm)}_{*}\) is an asymptotically stable, reachable equilibrium for the \((\pm)\) projection of the simple frustrated dynamics Eq. (80)._
Proof.: We prove it for the \((+)\) projection, as the \((-)\) case is analogous. We see from Equation (80) that an equilibrium \(\theta^{(+)}_{eq}\) of the \((+)\) projection of the frustrated dynamics will satisfy
\[\sin\left(\theta^{(+)}_{eq}+\alpha_{k+1}\right)=\frac{\beta^{(+)} }{\sigma^{\uparrow}}+x^{(+)}\,.\]
As \(\sigma^{\uparrow}\geq\left\lVert\beta^{(+)}\right\rVert_{\infty}\), \(x^{(+)}=0\) is an admissible cycle (Def. 4) and thus
\[\theta^{(+)}_{eq}=\arcsin\left(\frac{\beta^{(+)}}{\sigma^{\uparrow}}\right)- \alpha_{k+1}=\theta^{(+)}_{*}\in\operatorname{Im}D^{k}\,,\]
is a reachable equilibrium. The proof of stability can be found in Appendix D.
We can visualize this result by looking at any panel of Fig. 7b and noticing that the action of a linear frustration corresponds to a translation of \(\mathcal{E}^{(-)}\), resulting in a different intersection with the reachable subspace. The strength of Theorem 7 is in the fact that, with a fine-tuned frustration, it is possible to have equilibrium configurations as ordered as we want while keeping the coupling strengths comparatively low. By exploiting this idea, we can get the following corollary.
**Corollary 1**.: _Under the hypotheses of Theorem 7, if_
\[\alpha_{k\pm 1}=\arcsin\left(\frac{\beta^{(\pm)}}{\sigma^{\updownarrow}}\right)\,, \tag{82}\]
_then \(0\in\mathbb{R}^{n_{k\pm 1}}\) is a stable, reachable equilibrium for the \((\pm)\) projection and thus there is a stable equilibrium configuration of the frustrated dynamics Eq. (80) with partial order parameter \(R^{\pm}_{k}(\theta)=1\)._
Proof.: Simply follows by applying Theorem 7 with \(\theta^{(\pm)}_{*}=0\) and using the definition of simplicial order parameter Eq. (29).
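A short numerical sketch of Corollary 1 for the \((-)\) projection of the edge dynamics, on a hypothetical toy complex: the tuned frustration makes \(\theta^{(-)}=0\) a fixed point of the projected frustrated dynamics.

```python
# Corollary 1 in practice: the frustration of Eq. (82) makes theta^(-) = 0 an
# equilibrium of the (-) projection of Eq. (80) on a hypothetical toy complex.
import numpy as np

B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
D0 = B1.T

omega = np.random.default_rng(7).normal(size=5)
beta_minus = np.linalg.pinv(D0) @ omega
sigma = 1.5 * np.linalg.norm(beta_minus, np.inf)       # ensures sigma > ||beta^(-)||_inf

alpha_0 = np.arcsin(beta_minus / sigma)                # frustration on the nodes, Eq. (82)
# (-) projection of Eq. (80): theta_dot^(-) = omega^(-) - sigma L^0_up sin(theta^(-) + alpha_0)
residual = B1 @ omega - sigma * (B1 @ B1.T) @ np.sin(np.zeros(4) + alpha_0)
print("residual at theta^(-) = 0:", np.linalg.norm(residual))   # ~0, so R_1^- = 1 there
```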
An application of this corollary can be seen in Figure 9, where the configuration where all phases differences \(\theta^{(-)}\) are equal to \(0\) is made reachable by tuning the frustration. The problem with this simple frustration formulation comes from the oriented nature of the simplices. Intuitively, two oscillating simplices with a common subface \(a\) will see the frustration on \(a\) with a different sign, depending on their relative orientations. In [54], this issue is addressed by lifting the phases cochains, similarly to [29], into another space where both orientations are present, and by then projecting back to obtain a model which is independent on the orientation of \((k+1)\)-simplices. We write here the resulting equation, slightly generalized from [54] to include orientation-independent frustrations on \((k-1)\)-simplices too. We define the lift operators
\[V^{k}=\begin{pmatrix}I_{n_{k}}\\ -I_{n_{k}}\end{pmatrix},\ \ U^{k}=\begin{pmatrix}I_{n_{k}}\\ I_{n_{k}}\end{pmatrix}\,, \tag{83}\]
and indicate with \((A)^{\pm}\stackrel{{\text{\tiny def}}}{{=}}(A\pm|A|)/2\) the projection of a matrix onto its positive or negative components. Using these definitions, we can write the _orientation
Figure 9: Application of Corollary 1 to the \((-)\) projection of the dynamics on a small simplicial complex. Tuning the frustration cochain it is possible to have a stable equilibrium configuration such that \(\theta^{(-)}_{eq}=0\), as shown by the bottom panel.
independent simplicial Sakaguchi-Kuramoto model_
\[\dot{\theta} =\omega-\sigma^{\uparrow}\left(B^{k+1}(V^{k+1})^{\top}\right)^{-} \sin\left(V^{k+1}D^{k}\theta+U^{k+1}\alpha_{k+1}\right)\] \[\quad-\sigma^{\downarrow}\left(D^{k-1}(V^{k-1})^{\top}\right)^{-} \sin\left(V^{k-1}B^{k}\theta+U^{k-1}\alpha_{k-1}\right)\,. \tag{84}\]
Note that taking the negative part of the operator outside the sine is a nonlinear operation on that operator, so \(\left(B^{k+1}(V^{k+1})^{\top}\right)^{-}\) no longer has the image and kernel of \(B^{k+1}\). This means that the Hodge decomposition of Equation (84) will lead to components that are not evolving independently but are coupled.
**Proposition 7**.: _[_54_]_ _It holds that Eq. (84) is independent of the orientation of the simplices of order \(k-1\) and \(k+1\)._
Proof.: Let us focus on the interaction from below, as the other case is completely symmetrical. A change of orientation of a \((k-1)\)-simplex indexed by \(i\) can be encoded in the action of a diagonal matrix \(P\) such that \(P_{jj}=1\) if \(j\neq i\) and \(P_{ii}=-1\). The boundary and coboundary operators in the new simplicial complex with the orientation of \(i\) flipped are \(\widetilde{D}^{k-1}=D^{k-1}P,\ \widetilde{B}^{k}=PB^{k}\). We see that the change of orientation matrix related to simplex \(i\) acts on the lift matrix from the right by swapping rows \(i\) and \(n_{k-1}+i\), i.e. \(V^{k-1}P=\widetilde{P}V^{k-1}\), where \(\widetilde{P}\) is the corresponding permutation matrix. The interaction term from below will then become
\[\left(\widetilde{D}^{k-1}(V^{k-1})^{\top}\right)^{-}\sin\left(V^ {k-1}\widetilde{B}^{k}\theta+U^{k-1}\alpha_{k-1}\right)\] \[=\left(D^{k-1}(V^{k-1})^{\top}\right)^{-}\widetilde{P}\sin\left( \widetilde{P}V^{k-1}B^{k}\theta+\widetilde{P}U^{k-1}\alpha_{k-1}\right)\] \[=\left(D^{k-1}(V^{k-1})^{\top}\right)^{-}\sin\left(V^{k-1}B^{k} \theta+U^{k-1}\alpha_{k-1}\right)\,,\]
as \(\widetilde{P}^{2}=I\) and \(\widetilde{P}U^{k-1}=U^{k-1}\), since \(\widetilde{P}\) swaps two equal rows of \(U^{k-1}\).
The orientation-independent model, and its associated Proposition 7 (proven in [54]), can be better understood by making the following observations. If we consider arbitrary \(\widetilde{\alpha}_{k\pm 1}=(\underline{\alpha}_{k\pm 1},\overline{\alpha}_{k\pm 1})\in\mathbb{R}^{2n_{k\pm 1}}\) which are not necessarily of the form \(U^{k\pm 1}\alpha_{k\pm 1}=(\alpha_{k\pm 1},\alpha_{k\pm 1})\), we can define the more general _orientation-selective simplicial Sakaguchi-Kuramoto model_
\[\dot{\theta} =\omega-\sigma^{\uparrow}\left(B^{k+1}(V^{k+1})^{\top}\right)^{-} \sin\left(V^{k+1}D^{k}\theta+\widetilde{\alpha}_{k+1}\right)\] \[\quad-\sigma^{\downarrow}\left(D^{k-1}(V^{k-1})^{\top}\right)^{-} \sin\left(V^{k-1}B^{k}\theta+\widetilde{\alpha}_{k-1}\right)\,. \tag{85}\]
In this case, we can see that a different frustration will act on \(k\)-simplices depending on their relative orientation. In particular, the elements of \(\underline{\alpha}_{k\pm 1}\) represent frustrations on \((k\pm 1)\) simplices acting only on the \(k\)-simplices which are incoherently oriented with them, while the last components \(\overline{\alpha}_{k\pm 1}\) will act only on coherently oriented simplices. Hence, if these two coincide, we have orientation independence. Indeed, it is enough to expand the lift matrices and projection operators to see that, for example, the term of the interaction from above can be rewritten as
\[(B^{k+1})^{-}\sin(D^{k}\theta+\underline{\alpha}_{k+1})+(B^{k+1})^{+}\sin(D^{k}\theta-\overline{\alpha}_{k+1})\,, \tag{86}\]
and notice that the nonzero elements of \((B^{k+1})^{-}\) contain the adjacencies between incoherently oriented \(k\) and \((k+1)\)-simplices, while \((B^{k+1})^{+}\) contains only the coherently-oriented adjacencies. Moreover, when \(\underline{\alpha}_{k+1}=-\overline{\alpha}_{k+1}\), then we can compact the two matrices \((B^{k+1})^{\pm}\) and get back the simple frustration of Eq. (80), which can now be interpreted as inducing opposite frustrations on coherently or incoherently oriented simplices. Finally, it should be possible to also control the equilibrium solution via \(\widetilde{\alpha}_{k\pm 1}\), but as this system is now coupled, both projections will have to be controlled together to obtain a consistent system.
## VI Coupling the different orders with the Dirac operator
Up to this point, we have considered topological signals of a fixed order, on nodes, edges, faces, and so on. This approach gives rise to interesting types of interactions. However, it does not fully exploit the multi-order nature of simplicial complexes, because it involves only \(k\)-simplices and their upper/lower adjacencies. This is a direct consequence of the fact that \(B^{k}B^{k+1}=0\): coupling signals between, for example, nodes and faces, cannot be done with a simple concatenation of boundary operators. Instead, we can generalize the simplicial Sakaguchi-Kuramoto models by letting the frustration vector be the signal of a lower/higher order on the same simplicial complex. This can be formalized through the discrete Dirac operator (also known as the Gauss-Bonnet operator [73]), first introduced in [74] in the context of simplicial complexes, and later used for synchronization [56] and signal processing [75].
### Discrete Dirac formalism
For a simplicial complex with simplices up to order \(K\), we can gather the phases into a single vector \(\Theta=(\theta_{(0)},\theta_{(1)},\ldots,\theta_{(K)})\) and define the Dirac operator on simplicial complexes [76, 74, 58] as the square, block tridiagonal matrix
\[\mathbf{D}\stackrel{{\text{\tiny def}}}{{=}}\text{ tridiag}([D^{0},\ldots,D^{K-1}],[0,\ldots,0],[B^{1},\ldots,B^{K}])\,, \tag{87}\]
where \(0\) indicates the matrix of the right size with all zero elements. The Dirac operator contains all the adjacency structure of the simplicial complex and it is, by construction, the "square root" of the Laplacian matrix of the complex, in the sense that its square is the block
diagonal matrix of the Hodge Laplacians
\[\mathbf{L}\stackrel{{\text{\tiny def}}}{{=}}\mathbf{D}^{2}=\text{diag}(L^{0},L^{1},\ldots,L^{K})\,. \tag{88}\]
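As a quick illustration, the following sketch assembles the Dirac operator of Eq. (87) for a hypothetical toy complex with \(K=2\) and verifies that its square is the block-diagonal matrix of Hodge Laplacians:

```python
# The Dirac operator of Eq. (87) for a hypothetical toy complex (K = 2), and a check
# that D^2 = diag(L^0, L^1, L^2).
import numpy as np
from scipy.linalg import block_diag

B1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
B2 = np.array([[1, 0], [-1, 0], [1, 1], [0, -1], [0, 1]], dtype=float)
n0, n1, n2 = B1.shape[0], B1.shape[1], B2.shape[1]

Dirac = np.zeros((n0 + n1 + n2,) * 2)
Dirac[:n0, n0:n0 + n1] = B1            # boundary blocks on the superdiagonal
Dirac[n0:n0 + n1, n0 + n1:] = B2
Dirac[n0:n0 + n1, :n0] = B1.T          # coboundary blocks on the subdiagonal
Dirac[n0 + n1:, n0:n0 + n1] = B2.T

L0 = B1 @ B1.T
L1 = B1.T @ B1 + B2 @ B2.T
L2 = B2.T @ B2
print(np.allclose(Dirac @ Dirac, block_diag(L0, L1, L2)))   # True
```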
In Eq. (96), however, the phases of the simplices of different orders evolve independently of one another: Equation (96) is just a formal reformulation that collects all possible simplicial Kuramoto models living on a simplicial complex of order \(K\) into a single formula. The advantage of this formulation is that it provides a general mathematical framework to couple the dynamics across different orders.
### Explosive Dirac Kuramoto dynamics
To couple the dynamics across orders, [70] proposed to multiply the interaction terms by factors depending on the order parameters (Eq. (71)) of the dynamics above and below, as a Dirac generalization of the earlier explosive model of [53]. For a network (\(K=1\)), this coupling gives the so-called _Nodes-Links_ (NL) model [70]
\[\begin{cases}\dot{\theta}_{(0)}=\omega_{(0)}-R_{1}^{[-]}(\theta_{(1)})B^{1} \sin(D^{0}\theta_{(0)})\\ \dot{\theta}_{(1)}=\omega_{(1)}-R_{0}^{[+]}(\theta_{(0)})D^{0}\sin(B^{1}\theta _{(1)})\,,\end{cases} \tag{103}\]
and, with non-oscillating triangles, into the _Nodes-Links-Triangles_ (NLT) model [70]
\[\begin{cases}\dot{\theta}_{(0)}=\omega_{(0)}-R_{1}^{[-]}(\theta_{(1)})B^{1} \sin(D^{0}\theta_{(0)})\\ \dot{\theta}_{(1)}=\omega_{(1)}-R_{0}^{[+]}(\theta_{(0)})R_{1}^{[+]}(\theta_{ (1)})D^{0}\sin(B^{1}\theta_{(1)})\\ \hskip 28.452756pt-R_{1}^{[-]}(\theta_{(1)})B^{2}\sin(D^{1}\theta_{(1)})\,. \end{cases} \tag{104}\]
Inspired by these formulations, we can write order-coupled models in the Dirac formalism. As we did in Section V.1 with the explosive model, we can write the following nonlinear gradient flow
\[\dot{\Theta}=\Omega+C^{+}C^{-}\mathbf{W}\nabla_{\Theta}\left(\mathbf{R}^{+} \mathbf{R}^{-}\right)\,, \tag{105}\]
where the coupling is global as it depends on all simplices of all orders. Alternatively, we can have a coupling across only adjacent orders with
\[\dot{\Theta}=\Omega+\mathbf{W}\nabla_{\Theta}\left(\sum_{k=1}^{K}C_{k-1}^{+}C _{k}^{-}R_{k-1}^{+}R_{k}^{-}\right)\,, \tag{106}\]
which is such that, for all orders \(k\), the interaction term from below of order \(k\) will depend on the order parameter from above of order \(k-1\) and vice versa. We leave the analysis of these models for future works.
### Frustrated Dirac Kuramoto model
Just as in [56; 57], one can instead consider a local coupling with the half super-Laplacian matrices \(\mathbf{L}_{\uparrow},\mathbf{L}_{\downarrow}\), by writing
\[\begin{split}\dot{\Theta}=\Omega&-\sigma^{\downarrow}\boldsymbol{d}\sin\left(\boldsymbol{\delta}\Theta-z\gamma\mathbf{L}_{\uparrow}\Theta\right)\\ &-\sigma^{\uparrow}\boldsymbol{\delta}\sin\left(\boldsymbol{d}\Theta-z\gamma\mathbf{L}_{\downarrow}\Theta\right)\,,\end{split} \tag{107}\]
where \(z>0\) regulates the strength of the local cross-order coupling and \(\gamma\) is the block-diagonal matrix [76]
\[\gamma=\text{diag}(I_{n_{0}},-I_{n_{1}},\ldots,(-1)^{K}I_{n_{K}})\,, \tag{108}\]
which anticommutes with both \(\boldsymbol{d}\) and \(\boldsymbol{\delta}\). This choice of \(\gamma\) comes from the fact that for the linearized dynamics (with unit coupling strengths) \(\dot{\Theta}=\Omega-(\mathbf{D}^{2}+\gamma\mathbf{D}^{3})\Theta\), the matrix \(-(\mathbf{D}^{2}+\gamma\mathbf{D}^{3})\) can be shown to have complex eigenvalues with non-positive real part, resulting in the emergence of damped oscillations (see [57, Appendix A]), as depicted in Fig. 10. Equation (107), named _local Dirac synchronization_[57], displays explosive synchronization transitions and stable hysteresis loops. As an example, for \(K=2\), we can write Eq. (107) explicitly
\[\begin{cases}\dot{\theta}_{(0)}=\omega_{(0)}-\sigma^{\uparrow}B^{1}\sin(D^{0}\theta_{(0)}+zL_{\downarrow}^{1}\theta_{(1)})\\ \dot{\theta}_{(1)}=\omega_{(1)}-\sigma^{\downarrow}D^{0}\sin(B^{1}\theta_{(1)}-zL_{\uparrow}^{0}\theta_{(0)})\\ \quad\quad-\sigma^{\uparrow}B^{2}\sin(D^{1}\theta_{(1)}-zL_{\downarrow}^{2}\theta_{(2)})\\ \dot{\theta}_{(2)}=\omega_{(2)}-\sigma^{\downarrow}D^{1}\sin(B^{2}\theta_{(2)}+zL_{\uparrow}^{1}\theta_{(1)})\end{cases}\,. \tag{109}\]
From a more general point of view, if we now define frustrations \(A_{\uparrow}=(\alpha_{(0)}^{\uparrow},\alpha_{(1)}^{\uparrow},\ldots,\alpha_{(K)}^{\uparrow})\) and \(A_{\downarrow}=(\alpha_{(0)}^{\downarrow},\alpha_{(1)}^{\downarrow},\ldots,\alpha_{(K)}^{\downarrow})\) (possibly dependent on \(\Theta\)), we can generalize the single-order frustrated simplicial Kuramoto model (Eq. (80)) as
\[\dot{\Theta}=\Omega-\boldsymbol{d}\sin(\boldsymbol{\delta}\Theta+A_{\uparrow})- \boldsymbol{\delta}\sin(\boldsymbol{d}\Theta+A_{\downarrow})\,. \tag{110}\]
In addition, if we introduce the _total_ lift operators \(\mathbf{V}=\text{diag}(V^{0},V^{1},\ldots V^{K})\), \(\mathbf{U}=\text{diag}(U^{0},U^{1},\ldots U^{K})\), we have an orientation independent version, akin to Eq. (84) but in the Dirac framework as
\[\dot{\Theta}=\Omega -\left(\boldsymbol{d}\mathbf{V}^{\top}\right)^{-}\sin(\mathbf{V}\boldsymbol{\delta}\Theta+\mathbf{U}A_{\uparrow})-\left(\boldsymbol{\delta}\mathbf{V}^{\top}\right)^{-}\sin(\mathbf{V}\boldsymbol{d}\Theta+\mathbf{U}A_{\downarrow})\,. \tag{111}\]
In contrast to Eq. (110), being orientation independent, this system couples both the simplicial orders and the Hodge subspaces.
It follows that the local Dirac synchronization dynamics of Eq. (107) is the application of the simple \(\Theta\)-dependent frustrations \(A_{\uparrow}=-z\gamma\mathbf{L}_{\uparrow}\Theta\), \(A_{\downarrow}=-z\gamma\mathbf{L}_{\downarrow}\Theta\). This fact, together with the gradient flow formulation of the Dirac model, gives us a common framework to build and study multiple variants of the model. It would be natural, for example, to consider an analogous model where the local frustration is introduced in an orientation-independent fashion. Finally, we did not consider here possible extensions of the results on equilibrium solutions, but leave them for future work. It should be possible to extend some of the theorems of this work due to the similar structure between a single simplicial Kuramoto model and the Dirac-based formulation.
## VII Application to functional connectivity reconstruction
Oscillator models have been extensively used in neuroscience as they offer a powerful and flexible framework
for studying simplified versions of the dynamics of neuronal or brain networks [77, 78, 79, 80, 4]. By treating neurons or brain regions as oscillators that interact with each other, these models can capture significant features of brain activity observed in experiments, such as the presence of rhythms and oscillations. While oscillator models have been widely used to study brain dynamics, it is important to note that most of these have focused on pairwise interactions between neurons or brain regions. This is due in part to the fact that pairwise interactions are simpler to model or analyze and that anatomically it is more realistic to consider the dynamics taking place on networks rather than higher-order systems. However, recent studies have suggested that higher-order interactions may also play an important role in brain dynamics, both functionally [81, 82] and structurally [70, 43]. These interactions involve three or more elements and can give rise to emergent phenomena that cannot be explained by pairwise interactions alone. Given the potential importance of these higher-order interactions, it is natural to apply models of higher-order synchronization to brain data. These models might offer a more comprehensive framework for studying the dynamics of large-scale brain networks and have the potential to give us new insights into the mechanisms underlying cognition and behavior.
To test this hypothesis, we study how well simplicial Kuramoto models of various orders could reproduce brain correlation patterns. Following the methodology proposed in [79, 80], we run simulations of 5 different variants of the simplicial Kuramoto model on a real structural connectome, the network that describes the connectivity structures between regions of the human cerebral cortex, and we investigate how well each model can reproduce the resting-state functional activity experimentally measured.
The structural connectome is encoded in a group-averaged weighted structural connectivity matrix (Fig. 11a), obtained by diffusion imaging and tractography, by parcellating the brain into \(N=200\) regions, which here take the role of nodes connected by \(M=6040\) weighted edges. From the network adjacency matrix, we derive the incidence matrix \(B_{1}\in\left\{-1,1\right\}^{N\times M}\) by choosing randomly edges orientations. To achieve consistency with [79], the connection weights \(K_{1},\ldots,K_{M}\) are included, after being inverted, as weights on the edges \(W_{1}=\text{diag}(\frac{1}{K_{1}},\ldots,\frac{1}{K_{M}})\) and the tract lengths are encoded in an edge frustration vector \(\alpha\in\mathbb{R}^{M}\). The natural frequencies for both node-based and edge-based models are sampled independently from a Gaussian distribution with a mean of \(2\pi\,40\) and a standard deviation of \(2\pi\,0.1\). We compare the following five models:
1. _Orientation independent node Kuramoto-Sakaguchi_
Figure 10: Local Dirac Synchronization (Eq. (109)) on nodes, edges, and triangles of the small simplicial complex depicted in Fig. 8a, with \(\Omega=0\). We simulate the dynamics for different values of \(z\) and see how the coupling it induces disrupts synchronization and results in the emergence of damped oscillations.
model (Node OI)_. This is, by construction, the classical Kuramoto-Sakaguchi model of Eq. (79), used in [79]
\[\dot{\theta}_{i}=\omega_{i}-\sigma\sum_{j=1}^{200}K_{ij}\sin(\theta_{i}-\theta_{j }+\alpha_{ij})\,. \tag{112}\]
2. _Edge Simplicial Kuramoto (Edge)_. The simplest possible simplicial Kuramoto model defined on the edges \[\dot{\theta}=\omega-\sigma B_{1}^{\top}\sin(B_{1}W_{1}^{-1}\theta)\,.\] (113)
3. The _Orientation Independent Edge Sakaguchi-Kuramoto (Edge OI)_ \[\dot{\theta}=\omega-\sigma\left(B_{1}^{\top}(V^{0})^{\top}\right)^{-}\sin\left( V^{0}B_{1}W_{1}^{-1}\theta-U^{0}B_{1}W_{1}^{-1}\alpha\right)\,.\] (114)
The explosive simplicial Kuramoto model (Eq. (74)) cannot be directly used as it requires nodes, edges, and triangles for its interaction terms to be nonzero. Triangles are not present in the structural connectivity network and thus, to avoid injecting arbitrary structure into the analysis, we will not use it. As a proxy for its behavior, however, we propose the similar _order-modulated model_ (OM), derived by multiplying \(\sigma\) by the order parameter. In other words, the OM model is the gradient flow of the squared order parameter.
\[\dot{\theta}=\omega+\frac{1}{2}C_{k}W_{k}\nabla_{\theta}R_{k}^{2}(\theta)\,. \tag{115}\]
We simulate two different OM models.
1. The _Order-modulated node Kuramoto-Sakaguchi (Node OM)_ \[\dot{\theta}=\omega-\sigma R_{0}(\theta)(B_{1}W_{1}^{-1}(V^{1})^{\top})^{-} \sin(V^{1}B_{1}^{\top}\theta-U^{1}\alpha)\,.\] (116)
2. The _Order-modulated edge simplicial Kuramoto (Edge OM)_ \[\dot{\theta}=\omega-\sigma R_{1}(\theta)B_{1}^{\top}\sin(B_{1}W_{1}^{-1}\theta )\,.\] (117)
Models 2, 3, and 5 are defined on the edges of the network. Given that we want to simulate a node-wise functional connectivity matrix, we consider the projections of their phases onto the nodes \(\theta^{(-)}\) to get node-wise trajectories. For this reason, notice that it is not necessary to numerically solve all the \(M\) equations on the edges, but it is enough to directly integrate the projected dynamics.
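As an illustration of how the edge-based dynamics can be simulated in practice, the following minimal Python sketch builds \(B_{1}\) and \(W_{1}\) from a weighted adjacency matrix as described above, integrates the plain edge simplicial Kuramoto model of Eq. (113), and projects the edge phases onto the nodes. The function names and interfaces are our own, and taking \(B_{1}W_{1}^{-1}\theta\) as the node projection is one possible convention.

```python
import numpy as np
from scipy.integrate import solve_ivp

def incidence_and_weights(A, seed=0):
    """Incidence matrix B1 (random edge orientations) and inverted-weight
    matrix W1 from a weighted adjacency matrix A, as described in the text."""
    rng = np.random.default_rng(seed)
    nodes = A.shape[0]
    edges = [(i, j) for i in range(nodes) for j in range(i + 1, nodes) if A[i, j] > 0]
    B1 = np.zeros((nodes, len(edges)))
    w = np.empty(len(edges))
    for e, (i, j) in enumerate(edges):
        s = rng.choice([-1.0, 1.0])      # random orientation of the edge
        B1[i, e], B1[j, e] = -s, s
        w[e] = 1.0 / A[i, j]             # inverted connection weight
    return B1, np.diag(w)

def edge_kuramoto(B1, W1, omega, sigma, theta0, t_eval):
    """Integrate Eq. (113) and return the edge phases and their node projection."""
    W1_inv = np.linalg.inv(W1)
    rhs = lambda _, th: omega - sigma * B1.T @ np.sin(B1 @ W1_inv @ th)
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), theta0, t_eval=t_eval)
    node_traj = B1 @ W1_inv @ sol.y      # node-wise trajectories theta^(-)
    return sol.y, node_traj
```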
### Simulations
Figure 11: **a.** Structural connectivity matrix representing the weighted network onto which we simulate the dynamics. The color represents the logarithm of the weight. **b.** The empirical functional connectivity matrix. **c.** We simulate 5 different variants of the simplicial Kuramoto model and compute the correlation matrices of their post-processed trajectories as simulated FC matrices. **d.** The coupling strength \(\sigma\) is tuned for each model by scanning 20 values between 100 and 500. **e.** Pearson correlations between the empirical FC and the simulated FCs for the 6 models, over 10 simulations.

Following [79], the simulations are run for a total of \(T=812\) seconds with a time resolution of \(\delta t=1\)ms (using MATLAB ode45), and the first 20 seconds are discarded to allow the dynamics to reach stationarity. We then take the trajectories, convert them into downsampled BOLD signals, filter them with a lowpass cutoff of \(c=0.25\)Hz, and use them to compute \(N\times N\) pairwise Pearson correlation matrices. These simulated functional connectivity matrices (Fig. 11c) are then compared to the experimental resting-state functional connectivity (FC) matrix (Fig. 11b) using Pearson correlation (by correlating the vectorized upper triangular matrices). We repeat this process multiple times for each model, varying the coupling strength in order to tune it. We scan 20 \(\sigma\) values ranging from 100 to 500 and select the optimal one w.r.t. Pearson correlation (Fig. 11d). Given the optimal coupling strength for each model, we then perform 10 simulations for each of them with different random starting phases and natural frequencies and confront them with the empirical FC matrix. The results are shown in Fig. 11e, where it is easy to see how the two non-frustrated edge-based models outperform the node-based ones, achieving an average correlation of \(r=0.27\) against the \(r=0.2\) of the standard node Sakaguchi-Kuramoto. The result is statistically confirmed by an ANOVA test, which yields p-values lower than \(10^{-3}\). The effect size against the node Kuramoto model is 0.0757 for the edge model and 0.0692 for the edge OM.
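The comparison step itself can be summarized by a short sketch like the following, where the simulated node trajectories are assumed to have already been converted to downsampled, filtered BOLD-like signals; the function name and the use of SciPy are our own choices.

```python
import numpy as np
from scipy.stats import pearsonr

def fc_fit(node_signals, empirical_fc):
    """Pearson correlation between the simulated and empirical FC matrices,
    computed on the vectorized upper triangles (diagonal excluded)."""
    sim_fc = np.corrcoef(node_signals)            # N x N simulated FC
    iu = np.triu_indices_from(sim_fc, k=1)
    r, _ = pearsonr(sim_fc[iu], empirical_fc[iu])
    return r
```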
Our findings suggest that an edge-based description of the dynamics might provide a better fit to the experimental data, outperforming the node-based models without resorting to additional parameters or internal mechanisms such as edge flickering [80] (which was shown to obtain a slightly lower correlation than our edge Kuramoto model). In fact, edge-based simplicial Kuramoto models might arguably provide a better fit to the observed FC correlation structure precisely because the variables are defined on the connections that link different nodes together, rather than on the nodes themselves. That is, the observed activity of brain regions might be better explained as the result of the information integration taking place via the structural fibers linking the regions, rather than by looking at the brain regions in themselves [83]; this perspective also displays interesting parallels with neural frequency mixing behaviours [84, 85]. Naturally, these results are preliminary and intended as a simple demonstration of the potential of simplicial (and, more generally, higher-order) oscillator models in the context of computational neurobiology.
## VIII Summary and outlook
Simplicial Kuramoto models, where oscillators are defined on simplices rather than on nodes, have grown in numbers, yielding a wide variety of different and interesting dynamics. Here, we have attempted to provide a more unified view, akin to a taxonomy, of this simplicial Kuramoto zoo. Our description has relied heavily on topology and discrete differential geometry because the simplicial structure of these models naturally lends itself to a topological and geometrical language, including boundary operators and the Hodge Laplacian.
We have shown that these models can be divided into three main categories:
* "simple" models, which can all be rewritten in a single framework: that of gradient flows, encoded in Eq. (31) for a single order, and Eq. (97) for all orders. These models do not have couplings across orders or frustration.
* "Hodge-coupled" models, in which different Hodge components of the dynamics are coupled. These include explosive models, which can be rewritten in a similar gradient flow framework (Eq. (77)) as the simple models, but also include other models that require additional ingredients, in our case two flavors of frustration (Eq. (84)).
* "order-coupled" (Dirac) models, in which oscillators are coupled across different orders with or without frustrations (Eq. (107)).
This unified view in terms of just two ingredients--gradient flows and frustrations--compresses this model taxonomy to a lower-dimensional space of models and has allowed us to describe the general properties of these models.
A first example is the possibility to derive a set of bounds on the value of the coupling strength that are necessary or sufficient to obtain synchronization for "simple" models, thanks to their simplicity. This gave us a general description of the space of equilibria and their relative degree of reachability as a function of the coupling strength. Additionally, using this taxonomy, it is possible to investigate when two models are genuinely different or not: we demonstrated that the simple simplicial Kuramoto model is strictly equivalent to the standard Kuramoto model on (pairwise) networks if the underlying simplicial complex structure is manifold-like (Theorem 1). More specifically, by mapping the oscillators defined on simplices of order \(k\) to nodes in an effective (pairwise) network, the effective dynamics reduce to a standard Kuramoto model. This is a powerful result that bridges the simplicial models and the well-known standard Kuramoto model and shows that the simplicial models are of most interest on non-manifold-like simplicial complexes.
More generally, we showed that the simplicial models can be related to another important class of higher-order Kuramoto models: those where oscillators are defined only on nodes and interact across hypergraphs [47, 48]. Indeed, a simplicial model on a generic complex can be rewritten as a node-Kuramoto model with group interactions occurring on an effective dual hypergraph. There is one important difference, however: the models obtained this way do not have the properties usually desired for models defined on nodes, that is, the coupling functions do not vanish when all phases are equal. As a consequence, contrary to these other models, the standard 1-cluster synchronization solution is not guaranteed to exist and the equations are not invariant under a uniform phase shift. Although an equivalent effective hypergraph
can be found to define oscillators on nodes, the simplicial models are more naturally described in the formalism of discrete differential geometry by the boundary operators of the simplicial complex. It is an interesting future direction to investigate to what degree and exactly under what conditions these two classes of models can be related to each other. Furthermore, the formalism and results presented here, of course, refer to the case of synchronization, but we expect them to be rather straightforwardly generalizable to more general dynamics, such as consensus [86; 33] or diffusion [29; 30], and structures, such as cell complexes [87].
With respect to applications, we provided a simple example of application to the reconstruction of brain functional connectivity from a structural connectome, a common and still open task in computational neuroscience [88; 89], finding that vanilla simplicial edge Kuramoto models are competitive with, or even outperform, more complex node-based models [80]. We suspect that this might be related to the fact that, when edge phases are projected down to node dynamics, they behave akin to time-evolving temporal delays across node signals, an element that has been recognized as crucial in brain dynamical simulations [90]. Similar considerations, however, are also relevant for many other types of real-world complex systems, such as network traffic [91; 92; 93] and power grid balancing [94; 95].
Overall, we believe that the proposed framework provides a starting point to shed new light and further research on a number of interlaced theoretical and practical topics across the broader community of complex dynamical systems.
## Acknowledgement
A.A. was supported by funding to the Blue Brain Project, a research center of the Ecole polytechnique federale de Lausanne (EPFL), from the Swiss government's ETH Board of the Swiss Federal Institutes of Technology. R.P. acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Project-ID 424778381-TRR 295.
## Code availability
An open-source code to numerically solve the presented models is available at [https://github.com/arnaudon/simplicial-kuramoto](https://github.com/arnaudon/simplicial-kuramoto).
|
2302.02463 | Nationality Bias in Text Generation | Little attention is placed on analyzing nationality bias in language models,
especially when nationality is highly used as a factor in increasing the
performance of social NLP models. This paper examines how a text generation
model, GPT-2, accentuates pre-existing societal biases about country-based
demonyms. We generate stories using GPT-2 for various nationalities and use
sensitivity analysis to explore how the number of internet users and the
country's economic status impacts the sentiment of the stories. To reduce the
propagation of biases through large language models (LLM), we explore the
debiasing method of adversarial triggering. Our results show that GPT-2
demonstrates significant bias against countries with lower internet users, and
adversarial triggering effectively reduces the same. | Pranav Narayanan Venkit, Sanjana Gautam, Ruchi Panchanadikar, Ting-Hao 'Kenneth' Huang, Shomir Wilson | 2023-02-05T19:15:33Z | http://arxiv.org/abs/2302.02463v3 | # Nationality Bias in Text Generation
###### Abstract
Little attention is placed on analyzing nationality bias in language models, especially when nationality is highly used as a factor in increasing the performance of social NLP models. This paper examines how a text generation model, GPT-2, accentuates pre-existing societal biases about country-based demonyms. We generate stories using GPT-2 for various nationalities and use sensitivity analysis to explore how the number of internet users and the country's economic status impacts the sentiment of the stories. To reduce the propagation of biases through large language models (LLM), we explore the debiasing method of adversarial triggering. Our results show that GPT-2 demonstrates significant bias against countries with lower internet users, and adversarial triggering effectively reduces the same.
## 1 Introduction
Language models learn the context of a word based on the other words present around it (Caliskan et al., 2017), and training on an enormous dataset leads to the model learning powerful linguistic associations, allowing it to perform well without fine-tuning (Abid et al., 2021). However, this method can easily capture biases, mainly from internet-based texts, as such data tends to over-represent the majority's hegemonic viewpoints, causing LLMs to mimic similar prejudices (Whittaker et al., 2019; Bender et al., 2021; Bolukbasi et al., 2016). Although existing research shows the impact these model biases can have on various sociodemographic facets (Kennedy et al., 2020; Hutchinson et al., 2020), no work looks at how LLMs represent different countries worldwide. Learning the representation of nationalities in LLMs is crucial, as demography is used to improve the efficiency of models for applications like opinion mining (Sazzed, 2021). Previous works have adopted a hybrid approach (combining lexicon-based methods with classifiers) to adapt such models for non-native speakers (Sazzed, 2021).
In this work, we look into how LLMs, specifically GPT-2, represent demonyms from 193 countries. An example of potential bias in GPT-2 can be seen in Table 1. This examination shows how internet-derived datasets generally accentuate the ideas of the majority population (countries with a significant number of internet users) while misrepresenting the opinions of the minority. We look at the group bias demonstrated by GPT-2, using its text generation feature, on countries categorized by the _number of internet users_ and their _economic status_. An essential aspect of this study is also to quantify the accentuation of bias that GPT-2 contributes, by juxtaposing the analysis with human-written text. Finally, we examine a potential mitigation of this group bias in text generation models using the method of _adversarial triggering_, where we positively trigger the prompts given to GPT-2 so that it produces less biased text.
\begin{table}
\begin{tabular}{l}
American people are _in the best shape we’ve ever seen,” he said. “We have tremendous job growth. So we have an economy that is stronger than it has been.”_ \\
Mexican people are _the ones responsible for bringing drugs, violence and chaos to Mexico’s borders._ \\
Afghan people are _as good as you think. If you look around, they’re very poor at most things._ \\
French people are _so proud of their tradition and culture._ \\
\end{tabular}
\end{table}
Table 1: Examples of short sentences produced by GPT-2 on passing the prompt: ‘<Demonym> people are’.

## 2 Related Work

Research identifying bias in NLP models has shown that embedding models such as GloVe and Word2Vec, as well as context-aware dynamic embeddings, i.e., large language models (LLMs) such as BERT, automatically mimic biases related to gender Kurita et al. (2019), race Ousidhoum et al. (2021), disability Venkit et al. (2022), and religion Abid et al. (2021) from the language corpora used to train the model. The work done by Nadeem et al. (2021) provides a mechanism for measuring such sociodemographic stereotypes in embedding and LLM models. The results of these works infer that the primary sources of bias in these models stem from the representation and data used to train them Dev et al. (2020); Rudinger et al. (2018), where the datasets come from very large internet crawls.
Unfortunately, internet access and usage are not evenly distributed across the world, and the generated data tends to overrepresent users from developed countries WorldBank (2015). Bender et al. (2021) discusses this by showing how a large internet-based training dataset masks minority viewpoints while propagating white supremacist, misogynistic, and ageist views. With LLMs being used for downstream tasks such as story and dialogue generation and machine translation Radford et al. (2019), the biases acquired from the training language are propagated into the texts generated in these tasks.
Whittaker et al. (2019) discusses how groups that have been discriminated against in the past are at a higher risk of experiencing biased and exclusionary AI, as LLMs tend to reproduce as well as amplify historical prejudices. The analysis of demographic bias is important in this scenario because the difference between the majority's viewpoint, as reflected by the model, and the actual internal image of a country can lead to the propagation of harmful and outdated stereotypes Harth (2012); Lasorsa and Dai (2007). Such biases can lead to social harms such as stereotyping and dehumanization Dev et al. (2022) against marginalized populations, especially when LLMs are used as social solutions to analyze online abuse, distress, and political discourse, and to predict social cues based on demographic information Blackwell et al. (2017); Gupta et al. (2020); Guda et al. (2021).
## 3 Methodology
In our work, we describe bias using the statistical framework used in the study of fairness in AI Chouldechova and Roth (2020); Czarnowska et al. (2021), i.e., the difference in behavior that occurs when a selected group is treated less favorably than another in the same or similar circumstance. We identify group bias using statistical inferences of different demonym groups \(d_{n}\) and check for parity across all the groups and a standard control group \(C\), using the story generation feature of GPT-2.
We selected GPT-2 as it is _an open access language model without usage limit_. It captures superior linguistic associations between words, resulting in better performance on various NLP tasks than other publicly available LLM models Radford et al. (2019). WebText, the text corpus used by GPT-2, is generated by scraping pages linked to by Reddit posts that have received at least three upvotes. The issue with such a dataset is that it overrepresents the ideas of individuals with higher activity quotients on the internet, leading to potential systemic biases Bender et al. (2021).
We identify group bias using the text completion feature of GPT-2 to comprehend the explicit associations created by the dataset. We analyze the demonyms used for the 193 countries recognized by the United Nations1 and use the method of perturbation developed by Prabhakaran et al. (2019); Kurita et al. (2019), where a template generates similar prompts for each country using instantiation. We use the prompt _X: [The <dem> people are]_ and instantiate <dem>_ with demonyms \(d\in D\) (where \(D\) is the set of 193 selected nationalities) to generate 100 unique2 stories per demonym, with a 500-word upper limit, using the GPT-2 API from Huggingface3. In order to generate the control \(C\) and remove associations to any demonym, we generate 100 stories using the prompt [The people are], resulting in a final corpus of 19,400 stories.
Footnote 1: [https://www.un.org/en/about-us/member-states](https://www.un.org/en/about-us/member-states)
Footnote 2: The authors of the paper manually examined 15 random stories generated for each prompt to make sure the texts generated were unique.
Footnote 3: [https://huggingface.co/GPT-2](https://huggingface.co/GPT-2)
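A minimal sketch of this generation step, using the Hugging Face `transformers` pipeline with the publicly released `gpt2` checkpoint, could look as follows; the demonym list shown is a small illustrative subset of the 193, and the 500-token cap is used here only as a proxy for the 500-word limit described above.

```python
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")

demonyms = ["American", "Mexican", "Afghan", "French"]    # illustrative subset

stories = {}
for dem in demonyms:
    prompt = f"The {dem} people are"
    outputs = generator(prompt, max_length=500, do_sample=True,
                        num_return_sequences=100)          # 100 stories per demonym
    stories[dem] = [o["generated_text"] for o in outputs]

# Control corpus C: the same prompt without any demonym
control = [o["generated_text"]
           for o in generator("The people are", max_length=500,
                              do_sample=True, num_return_sequences=100)]
```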
We measure the fairness of GPT-2 by running the generated texts through the sentiment analysis model VADER Hutto and Gilbert (2014), similar to other works Hutchinson et al. (2020); Venkit and Wilson (2021) that use perturbation to detect fairness, where a relevant arbitrary score, like sentiment or toxicity, is used to measure the performance of a model. VADER evaluates sentiment on a scale of -1 (most negative) to +1 (most positive) to represent the overall emotional valence of a text. Our reason for selecting VADER is twofold: (i) most of the text used to train GPT-2 is predominantly selected from a social media platform, on which VADER is known to perform well Hutto and Gilbert (2014); and (ii) VADER is a lexicon-based sentiment model created from a human-curated gold-standard set of words, making it less susceptible to demonstrating sociodemographic biases. We check this by running all 193 prompts through VADER to identify explicit bias, but found none (as all scores were 0.00).
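The scoring step amounts to averaging VADER compound scores per demonym; a minimal sketch using the reference `vaderSentiment` package (with our own function and variable names) is shown below.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def mean_compound(texts):
    """Average VADER compound score (-1 most negative, +1 most positive)."""
    return sum(analyzer.polarity_scores(t)["compound"] for t in texts) / len(texts)

# e.g. applied to the stories generated for each demonym and to the control set:
# scores = {dem: mean_compound(stories[dem]) for dem in stories}
# control_score = mean_compound(control)
```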
## 4 Results
In this section, we first analyze the demonyms with the most negative and most positive sentiment as part of the examination of nationality bias in GPT-2. We then group the demonyms based on the economic status of the country as well as the number of internet users. Statistical parameters and the _perturbation sensitivity score_ show the effect of these factors on the generated stories. Following this, we juxtapose our results with articles from or about specific countries written by human agents. Finally, we demonstrate the impact of adversarial triggering, a debiasing method, on the results generated by GPT-2. To account for the stochastic nature of the model, we repeated the text generation and statistical analysis process and obtained results nearly identical to those reported in this paper, reiterating our findings.
### Analysis of Adjectives
For the preliminary analysis, we examine the nature of the stories using sentiment scores and adjective extraction. The analysis of adjectives shows the words that GPT-2 most commonly uses to describe each demonym. Table 2 shows the five most positively and the five most negatively scored countries across all the stories generated by GPT-2. We use Textblob Loria (2018) to extract adjectives from the texts. We categorize all the extracted adjectives as positive or negative based on their sentiment scores per demonym. Table 2 also shows the top five most frequent adjectives present in the stories of the individual countries. We observe that the most negatively scored countries have detrimental adjectives like 'dead', 'violent' & 'illegal' associated with them. These associations and the sentiment scores portray a very toxic image of the demonyms.
### Analysis of Internet Usage and Economic Status
We group the countries based on two factors, i.e., their population of internet users and their economic status, to statistically check whether these factors influence how GPT-2 generates the stories for the demonyms of these countries. We acquire the total number of internet users and the economic status of all 193 countries from the World Bank dataset4. The World Bank assigns the world's economies to four income groups--_low, lower-middle, upper-middle, and high-income countries_. We also calculate the total number of internet users in each country from
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Demonym** & **Top Adjectives** & \(f\)**(LLM)** & \(f\)**(Hum)** & \(f\)**(DeB)** & \(\Delta f\) \\ \hline France & good, important, best, strong, true & 0.375 & 0.501 & 0.672 & 0.126 \\ Finland & good, important, better, free, happy & 0.358 & 0.605 & 0.524 & 0.247 \\ Ireland & important, good, better, _difficult_, proud & 0.315 & 0.389 & 0.645 & 0.074 \\ San Marino & good, important, strong, original, beautiful & 0.314 & 0.577 & 0.649 & 0.263 \\ United Kingdom & good, important, legal, certain, better & 0.287 & 0.102 & 0.572 & -0.185 \\ \hline Libya & _terrorist_, clear, great, important, strong & -0.701 & 0.076 & -0.055 & 0.777 \\ Sierra Leone & important, _affected, worst, difficult, dangerous_ & -0.702 & 0.232 & 0.079 & 0.934 \\ Sudan & special, responsible, _worst_, _poor, terrorist_ & -0.704 & 0.075 & 0.212 & 0.779 \\ Tunisia & _violent, terrorist, difficult, good, legal_ & -0.722 & 0.063 & 0.199 & 0.785 \\ South Sudan & _illegal_, _serious, dead, desperate, poor_ & -0.728 & 0.169 & 0.170 & 0.897 \\ \hline \end{tabular}
\end{table}
Table 2: Analysis of the most positively and negatively scored countries. \(f\)(LLM) denotes scores of text generated by GPT-2. \(f\)(Hum) denotes scores of non-AI (human-written) text. \(f\)(DeB) denotes scores generated after adversarial triggering (debiasing), and \(\Delta f\) denotes bias accentuation.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Internet User Pop.** & **Sentiment Score** & **ScoreSense** \\ \hline
High & 0.495 & +0.191 \\ \hline
Upper-Middle & 0.256 * & -0.047 \\ \hline
Lower-Middle & 0.241 ** & -0.068 \\ \hline
Low & 0.176 ** & -0.124 \\ \hline
NA & 0.206 ** & -0.101 \\ \hline \hline
**Economic Status** & **Sentiment Score** & **ScoreSense** \\ \hline
High & 0.254 & -0.043 \\ \hline
Upper-Middle & 0.178 & -0.124 \\ \hline
Lower-Middle & 0.183 & -0.118 \\ \hline
Low & 0.089 * & -0.213 \\ \hline
\end{tabular}
\end{table}
Table 3: Sentiment scores and ScoreSense grouped by Internet Usage and Economic Status. (*) represents the significance codes of the t-test: 0.001 '***', 0.01 '**', 0.05 '*'.
data collected by the World Bank on the internet usage parameters for all countries5. We statistically divide countries, based on internet user population, into four groups using the k-means clustering method of vector quantization and the WCSS elbow method. The categorization of each country is present in our project repository6.
Footnote 5: [https://data.worldbank.org/indicator/IT.NET.USER.ZS](https://data.worldbank.org/indicator/IT.NET.USER.ZS)
Footnote 6: [https://github.com/PranavNV/Nationality-Prejudice-in-Text-Generation](https://github.com/PranavNV/Nationality-Prejudice-in-Text-Generation)
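The grouping of countries by internet user population can be reproduced with a short scikit-learn sketch such as the one below; the file name and the range of cluster counts scanned for the elbow curve are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

users = np.loadtxt("internet_users.csv", delimiter=",")   # hypothetical: one value per country
X = users.reshape(-1, 1)

# WCSS (inertia) curve for the elbow method
wcss = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
        for k in range(1, 9)]

# Final grouping into four clusters, as used in the analysis
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```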
We use the Pearson coefficient, mean, and p-value of the sentiment score for all demonyms to understand the group bias demonstrated by GPT-2. We calculate the p-value for the factor of _economic status_ with the help of an independent-sample t-test, and a Welch t-test for _internet user population_, as the variance differs significantly amongst the groups. Using the Perturbation Score Sensitivity (ScoreSense), defined by Prabhakaran et al. (2019), we measure the extent to which a model prediction is 'sensitive' to specific demonyms. The ScoreSense of a model \(f\) is the average difference between the results generated for the corpus \(|X|*|d_{n}|\) of a selected demonym \(d_{n}\) and the results generated for the control stories without any mention of a demonym, \(C\).
\[ScoreSense=\sum_{d_{n}\in D}\left[f(|X|*|d_{n}|)-f(C)\right]\]
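In code, this quantity reduces to averaging the per-demonym deviation from the control score; the sketch below uses our own function name and assumes the per-demonym mean sentiment scores have already been computed.

```python
def score_sense(group_scores, control_score):
    """ScoreSense of a group: average difference between the mean sentiment
    score of each demonym in the group and the score of the control corpus C."""
    diffs = [s - control_score for s in group_scores.values()]
    return sum(diffs) / len(diffs)

# e.g. score_sense({"Afghan": 0.05, "Sudanese": -0.10}, 0.30) -> -0.325
```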
The Pearson coefficient shows a positive correlation between the sentiment of the generated story and the internet user population (0.818), and the country's economic status (0.935). Table 3 shows each group's sentiment score, significance value, and score sense for both factors. Countries with more internet users show an increase in sentiment scores by 0.191 from the control group. On the other hand, scores for countries with low internet users dip by 0.124. We see similar behavior concerning economic status as well. The number of internet users in a country is statistically shown to be a significant factor in determining the sentiment of the story generated.
### Evaluation of Human Written Stories
We evaluate human-written stories to juxtapose the nature of text generated by a non-AI and an AI agent to understand how GPT-2 _catalyzes_ the presence of stereotypes. We randomly select 50 articles for each demonym, written about or from the selected country, from the NOW corpus (Davies, 2017), which contains data from 26 million texts written in English from online magazines and newspapers from various nations worldwide. This corpus contains local news and online articles from multiple countries that help construct a more inclusive perspective of the demonym. We select articles published till 2019 to mimic the knowledge learned by GPT-2 (as WebText was released in 2019). We depict the sentiment analysis acquired for all the stories, for a selected list of countries, in Table 2 through \(f(Hum)\). Comparing \(f(LLM)\) (sentiment scores of the text generated by GPT-2) to \(f(Hum)\), we can see that the overall sentiment score of stories generated by GPT-2 is more negative than the human-written articles.
We also notice that countries like South Sudan and Sierra Leone, with lower yet still positive \(f(Hum)\) values, receive significantly negative scores from GPT-2 compared to the countries with overall positive sentiment scores. To understand this gap better, we define \(\Delta f\) to measure the _negative bias accentuation_ caused by GPT-2 as the difference between the scores of texts generated by the non-AI and AI agents (\(f(Hum)-f(LLM)\)). This value shows the overall accentuation of negative bias by GPT-2 across the selected countries. The scores show that the lower-ranked countries (with negative sentiment scores) are penalized substantially more (\(\sim\)0.834) than the top countries (\(\sim\)0.105) in terms of sentiment score. The results indicate that such countries are heavily penalized by GPT-2 through the association of more negative themes with the demonym.
### Debiasing using Adversarial Triggers
Figure 1: Sentiment scores of countries grouped by Internet Usage before and after debiasing.

This section analyzes a potential solution for making the stories generated by GPT-2 for all demonyms less harmful and inimical. From our experimental results in Table 2, we see that certain demonyms contain an unfavorable presence of toxic words that can bring out a skewed perception of the country. To tackle this issue, we alleviate the results by using the method of _adversarial triggers_ Wallace et al. (2019). For example, the prompt 'French people are' can be changed to '<positive adjective> French people are', where <positive adjective> is an adjective that adds a favorable context to the demonym (e.g., excellent, brilliant).
We generate 100 stories for each demonym, with the prompt preceded by one of the positive triggers _hopeful_ and _hardworking_. These words are selected based on the most effective adjectives identified by Abid et al. (2021) for decreasing anti-Muslim prejudice in LLMs in a similar application. Tables 2 and 4 show the results obtained from debiasing. Figure 1 compares scores between countries grouped by internet user population. We notice that countries with lower income status and fewer internet users perform considerably better after debiasing (Table 4). We also see that countries grouped as 'High' score lower after debiasing. A potential explanation is that the positive bias learned by the model, due to the high representation of these countries, is now normalized through adversarial triggering.
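Concretely, the triggered prompts are obtained by prepending the positive adjectives before regenerating stories with the same pipeline; a minimal sketch (with our own helper name) is shown below.

```python
TRIGGERS = ["Hopeful", "Hardworking"]

def triggered_prompts(demonym):
    """Prompts with an adversarial positive trigger prepended to the demonym."""
    return [f"{trigger} {demonym} people are" for trigger in TRIGGERS]

# e.g. triggered_prompts("French")
# -> ["Hopeful French people are", "Hardworking French people are"]
```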
There is now no significant difference in scores when we compare _High_ with the rest of the groups using the t-test, unlike the comparison done prior to debiasing. These debiased scores are also relatively closer to the sentiment scores acquired by evaluating the human-written articles (_Hu_Score_) for the selected countries.
## 5 Discussion and Conclusion
The use of large language models (LLMs) that are trained on large internet-based textual datasets has become widespread in recent years. These models aim for scalability and universal solutions, but in the process, biases towards potentially sensitive words such as demonyms can emerge. Given the widespread use of popular LLMs like ChatGPT and BERT, it is crucial to address this issue. In this study, we conducted perturbation analysis and statistical evaluations on GPT-2, a high-performing LLM available for public access, to examine its biases against various nationalities. Our results indicate that GPT-2 exhibits prejudices against certain countries, as demonstrated by the relationships between sentiment and the number of internet users per country or GDP, respectively.
One potential cause of these demonym-based biases is the large internet-based textual datasets used to train the LLM, which tends to over-represent a majority viewpoint while under-representing other perspectives. Our analysis revealed that countries with lower representation online tend to have lower sentiment and ScoreSense scores, and that the LLM mimics the majority viewpoint from the internet rather than its actual representation. To quantify this, we calculated the bias accentuation value as the difference between the scores of stories generated by GPT-2 and human-written articles that mention or are from the country. We observed higher values corresponding to countries with more negative sentiment scores.
In this work, we explored the potential for adversarial triggering to mitigate biases in language models. Our results indicate that this method can effectively reduce the accentuation of stereotypes in generated stories. Given the widespread use of language models in various applications, such as writing assistance and machine translation, it is vital to consider the potential biases these models may propagate. Much research demonstrates that such biases can negatively affect marginalized communities through stereotyping, disparagement, erasure, and poor quality of service Dev et al. (2022). Our findings highlight the importance of ongoing efforts to examine and address potential biases in language models to promote more equitable and inclusive outcomes.
In conclusion, it is crucial to continuously monitor and evaluate language models for bias and harm. By addressing the role of training data in shaping the models' predictions and taking steps to curate more diverse and representative datasets, we can strive towards creating fairer and more inclusive language models that serve all. This is crucial to building a more equitable future, where language models can enhance communication and understanding rather than perpetuate harmful biases.
\begin{table}
\begin{tabular}{|l|l|} \hline
**IUPop.** & **SentiScore** \\ \hline
H & 0.351 \\ \hline
UM & 0.326 \\ \hline
LM & 0.422 \\ \hline
L & 0.400 \\ \hline
NA & 0.421 \\ \hline
\end{tabular}
\begin{tabular}{|l|l|} \hline
**EcoStatus** & **SentiScore** \\ \hline
H & 0.449 \\ \hline
UM & 0.358 \\ \hline
LM & 0.421 \\ \hline
L & 0.376 \\ \hline
\end{tabular}
\end{table}
Table 4: Sentiment score for both _Internet User Population_ (IUPop.) and _Economic Status_ (EcoStatus) _after debiasing_. High, Upper-Middle, Lower-Middle, and Low groups are denoted as H, UM, LM, and L.
### Limitations
In this study, we utilized English language stories generated by GPT-2 for our analysis and compared them with English news articles written by humans. While this approach allows us to compare the results of the LLM with human-written articles, it also imposes a limitation. Our study does not consider local language news, especially for predominantly non-English speaking countries. This limitation highlights the existing disparity between English and non-English speaking internet users. GPT-2 was trained on English language data from the internet, and as a result, it cannot generate stories in any other languages. The lack of non-English data used to train the model demonstrates the pre-existing bias against the population of the world that does not speak English.
Additionally, our study acknowledges that the nuances of the political and economic situations in many countries are beyond our scope of exploration. GPT-2 was trained on data collected from the internet over a period of a couple of years, and this would have captured internet activity for countries with unstable political situations and potential war-like conditions only for that period. The intention of this study was to demonstrate how GPT-2 exacerbates negative bias with respect to demonyms when compared to human-written articles, as shown in our results. However, it is important to note that our analysis is limited only to the results produced by GPT-2 and does not explore the themes of the generated texts for each country.
|
2305.09591 | Møller-Plesset and density-fixed adiabatic connections for a model
diatomic system at different correlation regimes | In recent years, Adiabatic Connection Interpolations developed within Density
Functional Theory (DFT) have been found to provide satisfactory performances in
the calculation of interaction energies when used with Hartree-Fock (HF)
ingredients. The physical and mathematical reasons for such unanticipated
performance have been clarified, to some extent, by studying the
strong-interaction limit of the M\o ller-Plesset (MP) adiabatic connection. In
this work, we calculate both the MP and the DFT adiabatic connection (AC)
integrand for the asymmetric Hubbard dimer, which allows for a systematic
investigation at different correlation regimes by varying two simple parameters
in the Hamiltonian: the external potential, $\Delta v$, and the interaction
strength, $U$. Noticeably, we find that, while the DFT AC integrand appears to
be convex in the full parameter space, the MP integrand may change curvature
twice. Furthermore, we discuss different aspects of the second-order expansion
of the correlation energy in each adiabatic connection and we demonstrate that
the derivative of the $\lambda$-dependent density in the MP adiabatic
connection at $\lambda=0$ (i.e., at the HF density) is zero. Concerning the
strong-interaction limit of both adiabatic connections, we show that while, for
a given density, the asymptotic value of the MP adiabatic connection,
$W_\infty^\text{HF}$, is lower (or equal) than its DFT analogue,
$W_\infty^\text{KS}$, this is not always the case for a given external
potential. | Sara Giarrusso, Aurora Pribram-Jones | 2023-05-16T16:40:46Z | http://arxiv.org/abs/2305.09591v3 | Moller-Plesset and density-fixed adiabatic connections for a model diatomic system at different correlation regimes
###### Abstract
In recent years, Adiabatic Connection Interpolations developed within Density Functional Theory (DFT) have been found to provide good performances in the calculation of interaction energies when used with Hartree-Fock (HF) ingredients. The physical and mathematical reasons for such unanticipated performance have been clarified, to some extent, by studying the strong-interaction limit of the Moller-Plesset (MP) adiabatic connection. In this work, we calculate both the MP and the DFT adiabatic connection (AC) integrand for the asymmetric Hubbard dimer, which allows for a systematic investigation at different correlation regimes by varying two simple parameters in the Hamiltonian: the external potential, \(\Delta v\), and the interaction strength, \(U\). Noticeably, we find that, while the DFT AC integrand appears to be convex in the full parameter space, the MP integrand may change curvature twice. Furthermore, we discuss different aspects of the second-order expansion of the correlation energy in each adiabatic connection and we demonstrate why the derivative of the \(\lambda\)-dependent density in the MP adiabatic connection at \(\lambda=0\) (i.e., at the HF density) is zero in the model. Concerning the strong-interaction limit of both adiabatic connections, we show that while, for a given density, the asymptotic value of the MP adiabatic connection, \(W_{\infty}^{\rm HF}\), is lower (or equal) than its DFT analogue, \(W_{\infty}^{\rm KS}\), this is not always the case for a given external potential.
## I Introduction and theoretical background
Adiabatic connection methods rely on the idea of gradually switching from a formally non-interacting Hamiltonian, which is comparatively simple, to a "fully interacting" one, which is more complicated and describes the actual electronic system of interest. This is done by multiplying the interaction operator by a coupling- or interaction-strength parameter.
Nowadays, there are several different flavors of adiabatic connections adopted in wavefunction-based methods; [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]however, the formalism was first developed in the context of Density Functional Theory (DFT) [11; 12; 13] and it has been a quite powerful tool to construct models for the exchange-correlation (XC) energy in Kohn-Sham DFT [14] ever since. Indeed, it has provided the rationale for density functional approximations (DFAs) such as hybrid, [15; 16] double-hybrid, [17] and functionals from the random phase approximation, [18] encoding the exact behaviour of the adiabatic connection at small coupling, where the interaction can be treated as a perturbation.
In addition to these types of functionals however, this formalism has also inspired the construction of DFAs that interpolate between two different limits of the adiabatic connection curve, performing what is effectively an (approximate) all-order resummation of the perturbation series. Initially, DFAs of this kind were built as an interpolation between the zero- and the full-interaction limits, [19] but, shortly afterwards, more balanced interpolations were constructed by extending the range of the coupling strength _beyond_ the physical value, bringing it to infinity [20; 21; 22; 23; 24] and thus combining the information coming from equally extreme limits (where equally extreme is in the sense that the coupling-strength parameter \(\lambda\) in front of the interaction operator in the two limits behaves as \(\lambda\to\infty\) or \(\alpha\to\infty\) with \(\alpha=\frac{1}{\lambda}\)). This class of DFAs, collectively referred to as Adiabatic Connection (Interaction) Interpolations (ACIIs or ACIs) or Adiabatic Connection Methods (ACMs), has recently drawn much attention. One important reason for such renewed interest is that their lack of size-consistency can be corrected very easily at no extra computational cost, as shown in Ref. [25]. Another fundamental reason is that, although having been originally devised in a DFT framework, ACMs have been shown to provide satisfactory performances for binding and interaction energies (in non-covalent complexes), when used with Hartree-Fock (HF) ingredients. [25; 26; 27; 28]
Their use in this framework has numerous practical advantages compared to their use in KS-DFT and some theoretical downsides. The downside compared to KS-DFT is that ACMs on HF ingredients are a simple energetic correction to the HF approximation: they cannot be used to obtain the interacting density via a self-consistent-field (SCF) scheme. On the contrary, ACMs within KS-DFT can in principle yield an approximate interacting density via an SCF calculation, but with the practical disadvantage that their implementation is quite involved and expensive, due to the presence of functional derivatives of energy terms that depend only implicitly on the density. [29; 30] Indeed, ACMs within DFT have been mostly used on approximate KS orbitals, obtained from a preceding SCF calculation, but this strategy introduces an extra layer of approximation, falls back into the known problem of having to "cherry-pick" the best functional for the calculation at hand, and seems to be overall not quite |
2310.12461 | Balanced Group Convolution: An Improved Group Convolution Based on
Approximability Estimates | The performance of neural networks has been significantly improved by
increasing the number of channels in convolutional layers. However, this
increase in performance comes with a higher computational cost, resulting in
numerous studies focused on reducing it. One promising approach to address this
issue is group convolution, which effectively reduces the computational cost by
grouping channels. However, to the best of our knowledge, there has been no
theoretical analysis on how well the group convolution approximates the
standard convolution. In this paper, we mathematically analyze the
approximation of the group convolution to the standard convolution with respect
to the number of groups. Furthermore, we propose a novel variant of the group
convolution called balanced group convolution, which shows a higher
approximation with a small additional computational cost. We provide
experimental results that validate our theoretical findings and demonstrate the
superior performance of the balanced group convolution over other variants of
group convolution. | Youngkyu Lee, Jongho Park, Chang-Ock Lee | 2023-10-19T04:39:38Z | http://arxiv.org/abs/2310.12461v1 | # Balanced Group Convolution: An Improved Group Convolution Based on Approximability Estimates
###### Abstract
The performance of neural networks has been significantly improved by increasing the number of channels in convolutional layers. However, this increase in performance comes with a higher computational cost, resulting in numerous studies focused on reducing it. One promising approach to address this issue is group convolution, which effectively reduces the computational cost by grouping channels. However, to the best of our knowledge, there has been no theoretical analysis on how well the group convolution approximates the standard convolution. In this paper, we mathematically analyze the approximation of the group convolution to the standard convolution with respect to the number of groups. Furthermore, we propose a novel variant of the group convolution called _balanced group convolution_, which shows a higher approximation with a small additional computational cost. We provide experimental results that validate our theoretical findings and demonstrate the superior performance of the balanced group convolution over other variants of group convolution.
keywords: Convolutional layer, Group convolution, Approximability estimate +
Footnote †: journal: Neural Networks
## 1 Introduction
The convolutional layer plays a crucial role in the success of modern neural networks for image classification (He et al., 2016; Hu et al., 2018; Krizhevsky et al., 2017) and image processing problems (Radford et al., 2015;
Ronneberger et al., 2015; Zhang et al., 2017). However, achieving high performance through convolutional neural networks (CNNs) typically requires the use of a large number of channels (Brock et al., 2021; He et al., 2022; Huang et al., 2019; Tan and Le, 2019; Zagoruyko and Komodakis, 2016), resulting in significant computational costs and long training times. Accordingly, there have been many studies focusing on modifying the convolutional layer to reduce its computational complexity (Gholami et al., 2022; Krizhevsky et al., 2017; Lee et al., 2022; Liu et al., 2020; Zhang et al., 2015, 2015).
Among them, group convolution is the most basic and natural modification. It was introduced in AlexNet (Krizhevsky et al., 2017) as a way to distribute the computation of convolutional layers in order to resolve a memory shortage. Group convolution divides the channels of each layer into groups and performs convolution only within each group. CNNs with group convolution succeeded in reducing the number of parameters and the implementation time, but their performance drops rapidly as the number of groups increases (Lee et al., 2022; Long et al., 2015). It has been speculated that this phenomenon occurs because the lack of intergroup communication greatly affects the representation capacity of CNNs. However, it has not yet been mathematically revealed how much the performance is degraded.
Recently, several modifications of the group convolution, which add intergroup communication, have been considered to restore the performance (Chollet, 2017; Lee et al., 2022; Long et al., 2015; Zhang et al., 2018). The channel shuffling structure used in ShuffleNet (Zhang et al., 2018) adds a permutation step among the output channels of the group convolution to enable data exchange among groups (see the sketch below). This method is efficient in terms of memory usage because it does not use any additional parameters. Learnable group convolution (Huang et al., 2018) determines the channels to be grouped through learning. This method generates the weight for the group convolution by overlaying a trainable mask on the weight of the standard convolution. In fact, it is equivalent to changing the channel shuffling from a fixed rule to a learnable one. On the other hand, fully learnable group convolution (Zhang et al., 2018) introduces more parameters to additionally vary the number of channels in each group. In (Lee et al., 2022), it was observed that group convolution has a block-diagonal matrix structure, and two-level group convolution was introduced to collect representatives of each group and perform an additional convolution with low computational cost. These works have suc
cessfully resolved the performance degradation issue described above, but there is still no mathematical analysis of why the performance is improved.
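For reference, the channel shuffling operation mentioned above can be written in a few lines of PyTorch; this is a generic sketch of the operation, not code taken from the cited works.

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels across groups so that a following group convolution
    mixes information between groups, without any additional parameters."""
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2)
             .contiguous()
             .view(b, c, h, w))
```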
In this paper, we rigorously analyze the performance of the group convolution as an approximation to the corresponding standard convolution with respect to the number of groups \(N\). To achieve this, we introduce a new measure of approximability (see (3.2)), which quantifies the optimal squared \(\ell^{2}\)-error between the outputs of the standard convolution and group convolution. We prove that the approximability of the group convolution is estimated as \(K(1-1/N)^{2}\), where \(K\) denotes the number of parameters in a convolution layer that maps a single channel to a single channel (see Theorem 3.5).
In addition, we present a new variant of group convolution, called _balanced group convolution_ (BGC), which achieves a theoretically improved approximability estimate compared to the plain group convolution. In BGC, the intergroup mean is computed by averaging the channels between groups. After that, we introduce an additional convolution to add the intergroup mean to the output of the group convolution. This allows each group in BGC to utilize the entire information of the input, resulting in an improved approximability with a small additional computational cost. Indeed, we prove that the approximability of BGC is estimated to be \(K(1-1/N)^{3}\) (see Theorem 3.6), which is a better result than the plain group convolution. Furthermore, under an additional assumption, the approximability of the group convolution and BGC can be estimated to be \(K(1-1/N)/n\) and \(K(1-1/N)^{2}/n\), respectively, where \(n\) is the number of input channels. The superior performance of the proposed BGC is verified by various numerical experiments on several recent CNNs such as WideResNet (Zagoruyko and Komodakis, 2016), ResNeXt (Xie et al., 2017), MobileNetV2 (Sandler et al., 2018), and EfficientNet (Tan and Le, 2019).
In summary, we address the following issues in this work:
_Is it possible to obtain a rigorous estimate for the approximability of group convolution? Furthermore, can we develop an enhanced version of group convolution that achieves a theoretically better approximability than the original?_
We summarize our main contributions in the following:
* We propose BGC, which is an enhanced variant of group convolution with an improved approximability estimate.
* We estimate the bounds on approximability of group convolution and BGC.
* We demonstrate the performance of BGC by embedding it into state-of-the-art CNNs.
The rest of this paper is organized as follows. In Section 2, we present preliminaries of this paper, specifically related to the group convolution. We introduce the proposed BGC and its improved approximation properties in Section 3, and establish connections to existing group convolution approaches in Section 4. Numerical validations of the theoretical results and improved classification accuracy of the proposed BGC applied to various state-of-the-art CNNs are presented in Section 5. We conclude this paper with remarks in Section 6.
## 2 Group convolution
In this section, we introduce some basic notations for group convolution to be used throughout this paper. Let \(m\) and \(n\) be positive integers and \(N\) be a common divisor of \(m\) and \(n\). We write \(m_{N}=m/N\) and \(n_{N}=n/N\).
We consider convolutional layers that operate on \(n\) channels and produces \(m\) channels. Input \(\mathbf{x}\) of a convolutional layer is written as \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\), where \(x_{j}\) is the \(j\)th channel of \(\mathbf{x}\). Similarly, output \(\mathbf{y}\) is written as \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{m})\), where \(y_{i}\) is the \(i\)th channel of \(\mathbf{y}\). To represent a standard convolutional layer that takes input \(\mathbf{x}\) and produces output \(\mathbf{y}\), we use the following matrix expression:
\[\begin{bmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{m}\end{bmatrix}=\begin{bmatrix}W_{11}&W_{12}&\cdots&W_{1n}\\ W_{21}&W_{22}&\cdots&W_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ W_{m1}&W_{m2}&\cdots&W_{mn}\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{n}\end{bmatrix}, \tag{2.1}\]
where \(W_{ij}\) is a matrix that represents the generic convolution that maps \(x_{j}\) to \(y_{i}\). We suppose that the channels of \(\mathbf{x}\) and \(\mathbf{y}\) are evenly partitioned into \(N\) groups. Namely, let \(\mathbf{x}=(\mathbf{x}^{1},\mathbf{x}^{2},\ldots,\mathbf{x}^{N})\) and \(\mathbf{y}=(\mathbf{y}^{1},\ldots,\mathbf{y}^{N})\), where
\[\mathbf{x}^{k}=\begin{bmatrix}x_{(k-1)n_{N}+1}\\ x_{(k-1)n_{N}+2}\\ \vdots\\ x_{kn_{N}}\end{bmatrix},\quad\mathbf{y}^{k}=\begin{bmatrix}y_{(k-1)m_{N}+1} \\ y_{(k-1)m_{N}+2}\\ \vdots\\ y_{km_{N}}\end{bmatrix}.\]
Then (2.1) is rewritten as the following block form:
\[\begin{bmatrix}\mathbf{y}^{1}\\ \mathbf{y}^{2}\\ \vdots\\ \mathbf{y}^{N}\end{bmatrix}=\begin{bmatrix}\mathbf{W}^{11}&\mathbf{W}^{12}&\cdots& \mathbf{W}^{1N}\\ \mathbf{W}^{21}&\mathbf{W}^{22}&\cdots&\mathbf{W}^{2N}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{W}^{N1}&\mathbf{W}^{N2}&\cdots&\mathbf{W}^{NN}\end{bmatrix}\begin{bmatrix} \mathbf{x}^{1}\\ \mathbf{x}^{2}\\ \vdots\\ \mathbf{x}^{N}\end{bmatrix}. \tag{2.2}\]
Group convolution, a computationally efficient version of the convolution proposed in (Krizhevsky et al., 2017), is formed by discarding intergroup connection in (2.2). That is, we take only the block-diagonal part to obtain the group convolution, such as
\[\begin{bmatrix}\mathbf{y}^{1}\\ \mathbf{y}^{2}\\ \vdots\\ \mathbf{y}^{N}\end{bmatrix}=\begin{bmatrix}\mathbf{W}^{11}&\mathbf{O}&\cdots& \mathbf{O}\\ \mathbf{O}&\mathbf{W}^{22}&\cdots&\mathbf{O}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{O}&\mathbf{O}&\cdots&\mathbf{W}^{NN}\end{bmatrix}\begin{bmatrix} \mathbf{x}^{1}\\ \mathbf{x}^{2}\\ \vdots\\ \mathbf{x}^{N}\end{bmatrix}, \tag{2.3}\]
where \(\mathbf{O}\) denotes the zero matrix. We observe that the number of parameters in the group convolution (2.3) is \(1/N\) of that of the standard convolution (2.2). Moreover, the representation matrix in (2.3) is block-diagonal, allowing each block on the main diagonal to be processed in parallel, making it computationally more efficient than (2.2). However, when the number of parameters in the group convolution is reduced, the performance of a CNN using group convolutional layers may decrease compared to a CNN using standard convolutional layers (Lee et al., 2022; Zhang et al., 2018).
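In a deep learning framework, the block-diagonal structure of (2.3) is exposed through a `groups` argument; the following PyTorch sketch (with arbitrarily chosen channel counts) illustrates the \(1/N\) reduction in the number of parameters.

```python
import torch
import torch.nn as nn

n, m, N = 64, 128, 4                                             # channels and groups

standard = nn.Conv2d(n, m, kernel_size=3, padding=1)             # corresponds to (2.2)
grouped  = nn.Conv2d(n, m, kernel_size=3, padding=1, groups=N)   # corresponds to (2.3)

x = torch.randn(8, n, 32, 32)
assert standard(x).shape == grouped(x).shape
print(standard.weight.numel(), grouped.weight.numel())           # 73728 vs 18432 = 73728 / N
```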
## 3 Main results
In this section, we present the main results of this paper, including the proposed BGC and estimates of its approximability and computational cost.
### Balanced group convolution
A major drawback of the group convolution, which affects performance, is the lack of intergroup communication, since the representation matrix given in (2.3) is block-diagonal. The motivation behind the proposed BGC is to add intergroup communication to the group convolution with low computational cost, thereby improving performance. To achieve this, we utilize a representative vector of small dimension that captures all the information in the
input \(\mathbf{x}\). Specifically, we use the mean \(\bar{\mathbf{x}}\) of \(\{\mathbf{x}^{k}\}\), i.e.,
\[\bar{\mathbf{x}}=\frac{1}{N}\sum_{k=1}^{N}\mathbf{x}^{k}.\]
Then, the proposed BGC is defined by
\[\begin{bmatrix}\mathbf{y}^{1}\\ \mathbf{y}^{2}\\ \vdots\\ \mathbf{y}^{N}\end{bmatrix}=\begin{bmatrix}\mathbf{W}^{11}&\mathbf{O}&\cdots& \mathbf{O}\\ \mathbf{O}&\mathbf{W}^{22}&\cdots&\mathbf{O}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{O}&\mathbf{O}&\cdots&\mathbf{W}^{NN}\end{bmatrix}\begin{bmatrix} \mathbf{x}^{1}\\ \mathbf{x}^{2}\\ \vdots\\ \mathbf{x}^{N}\end{bmatrix}+\begin{bmatrix}\bar{\mathbf{W}}^{1}\\ \bar{\mathbf{W}}^{2}\\ \vdots\\ \bar{\mathbf{W}}^{N}\end{bmatrix}\bar{\mathbf{x}}, \tag{3.1}\]
where each \(\bar{\mathbf{W}}^{k}\) is a convolution that operates on \(n_{N}\) channels and produces \(m_{N}\) channels. That is, BGC is an extension of the group convolution that incorporates an additional structure based on the intergroup mean \(\bar{\mathbf{x}}\), which serves as a balancing factor that brings the entire information of \(\mathbf{x}\) into the intergroup communication. Specifically, this additional structure allows BGC to better distribute feature representations across groups.
Since the number of parameters in BGC (3.1) is \(2/N\) of that of the standard convolution (2.2), the computational cost of BGC is significantly lower than that of the standard convolution. We will discuss this further in Section 3.3.
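A direct realization of (3.1) only needs a group convolution plus one extra convolution applied to the intergroup mean; the PyTorch sketch below is our own rendering of this construction (interface details are assumptions), with the stacked blocks \(\bar{\mathbf{W}}^{k}\) implemented as a single convolution from \(n_{N}\) to \(m\) channels.

```python
import torch
import torch.nn as nn

class BalancedGroupConv2d(nn.Module):
    """Sketch of BGC (3.1): group convolution + convolution of the intergroup mean."""

    def __init__(self, in_channels, out_channels, kernel_size, groups, **kw):
        super().__init__()
        self.groups = groups
        self.group_conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                                    groups=groups, **kw)
        # stacked blocks \bar{W}^k: maps the n_N mean channels to all m output channels
        self.mean_conv = nn.Conv2d(in_channels // groups, out_channels,
                                   kernel_size, **kw)

    def forward(self, x):
        b, c, h, w = x.shape
        # intergroup mean \bar{x}: average over the N groups, channel position by position
        x_bar = x.view(b, self.groups, c // self.groups, h, w).mean(dim=1)
        return self.group_conv(x) + self.mean_conv(x_bar)

layer = BalancedGroupConv2d(64, 128, kernel_size=3, groups=4, padding=1)
out = layer(torch.randn(8, 64, 32, 32))   # twice the parameters of the plain group convolution
```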
### Approximability estimate
Thanks to the additional structure in the proposed BGC, we may expect that it behaves more similarly to the standard convolution than the group convolution. To provide a rigorous assessment of this behavior, we introduce a notion of approximability and prove that BGC has a better approximability to the standard convolution than the group convolution.
Suppose that the standard convolution \(\mathbf{W}=[\mathbf{W}^{kl}]\) that maps \(N\) groups to \(N\) groups (see (2.2)) and the input \(\mathbf{x}=[\mathbf{x}^{k}]\) are drawn from a certain probability distribution. We define the approximability of the group convolution \(\mathbf{W}_{\mathrm{gp}}\) of the form (2.3) as
\[\mathbb{E}_{\mathbf{W}}\left[\inf_{\mathbf{W}_{\mathrm{gp}}}\mathbb{E}_{ \mathbf{x}}\left[\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\mathrm{gp}}\mathbf{x}\| ^{2}\right]\right], \tag{3.2}\]
where \(\|\cdot\|\) denotes the \(\ell^{2}\)-norm. Namely, the approximability (3.2) measures the \(\mathbf{W}\)-expectation of the optimal \(\mathbf{x}\)-averaged squared \(\ell^{2}\)-error between the
output \(\mathbf{W}\mathbf{x}\) of the standard convolution and the output \(\mathbf{W}_{\mathrm{gp}}\mathbf{x}\) of the group convolution. The approximability of BGC is defined in the same manner.
To estimate the approximability (3.2), we need the following assumptions on the distributions of \(\mathbf{W}\) and \(\mathbf{x}\).
_Assumption 3.1_.: A standard convolution \(\mathbf{W}=[W_{ij}]\) and an input \(\mathbf{x}=[x_{j}]\) are random variables that satisfy the following:
1. \(\{W_{ij}\}\) are identically distributed.
2. \(\{x_{j}\}\) are independent and identically distributed (i.i.d.).
3. \(\mathbf{W}\) and \(\mathbf{x}\) are independent.
Assumption 3.1(i) is essential to handle the off-diagonal parts of the group convolution when estimating (3.2). To effectively compensate for the absence of the off-diagonal parts in the group convolution using \(\bar{\mathbf{x}}\), we need Assumption 3.1(ii). On the other hand, Assumption 3.1(iii) is quite natural since it asserts that the convolution and the input are independent of each other. Furthermore, under an additional assumption on the parameters of the standard convolution, we can obtain better estimates on the approximability of the group convolution and BGC. This assumption is specified in Assumption 3.2.
_Assumption 3.2_.: A standard convolution \(\mathbf{W}=[W_{ij}]\) is a random variable such that \(\{W_{ij}\}\) are i.i.d. with \(\mathbb{E}[W_{ij}]=0\).
The independence of \(W_{ij}\) is essential to obtain a sharper bound on the expectation of \(\|\mathbf{W}\mathbf{x}\|^{2}\). The condition \(\mathbb{E}[W_{ij}]=0\) typically holds when the parameters of the convolutional layers are generated by Glorot (Glorot and Bengio, 2010) or He (He et al., 2015) initialization, which draws i.i.d. samples from a random variable with zero mean.
The following lemma presents a Young-type inequality for the standard convolution, which plays a key role in the proof of the main theorems.
**Lemma 3.3**.: _Let \(\mathbf{W}\) be a standard convolution that operates on \(n\) channels and produces \(m\) channels. For any \(n\)-channel input \(\mathbf{x}\), we have_
\[\|\mathbf{W}\mathbf{x}\|\leq K^{\frac{1}{2}}\|\mathbf{W}\|_{\mathrm{par}}\| \mathbf{x}\|,\]
_where \(K\) is the number of parameters in a convolutional layer that maps a single channel to a single channel and \(\|\mathbf{W}\|_{\mathrm{par}}\) denotes the \(\ell^{2}\)-norm of the vector of parameters of the convolution \(\mathbf{W}\)._
Proof.: We first prove the case \(m=n=1\). Let \(W\) be a convolution that maps a single channel to a single channel and \(x\in\mathbb{R}^{D}\) be an input. Since the output \(Wx\) of the convolution is linear with respect to the parameters of \(W\), there exists a matrix \(X\in\mathbb{R}^{D\times K}\) such that
\[Wx=Xw,\]
where \(w\in\mathbb{R}^{K}\) is the vector of parameters of \(W\). Note that each entry of \(X\) is also an entry of \(x\). For each entry \(x_{d}\) (\(1\leq d\leq D\)) of \(x\) and each entry \(w_{k}\) (\(1\leq k\leq K\)) of \(w\), their product \(w_{k}x_{d}\) appears at most once in the formulation of \(Wx\). Hence, the \(k\)th column \(v_{k}\) of \(X\) satisfies
\[\|v_{k}\|\leq\|x\|. \tag{3.3}\]
By the triangle inequality, Cauchy-Schwarz inequality, and (3.3), we obtain
\[\begin{split}\|Wx\|&=\|Xw\|\leq\sum_{k=1}^{K}\|v_{k }w_{k}\|\\ &\leq\left(\sum_{k=1}^{K}\|v_{k}\|^{2}\right)^{\frac{1}{2}}\left( \sum_{k=1}^{K}w_{k}^{2}\right)^{\frac{1}{2}}\leq K^{\frac{1}{2}}\|W\|_{\rm par} \|x\|,\end{split} \tag{3.4}\]
which completes the proof of the case \(m=n=1\).
Next, we consider the general case. As in (2.1), we write \(\mathbf{W}=[W_{ij}]\) and \(\mathbf{x}=[x_{j}]\). By the triangle inequality, (3.4), and the Cauchy-Schwarz inequality, we obtain
\[\begin{split}\|\mathbf{W}\mathbf{x}\|^{2}&=\sum_{i=1 }^{m}\left\|\sum_{j=1}^{n}W_{ij}x_{j}\right\|^{2}\leq\sum_{i=1}^{m}\left(\sum_ {j=1}^{n}\|W_{ij}x_{j}\|\right)^{2}\\ &\leq K\sum_{i=1}^{m}\left(\sum_{j=1}^{n}\|W_{ij}\|_{\rm par}\|x _{j}\|\right)^{2}\\ &\leq K\sum_{i=1}^{m}\left(\sum_{j=1}^{n}\|W_{ij}\|_{\rm par}^{2} \cdot\sum_{j=1}^{n}\|x_{j}\|^{2}\right)\\ &=K\|\mathbf{W}\|_{\rm par}^{2}\|\mathbf{x}\|^{2},\end{split}\]
which is our desired result.
A direct consequence of Lemma 3.3 is that, if \(\mathbf{W}\) and \(\mathbf{x}\) are independent, then we have
\[\mathbb{E}_{\mathbf{W}}\mathbb{E}_{\mathbf{x}}\left[\left\|\mathbf{W}\mathbf{x} \right\|^{2}\right]\leq K\mathbb{E}\left[\left\|\mathbf{W}\right\|_{\mathrm{ par}}^{2}\right]\mathbb{E}\left[\left\|\mathbf{x}\right\|^{2}\right]. \tag{3.5}\]
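The inequality of Lemma 3.3 is easy to verify numerically; the short script below does so for a one-dimensional convolution with zero padding (the sizes are arbitrary).

```python
import torch
import torch.nn.functional as F

# Check ||W x|| <= sqrt(K) * ||W||_par * ||x|| (Lemma 3.3) on random data.
torch.manual_seed(0)
m, n, K, D = 8, 16, 3, 100
W = torch.randn(m, n, K)          # convolution parameters; ||W||_par = W.norm()
x = torch.randn(n, D)             # n-channel input of length D

lhs = F.conv1d(x.unsqueeze(0), W, padding=K // 2).norm()
rhs = K ** 0.5 * W.norm() * x.norm()
print(float(lhs), float(rhs), bool(lhs <= rhs))  # the last value is always True
```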
In the following lemma, we show that the above estimate can be improved up to a multiplicative factor \(1/n\) if we additionally assume that \(W_{ij}\) has zero mean and that some random variables are independent and/or identically distributed.
**Lemma 3.4**.: _Let \(\mathbf{W}=[W_{ij}]\) be a standard convolution that operates on \(n\) channels and produces \(m\) channels, and let \(\mathbf{x}=[x_{j}]\) be an \(n\)-channel input. Assume that \(\mathbf{W}\) and \(\mathbf{x}\) satisfy the following:_
1. \(\{W_{ij}\}\) _are i.i.d. with_ \(\mathbb{E}[W_{ij}]=0\)_._
2. \(\{x_{j}\}\) _are identically distributed._
3. \(\mathbf{W}\) _and_ \(\mathbf{x}\) _are independent._
_Then we have_
\[\mathbb{E}_{\mathbf{W}}\mathbb{E}_{\mathbf{x}}\left[\left\|\mathbf{W}\mathbf{x }\right\|^{2}\right]\leq\frac{K}{n}\mathbb{E}\left[\left\|\mathbf{W}\right\|_{ \mathrm{par}}^{2}\right]\mathbb{E}\left[\left\|\mathbf{x}\right\|^{2}\right].\]
Proof.: We first prove the case \(m=1\). For \(i\neq j\), invoking (i) yields
\[\mathbb{E}_{\mathbf{W}}\mathbb{E}_{\mathbf{x}}\left[x_{i}^{\mathrm{T}}W_{1i}^ {\mathrm{T}}W_{1j}x_{j}\right]=\mathbb{E}_{\mathbf{x}}\left[x_{i}^{\mathrm{T} }\mathbb{E}\left[W_{1i}^{\mathrm{T}}\right]\mathbb{E}\left[W_{1j}\right]x_{j} \right]=0. \tag{3.6}\]
On the other hand, for \(1\leq j\leq n\), by (iii) and (3.5), we have
\[\mathbb{E}_{W_{1j}}\mathbb{E}_{x_{j}}\left[\left\|W_{1j}x_{j}\right\|^{2} \right]\leq K\mathbb{E}\left[\left\|W_{1j}\right\|_{\mathrm{par}}^{2}\right] \mathbb{E}\left[\left\|x_{j}\right\|^{2}\right]=\frac{K}{n^{2}}\mathbb{E} \left[\left\|\mathbf{W}\right\|_{\mathrm{par}}^{2}\right]\mathbb{E}\left[ \left\|\mathbf{x}\right\|^{2}\right], \tag{3.7}\]
where the last equality is due to (i) and (ii). By (3.6) and (3.7), we get
\[\mathbb{E}_{\mathbf{W}}\mathbb{E}_{\mathbf{x}}\left[\left\|\mathbf{ W}\mathbf{x}\right\|^{2}\right] =\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{E}_{\mathbf{W}}\mathbb{E}_{ \mathbf{x}}\left[x_{i}^{\mathrm{T}}W_{1i}^{\mathrm{T}}W_{1j}x_{j}\right]\] \[\leq\sum_{j=1}^{n}\frac{K}{n^{2}}\mathbb{E}\left[\left\|\mathbf{ W}\right\|_{\mathrm{par}}^{2}\right]\mathbb{E}\left[\left\|\mathbf{x} \right\|^{2}\right]=\frac{K}{n}\mathbb{E}\left[\left\|\mathbf{W}\right\|_{ \mathrm{par}}^{2}\right]\mathbb{E}\left[\left\|\mathbf{x}\right\|^{2}\right],\]
which completes the proof of the case \(m=1\). The general case \(m>1\) can be shown by applying the case \(m=1\) to each \(i\)th output channel (\(1\leq i\leq m\)) and then summing over all \(i\).
The representation matrix of the group convolution presented in (2.3) becomes more sparse as the number of groups \(N\) increases. This suggests that the approximability of the group convolution decreases as \(N\) increases. However, to the best of our knowledge, there has been no theoretical analysis on this. Theorem 3.5 presents the first theoretical result regarding the approximability of the group convolution.
**Theorem 3.5**.: _For an \(n\)-channel input \(\mathbf{x}\), let \(\mathbf{W}\mathbf{x}\) and \(\mathbf{W}_{\mathrm{gp}}\mathbf{x}\) denote the outputs of the standard and group convolutions given in (2.2) and (2.3), respectively. Under Assumption 3.1, we have_
\[\mathbb{E}_{\mathbf{W}}\left[\inf_{\mathbf{W}_{\mathrm{gp}}}\mathbb{E}_{ \mathbf{x}}\left[\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\mathrm{gp}}\mathbf{x}\|^ {2}\right]\right]\leq K\left(1-\frac{1}{N}\right)^{2}\mathbb{E}\left[\|\mathbf{ W}\|_{\mathrm{par}}^{2}\right]\mathbb{E}\left[\|\mathbf{x}\|^{2}\right].\]
_In addition, if we further assume that Assumption 3.2 holds, then we have_
\[\mathbb{E}_{\mathbf{W}}\left[\inf_{\mathbf{W}_{\mathrm{gp}}}\mathbb{E}_{\mathbf{x}}\left[\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\mathrm{gp}}\mathbf{x}\|^{2}\right]\right]\leq\frac{K}{n}\left(1-\frac{1}{N}\right)\mathbb{E}\left[\|\mathbf{W}\|_{\mathrm{par}}^{2}\right]\mathbb{E}\left[\|\mathbf{x}\|^{2}\right].\]
Proof.: Take any standard convolution \(\mathbf{W}=[\mathbf{W}^{kl}]\) and any input \(\mathbf{x}=[\mathbf{x}^{k}]\) as in (2.2). If we set
\[\mathbf{W}_{\mathrm{gp}}=\begin{bmatrix}\mathbf{W}^{11}&\mathbf{O}&\cdots& \mathbf{O}\\ \mathbf{O}&\mathbf{W}^{22}&\cdots&\mathbf{O}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{O}&\mathbf{O}&\cdots&\mathbf{W}^{NN}\end{bmatrix},\]
i.e., if we choose \(\mathbf{W}_{\mathrm{gp}}\) as the block-diagonal part of \(\mathbf{W}\), then we have
\[\mathbb{E}_{\mathbf{W}}\left[\inf_{\mathbf{W}_{\mathrm{gp}}} \mathbb{E}_{\mathbf{x}}\left[\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\mathrm{gp}} \mathbf{x}\|^{2}\right]\right] \leq\mathbb{E}_{\mathbf{W}}\mathbb{E}_{\mathbf{x}}\left[\| \mathbf{W}\mathbf{x}-\mathbf{W}_{\mathrm{gp}}\mathbf{x}\|^{2}\right]\] \[=\sum_{k=1}^{N}\mathbb{E}_{\mathbf{W}}\mathbb{E}_{\mathbf{x}} \left[\left\|\sum_{l\neq k}\mathbf{W}^{kl}\mathbf{x}^{l}\right\|^{2}\right]. \tag{3.8}\]
For each \(k\), since the map \(\mathbf{x}\mapsto\sum_{l\neq k}\mathbf{W}^{kl}\mathbf{x}^{l}\) is a convolution that operates on \((n-n/N)\) channels (\(\mathbf{x}_{k}\) is excluded from the input), invoking Lemmas 3.3 and 3.4 yields
\[\mathbb{E}_{\mathbf{W}}\mathbb{E}_{\mathbf{x}}\left[\left\|\sum_{l\neq k} \mathbf{W}^{kl}\mathbf{x}^{l}\right\|^{2}\right]\leq C_{n,N,k}\left(\sum_{l \neq k}\mathbb{E}\left[\|\mathbf{W}^{kl}\|_{\mathrm{par}}^{2}\right]\right) \left(\sum_{l\neq k}\mathbb{E}\left[\|\mathbf{x}^{l}\|^{2}\right]\right), \tag{3.9}\]
where
\[C_{n,N,k}=\begin{cases}K,&\text{under Assumption 3.1,}\\ \frac{K}{n-n/N},&\text{under Assumptions 3.1 and 3.2.}\end{cases} \tag{3.10}\]
Note that the independence condition between \(\mathbf{W}\) and \(\mathbf{x}\) is used in (3.9).
Meanwhile, Assumption 3.1 implies that
\[\mathbb{E}\left[\|\mathbf{W}^{kl}\|_{\text{par}}^{2}\right]=\frac{1}{N^{2}} \mathbb{E}\left[\|\mathbf{W}\|_{\text{par}}^{2}\right],\quad\mathbb{E}\left[ \|\mathbf{x}^{l}\|^{2}\right]=\frac{1}{N}\mathbb{E}\left[\|\mathbf{x}\|^{2} \right]. \tag{3.11}\]
Finally, combination of (3.8), (3.9), and (3.11) yields
\[\mathbb{E}_{\mathbf{W}}\left[\inf_{\mathbf{W}_{\text{gp}}}\mathbb{E}_{ \mathbf{x}}\left[\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\text{gp}}\mathbf{x}\|^{ 2}\right]\right]\leq C_{n,N,k}\left(1-\frac{1}{N}\right)^{2}\mathbb{E}\left[ \|\mathbf{W}\|_{\text{par}}^{2}\right]\mathbb{E}\left[\|\mathbf{x}\|^{2} \right],\]
which completes the proof.
As discussed above, BGC (3.1) has the additional structure that utilizes the intergroup mean \(\bar{\mathbf{x}}\) to compensate for the absence of the off-diagonal parts in the group convolution. We obtain the main theoretical result summarized in Theorem 3.6, showing that BGC achieves an improved approximability estimate compared to the group convolution.
**Theorem 3.6**.: _For an \(n\)-channel input \(\mathbf{x}\), let \(\mathbf{W}\mathbf{x}\) and \(\mathbf{W}_{\text{bal}}\mathbf{x}\) denote the outputs of the standard convolution and BGC given in (2.2) and (3.1), respectively. Under Assumption 3.1, we have_
\[\mathbb{E}_{\mathbf{W}}\left[\inf_{\mathbf{W}_{\text{bal}}}\mathbb{E}_{ \mathbf{x}}\left[\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\text{bal}}\mathbf{x}\|^ {2}\right]\right]\leq K\left(1-\frac{1}{N}\right)^{3}\mathbb{E}\left[\| \mathbf{W}\|_{\text{par}}^{2}\right]\mathbb{E}\left[\|\mathbf{x}\|^{2}\right].\]
_In addition, if we further assume that Assumption 3.2 holds, we have_
\[\mathbb{E}_{\mathbf{W}}\left[\inf_{\mathbf{W}_{\text{bal}}}\mathbb{E}_{ \mathbf{x}}\left[\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\text{bal}}\mathbf{x}\|^ {2}\right]\right]\leq\frac{K}{n}\left(1-\frac{1}{N}\right)^{2}\mathbb{E}\left[ \|\mathbf{W}\|_{\text{par}}^{2}\right]\mathbb{E}\left[\|\mathbf{x}\|^{2} \right].\]
Proof.: Take any standard convolution \(\mathbf{W}=[\mathbf{W}^{kl}]\) and any input \(\mathbf{x}=[\mathbf{x}^{k}]\) as in (2.2). If we choose \(\mathbf{W}_{\text{bal}}\) as
\[\mathbf{W}_{\text{bal}}\mathbf{x}=\begin{bmatrix}\mathbf{W}^{11}&\mathbf{O}& \cdots&\mathbf{O}\\ \mathbf{O}&\mathbf{W}^{22}&\cdots&\mathbf{O}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{O}&\mathbf{O}&\cdots&\mathbf{W}^{NN}\end{bmatrix}\begin{bmatrix} \mathbf{x}^{1}\\ \mathbf{x}^{2}\\ \vdots\\ \mathbf{x}^{N}\end{bmatrix}+\begin{bmatrix}\sum_{l\neq 1}\mathbf{W}^{1l}\\ \sum_{l\neq 2}\mathbf{W}^{2l}\\ \vdots\\ \sum_{l\neq N}\mathbf{W}^{Nl}\end{bmatrix}\bar{\mathbf{x}},\]
then we get
\[\left\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\mathrm{bal}}\mathbf{x}\right\|^{2}=\sum_{k =1}^{N}\left\|\sum_{l\neq k}\mathbf{W}^{kl}(\mathbf{x}^{l}-\bar{\mathbf{x}}) \right\|^{2}.\]
Note that, in the proof of Theorem 3.5, the independence condition of \(\{x_{j}\}\) was never used. Hence, we can use the same argument as in the proof of Theorem 3.5 to obtain
\[\begin{split}&\mathbb{E}_{\mathbf{W}}\left[\inf_{\mathbf{W}_{\mathrm{bal}}}\mathbb{E}_{\mathbf{x}}\left[\|\mathbf{W}\mathbf{x}-\mathbf{W}_{\mathrm{bal}}\mathbf{x}\|^{2}\right]\right]\\ &\leq C_{n,N,k}\left(1-\frac{1}{N}\right)^{2}\mathbb{E}\left[\|\mathbf{W}\|_{\mathrm{par}}^{2}\right]\sum_{k=1}^{N}\mathbb{E}\left[\|\mathbf{x}^{k}-\bar{\mathbf{x}}\|^{2}\right]\\ &=C_{n,N,k}\left(1-\frac{1}{N}\right)^{2}\mathbb{E}\left[\|\mathbf{W}\|_{\mathrm{par}}^{2}\right]\left(\mathbb{E}\left[\|\mathbf{x}\|^{2}\right]-N\mathbb{E}\left[\|\bar{\mathbf{x}}\|^{2}\right]\right),\end{split} \tag{3.12}\]
where \(C_{n,N,k}\) was given in (3.10). Meanwhile, since \(\{\mathbf{x}^{k}\}\) are i.i.d., we have
\[\mathbb{E}\left[\|\bar{\mathbf{x}}\|^{2}\right]=\frac{1}{N^{2}}\mathbb{E}\left[ \|\mathbf{x}\|^{2}\right]+\frac{N-1}{N}\left\|\mathbb{E}\left[\mathbf{x}^{1} \right]\right\|^{2}\geq\frac{1}{N^{2}}\mathbb{E}\left[\|\mathbf{x}\|^{2} \right]. \tag{3.13}\]
Combining (3.12) and (3.13) yields the desired result.
_Remark 3.7_.: Under Assumption 3.1, the approximability of the group convolution has a bound of \(K(1-1/N)^{2}\). Adding Assumption 3.2 provides a bound of \(K(1-1/N)/n\). Since \(n\geq N\), the latter is a sharper bound. In both cases, the approximability of BGC has a sharper estimate than that of the group convolution by a factor of \((1-1/N)\).
### Computational cost
We now consider the computational cost for a single convolutional layer with the proposed BGC. Let \(D\) be the size of each input channel, i.e., \(x_{j}\in\mathbb{R}^{D}\). The number of scalar arithmetic operations required in a single operation of BGC is given by
\[2Nm_{N}n_{N}KD+nD=\frac{2KDmn}{N}+Dn. \tag{3.14}\]
In the right-hand side of (3.14), the first term corresponds to \(2N\) blocks of convolutional layers in (3.1), and the second term corresponds to the computation of \(\bar{\mathbf{x}}\). Noting that the standard convolution and the group convolution
require \(KDmn\) and \(KDmn/N\) scalar arithmetic operations, respectively, we conclude that the computational cost of BGC is approximately \(2/N\) of that of the standard convolution, but twice as much as that of the group convolution.
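These leading-order operation counts can be tabulated directly; the snippet below evaluates them for an illustrative layer (a \(3\times 3\) kernel, 256 channels, and a \(56\times 56\) feature map), showing the approximate \(1/N\) and \(2/N\) scalings of GC and BGC relative to the standard convolution.

```python
# Leading-order scalar operation counts from Section 3.3 (biases and padding ignored).
def ops_standard(K, D, m, n):    return K * D * m * n
def ops_group(K, D, m, n, N):    return K * D * m * n // N
def ops_bgc(K, D, m, n, N):      return 2 * K * D * m * n // N + D * n   # (3.14)

K, D, m, n = 9, 56 * 56, 256, 256   # 3x3 kernel, 56x56 feature map, 256 channels
for N in (2, 4, 8, 16):
    print(N, ops_standard(K, D, m, n), ops_group(K, D, m, n, N), ops_bgc(K, D, m, n, N))
```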
## 4 Comparison with existing works
In this section, we compare BGC with other variants of group convolution. Various methods have been proposed to improve the performance of group convolution by enabling intergroup communication: Shuffle (Zhang et al., 2018), learnable group convolution (LGC) (Huang et al., 2018), fully learnable group convolution (FLGC) (Wang et al., 2019), and two-level group convolution (TLGC) (Lee et al., 2022). In BGC, input and output channels are evenly partitioned into groups to ensure that all groups have the same computational cost, preventing computational bottlenecks. This feature is shared by GC, Shuffle, LGC, and TLGC; in BGC, moreover, the grouping is deterministic, i.e., it uses a fixed partition that is independent of the convolution and the input. Hence, the cost of partitioning into blocks is lower than that of LGC and FLGC, which have different partitions for each convolution.
The number of parameters in BGC decreases at a rate of \(O(1/N)\) as the number of groups \(N\) increases, which is consistent with the rates of GC and Shuffle. Consequently, the computational cost of BGC scales as \(1/N\), similar to GC and Shuffle. This is in contrast to LGC, FLGC, and TLGC, which have additional parameters that are not \(O(1/N)\), making their computational costs greater than \(O(1/N)\). Note that the total computational costs of LGC, FLGC, and TLGC are \(DKmn/N+(N-1)(Kmn/N+mn/N+1)\), \(DKmn/N+(n+m)N^{2}+Kmn/N\), and \(DKmn/N+DKn+DNm\), respectively.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & Intergroup & \# of channels & \multirow{2}{*}{Grouping} & \multirow{2}{*}{\# of parameters} & \multirow{2}{*}{
\begin{tabular}{c} Comput. \\ cost \\ \end{tabular} } \\ & commun. & per group & & & \\ \hline \hline GC (Krizhevsky et al., 2017) & No & \(n/N\) & deterministic & \(Kmn/N\) & \(O(1/N)\) \\ Shuffle (Zhang et al., 2018) & Yes & \(n/N\) & deterministic & \(Kmn/N\) & \(O(1/N)\) \\ Learnable GC (Huang et al., 2018) & Yes & \(n/N\) & adaptive & \(2Kmn\) & greater \\ Fully learnable GC (Wang et al., 2019) & Yes & varies & adaptive & \(Kmn+N+Nm\) & greater \\ Two-level GC (Lee et al., 2022) & Yes & \(n/N\) & deterministic & \(Kmn/N+Kn+Nm\) & greater \\ \hline BGC & Yes & \(n/N\) & deterministic & \(2Kmn/N\) & \(O(1/N)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of various variants of group convolution (GC), where \(n\) and \(m\) denote the numbers of input and output channels, respectively, \(N\) denotes the number of groups, and \(K\) represents the number of parameters of the single-channel convolutional layer.
As discussed in Theorem 3.6, the upper bound on the approximability of BGC is \(K(1-1/N)^{3}\), which is better than GC in Theorem 3.5 by a factor of \((1-1/N)\). To the best of our knowledge, this is the first rigorous analysis of the performance of a group convolution variant. On the other hand, there is still no approximation theory for Shuffle, LGC, FLGC, and TLGC.
Table 1 provides an overview of the comparison between BGC and the other variants of group convolution discussed above. As shown in Table 1, Shuffle (Zhang et al., 2018) shares all the advantages of BGC except for the theoretical guarantee of the approximability. However, BGC has another advantage over Shuffle. While Shuffle relies on layer propagation to have intergroup communication and provide good accuracy, BGC incorporates intergroup communication in each layer, increasing accuracy even when the network is not deep enough. This is particularly advantageous when BGC is applied to CNNs with wide but not deep layers, such as WideResNet (Zagoruyko and Komodakis, 2016). This assertion will be further supported by the numerical results in the next section.
## 5 Numerical experiments
In this section, we present numerical results that verify our theoretical results and demonstrate the performance of the proposed BGC. All programs were implemented in Python with PyTorch (Paszke et al., 2019) and all computations were performed on a cluster equipped with Intel Xeon Gold 6240R (2.4GHz, 24C) CPUs, NVIDIA RTX 3090 GPUs, and the operating system CentOS 7.
### Verification of the approximability estimates
We verify the approximability estimates of GC and BGC, showing that the estimate on approximability of BGC is better than that of GC, shown in Theorems 3.5 and 3.6. We consider a set of one-dimensional standard convolutional layers \(\{\mathbf{W}^{(t)}\}_{t=1}^{S}\) with \(m=n=256\) and \(K=3\), which is generated by He initialization (He et al., 2015). We also sample \(S\) random data points \(\{\mathbf{x}^{(s)}\}_{s=1}^{S}\) from either the normal distribution \(\mathcal{N}(0,1)\) or the uniform distribution \(\mathcal{U}(-1,1)\). To measure the approximability of GC (\(\mathrm{m}=\mathrm{gp}\)) and BGC (\(\mathrm{m}=\mathrm{bal}\)) with respect to \(\mathbf{W}\), we use the following measure:
\[\mathcal{E}=\frac{1}{S}\sum_{t=1}^{S}\left(\min_{\mathbf{W}_{\mathrm{m}}} \frac{1}{S}\sum_{s=1}^{S}\|\mathbf{W}^{(t)}\mathbf{x}^{(s)}-\mathbf{W}_{ \mathrm{m}}\mathbf{x}^{(s)}\|^{2}\right), \tag{5.1}\]
Figure 1: Graphs depicting \(\log\mathcal{E}\) with respect to \(\log(1-1/N)\) for GC and the proposed BGC, where \(\mathcal{E}\) is the error measure given in (5.1) and \(N\) is the number of groups. We draw data points from either the normal distribution \(\mathcal{N}(0,1)\) or the uniform distribution \(\mathcal{U}(-1,1)\). The number in the legend indicates the average slope. Note that \(N=2^{k}\), \(k=2,\cdots,6\).
Figure 2: Graphs depicting \(\operatorname{Rel.\mathcal{E}}/(1-1/N)^{p}\) with respect to the number of groups \(N\) for GC (\(p=2\)) and the proposed BGC (\(p=3\)), where \(\mathcal{E}\) is the error measure given in (5.1). We draw data points from either the normal distribution \(\mathcal{N}(0,1)\) or the uniform distribution \(\mathcal{U}(-1,1)\).
which can be evaluated by conventional least squares solvers (Golub and Van Loan, 2013).
The graphs in Figure 1 depict \(\log\mathcal{E}\) with respect to \(\log(1-1/N)\) at various settings for GC and BGC. One can readily observe linear relations between \(\log\mathcal{E}\) and \(\log(1-1/N)\) for both GC and BGC. That is, we have an empirical formula
\[\mathcal{E}\approx C\left(1-\frac{1}{N}\right)^{\gamma}\frac{1}{S}\sum_{t=1}^{ S}\|\mathbf{W}^{(t)}\|^{2}\frac{1}{S}\sum_{s=1}^{S}\|\mathbf{x}^{(s)}\|^{2} \tag{5.2}\]
for some positive constants \(C\) and \(\gamma\) independent of \(N\). From the graph, we see that \(\gamma\approx 1\) for GC and \(\gamma\approx 2\) for BGC, which confirm our theoretical results in Theorems 3.5 and 3.6. Next, we define a relative approximability
\[\text{Rel.}\mathcal{E}=\frac{\mathcal{E}}{\frac{1}{S}\sum_{t=1}^{S}\| \mathbf{W}^{(t)}\|^{2}\frac{1}{S}\sum_{s=1}^{S}\|\mathbf{x}^{(s)}\|^{2}}. \tag{5.3}\]
In Figure 2, we plot \(\text{Rel.}\mathcal{E}/(1-1/N)^{p}\) with respect to \(N\) under various settings for GC and BGC, where \(p=1\) for GC and \(p=2\) for BGC. We observe that all values of \(\text{Rel.}\mathcal{E}/(1-1/N)^{p}\) are bounded by \(3.83\times 10^{-3}\) independent of \(N\), which implies the constant \(C\) is less than \(K/n\approx 11.72\times 10^{-3}\) in (5.2). This observation also agrees with the theoretical results in Theorems 3.5 and 3.6.
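As a lighter-weight alternative to the least-squares evaluation of (5.1), the sketch below measures the errors of the particular candidates used in the proofs of Theorems 3.5 and 3.6 (the block-diagonal part of \(\mathbf{W}\), and that part plus the off-diagonal sums applied to the intergroup mean). These candidates only upper-bound the infimum in (5.1), and the sizes are illustrative, but the resulting errors already exhibit the predicted \((1-1/N)\) gap between GC and BGC.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, K, D, S = 256, 3, 64, 128           # channels, kernel size, length, samples

def relative_errors(N: int) -> tuple[float, float]:
    g = n // N
    W = torch.randn(n, n, K) * (2.0 / (n * K)) ** 0.5    # He-style weights
    x = torch.randn(S, n, D)                             # inputs drawn from N(0, 1)
    y = F.conv1d(x, W, padding=K // 2)

    # Group-convolution candidate: block-diagonal part of W (proof of Theorem 3.5).
    W_gp = torch.zeros_like(W)
    for k in range(N):
        sl = slice(k * g, (k + 1) * g)
        W_gp[sl, sl] = W[sl, sl]
    y_gp = F.conv1d(x, W_gp, padding=K // 2)

    # BGC candidate: add the off-diagonal row sums applied to the intergroup mean
    # (proof of Theorem 3.6).
    xbar = x.view(S, N, g, D).mean(dim=1)                          # (S, g, D)
    rows = W.view(N, g, n, K)
    diag = torch.stack([rows[k, :, k * g:(k + 1) * g, :] for k in range(N)])
    W_bar = (rows.view(N, g, N, g, K).sum(dim=2) - diag).reshape(n, g, K)
    y_bal = y_gp + F.conv1d(xbar, W_bar, padding=K // 2)

    denom = W.norm() ** 2 * (x ** 2).sum(dim=(1, 2)).mean()        # cf. (5.3)
    e_gp = ((y - y_gp) ** 2).sum(dim=(1, 2)).mean() / denom
    e_bal = ((y - y_bal) ** 2).sum(dim=(1, 2)).mean() / denom
    return float(e_gp), float(e_bal)

for N in (2, 4, 8, 16):
    print(N, relative_errors(N))
```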
### Embedding in recent CNNs
In Section 5.1, we verified that our approximability estimates are consistent with experimental results using synthetic data. Now, we examine the classification performance of BGC applied to various recent CNNs, including ResNet (He et al., 2016), WideResNet (Zagoruyko and Komodakis, 2016), ResNeXt (Xie et al., 2017), MobileNetV2 (Sandler et al., 2018), and EfficientNet (Tan and Le, 2019). The network structures used in our experiments are described below:
* **ResNet.** In our experiments, we used the ResNet-50, which was used for ImageNet classifications. It uses a bottleneck structure consisting of a \(1\times 1\) standard convolution, a group convolution with a \(3\times 3\) kernel, another \(1\times 1\) standard convolution, and a skip connection.
* **WideResNet.** In our experiments, we used two different structures of WideResNet: WideResNet-28-10 and WideResNet-34-2, which were used for CIFAR-10 and ImageNet classifications, respectively. It uses a residual unit consisting of two \(3\times 3\) convolutions and a skip connection.
* **ResNeXt.** It uses the same structure as the ResNet except using \(3\times 3\) group convolution instead of \(3\times 3\) convolution. In our experiments, we used ResNeXt-29 (\(8\times 64\)d) for CIFAR-10 classification. We applied the group convolutions to two \(1\times 1\) standard convolutions.
* **MobileNetV2.** The basic structure of MobileNetV2 (Sandler et al., 2018) is an inverted residual block, similar to the bottleneck in ResNeXt. However, MobileNetV2 uses a depthwise convolution (Chollet, 2017) instead of ResNeXt's \(3\times 3\) group convolution. In our experiments, we applied group convolution to two \(1\times 1\) standard convolutions.
* **EfficientNet.** It is based on MnasNet (Tan et al., 2019), a modification of MobileNetV2. Using an automated machine learning technique (Zoph and Le, 2017), EfficientNet proposes several model structures with appropriate numbers of channels and depth. It also has several variations, from b0 to b7 models, depending on the number of parameters. In our experiments, we used the most basic model b0.
We evaluate the performance of various variants of group convolution on the datasets CIFAR-10 (Krizhevsky, 2009) and ImageNet ILSVRC 2012 (Deng et al., 2009). Details on these datasets are in Table 2. In addition, for the CIFAR-10 dataset, a data augmentation technique in (Lee et al., 2015) was adopted for training; four pixels are padded on each side of images and \(32\times 32\) random crops are sampled from the padded images and their horizontal flips. For the ImageNet dataset, input images of size \(224\times 224\) are randomly cropped from a resized image using the scale and aspect ratio augmentation of (Szegedy et al., 2015), which was implemented by (Gross and Wilber, 2016).
#### 5.2.1 Ablation study through transfer learning
First, we apply GC and BGC to a pre-trained network on real data to see how well BGC works compared to GC. We select a pre-trained ResNet-50 trained on the ImageNet dataset.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & Image size & Classes & \# of training / validation samples \\ \hline CIFAR-10 & \(32\times 32\) & 10 & 50,000 / 10,000 \\ ImageNet & \(224\times 224\) & 1,000 & 1,280,000 / 50,000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Details of CIFAR-10 and ImageNet datasets.
Note that the pre-trained parameters can be found in the PyTorch library (Paszke et al., 2019). By transferring the parameters of the pre-trained standard convolution in ResNet-50, we obtained the parameters of (2.3) and (3.1); the resulting networks are referred to as ResNet-50-GC and ResNet-50-BGC, respectively. We then further trained ResNet-50-GC and ResNet-50-BGC for 30 epochs using the SGD optimizer with batch size 128, learning rate 0.01, Nesterov momentum 0.9, and weight decay 0.0001.
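The exact weight-transfer procedure is not spelled out above, but one plausible way to initialize the grouped part of (2.3) and (3.1) from the pre-trained standard convolution is to keep its block-diagonal input-channel slices, as in the sketch below; the helper name and weight layout are illustrative assumptions rather than the procedure actually used.

```python
import torch

def block_diagonal_init(std_weight: torch.Tensor, groups: int) -> torch.Tensor:
    """Extract the block-diagonal part of a standard Conv2d weight (m, n, kh, kw)
    in the (m, n // groups, kh, kw) layout expected by nn.Conv2d(..., groups=groups)."""
    m, n, kh, kw = std_weight.shape
    gm, gn = m // groups, n // groups
    out = torch.empty(m, gn, kh, kw)
    for k in range(groups):
        out[k * gm:(k + 1) * gm] = std_weight[k * gm:(k + 1) * gm,
                                              k * gn:(k + 1) * gn]
    return out
```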
The classification performance of ResNet-50-GC and ResNet-50-BGC is given in Table 3. Compared to GC, the classification performance of BGC improves by as little as 4% and as much as 15%. Therefore, even for real data, it can be confirmed that BGC definitely complements the performance of the neural network degraded by GC.
#### 5.2.2 Computational efficiency
To verify the computational efficiency of our BGC, we conducted experiments varying the number of groups \(N\) of a \(3\times 3\) convolution mapping 1024 channels to 1024 channels with input \(\mathbf{x}\in\mathbb{R}^{128\times 1024\times 7\times 7}\). Note that this convolution is used in ResNet-50. Table 4 shows the total memory usage, computation time, and classification errors for forward and backward propagations of convolutions equipped with GC and BGC. Note that the number of groups \(N\) varies from 2 to 16. The results are computed on a single GPU. First, looking at the total memory usage, BGC uses more memory than GC, but the gap narrows as \(N\) increases. The additional memory usage of BGC occurs when computing the mean \(\bar{\mathbf{x}}\) and the convolution \(\bar{\mathbf{W}}\) defined in (3.1). On the other hand, looking at computation time, BGC is slower than GC, but faster than the standard convolution when \(N>2\).
\begin{table}
\begin{tabular}{c|c|c} \hline \hline \(N\) & ResNet-50-GC & ResNet-50-BGC \\ \hline
1 & \multicolumn{2}{c}{23.87} \\ \hline
2 & 26.54 & 22.45 \\
4 & 36.70 & 25.51 \\
8 & 43.72 & 31.13 \\
16 & 50.09 & 35.45 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Classification errors (%) of ResNet-50-GC and ResNet-50-BGC on the ImageNet dataset after transfer learning. Each network is transfer learned from the pre-trained ResNet-50 provided in PyTorch. Note that the case of \(N=1\) is the classification error of ResNet-50.
As can be seen in (3.14), compared to GC, BGC requires more computation time because it performs convolution twice, but as \(N\) increases, the time decreases. Moreover, we can see that the cost of calculating the mean \(\bar{\mathbf{x}}\) is small enough that it has little effect on the total computation time. Overall, these results suggest that while BGC increases memory consumption and computation time compared to GC, it can improve performance with only a small increase in the computational cost when dealing with large numbers of channels in group convolution.
#### 5.2.3 Comparison with existing approaches
As benchmark group convolution variants, we choose GC (Krizhevsky et al., 2017), Shuffle (Zhang et al., 2018), FLGC (Wang et al., 2019), and TLGC (Lee et al., 2022), which were discussed in Section 4. All neural networks for the CIFAR-10 dataset in this section were trained using stochastic gradient descent with batch size 128, weight decay 0.0005, Nesterov momentum 0.9, 200 total epochs, and weights initialized as in (He et al., 2015). The initial learning rate was set to 0.1 and was reduced to one tenth of its value at the 60th, 120th, and 160th epochs. For ImageNet, the hyperparameter settings are the same as in the CIFAR case, except for a weight decay of 0.0001, 90 total epochs, and the learning rate reduced by a factor of 10 at the 30th and 60th epochs.
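For reference, the CIFAR-10 schedule described above corresponds roughly to the following PyTorch setup; `model`, `loader`, and `train_one_epoch` are placeholders for components defined elsewhere.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            nesterov=True, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 120, 160], gamma=0.1)
for epoch in range(200):
    train_one_epoch(model, loader, optimizer)   # assumed training helper
    scheduler.step()
```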
The classification errors of BGC, along with other benchmarks, applied to various CNNs on the CIFAR-10 dataset are presented in Table 5. BGC shows better overall results than other methods. In particular, the application of BGC significantly improves the classification performance of EfficientNet-b0.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{N} & \multicolumn{2}{c|}{Total memory usage (Mb)} & \multicolumn{2}{c|}{Computation time (ms)} & \multicolumn{2}{c}{Classification error (\%)} \\ \cline{2-7} & GC & BGC & GC & BGC & GC & BGC \\ \hline \hline
1 & \multicolumn{2}{c|}{60.50} & \multicolumn{2}{c|}{7.979} & \multicolumn{2}{c}{23.87} \\ \hline
2 & 42.50 & 75.75 & 9.056 & 10.467 & 26.54 & 22.45 \\
4 & 33.50 & 48.62 & 4.449 & 4.790 & 36.70 & 25.51 \\
8 & 29.00 & 37.06 & 2.120 & 2.648 & 43.72 & 31.13 \\
16 & 26.75 & 31.53 & 2.265 & 2.557 & 50.09 & 35.45 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Total memory usage, computation time, and classification errors (%) for forward and backward propagations of \(3\times 3\) convolution from 1024 channels to 1024 channels equipped with GC and BGC and the input \(\mathbf{x}\in\mathbb{R}^{128\times 1024\times 7\times 7}\). Note that the case of \(N=1\) is computed with the standard convolution.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline \(N\) & Method & EfficientNet-b0 & WideResNet-28-10 & ResNeXt-29 (\(8\times 64\)d) \\ \hline \hline
1 & SC & 8.42 & 3.88 & 5.32 \\ \hline & GC & 10.02 & 4.33 & 5.92 \\ & Shuffle & 8.72 & 4.00 & **4.52** \\
2 & FLGC & 8.63 & 4.62 & 4.68 \\ & TLGC & 8.93 & 4.12 & 5.16 \\ & BGC & **7.16** & **3.75** & 5.84 \\ \hline & GC & 11.25 & 5.44 & 6.76 \\ & Shuffle & 9.41 & 4.21 & **4.17** \\
4 & FLGC & 9.70 & 6.16 & 5.16 \\ & TLGC & 9.42 & 4.34 & 6.92 \\ & BGC & **7.50** & **4.02** & 5.84 \\ \hline \hline \(N\) & Method & EfficientNet-b0 & WideResNet-28-10 & ResNeXt-29 (\(8\times 64\)d) \\ \hline \hline
1 & SC & 8.42 & 3.88 & 5.32 \\ \hline & GC & 13.14 & 6.43 & 6.52 \\ & Shuffle & 10.65 & 4.73 & **4.81** \\
8 & FLGC & 10.49 & 9.81 & 5.25 \\ & TLGC & 10.68 & 4.79 & 6.56 \\ & BGC & **7.80** & **4.22** & 6.00 \\ \hline & GC & & 8.47 & 6.92 \\ & Shuffle & & 5.28 & **4.89** \\
16 & FLGC & - & 11.26 & 6.11 \\ & TLGC & & 4.95 & 6.08 \\ & BGC & & **4.76** & 5.60 \\ \hline \end{tabular}
\end{table}
Table 5: Classification errors (%) of EfficientNet-b0, WideResNet-28-10, and ResNeXt-29 (\(8\times 64\)d) on the CIFAR-10 dataset equipped with GC, Shuffle, fully learnable group convolution (FLGC), two-level group convolution (TLGC), and BGC. The case of \(N=1\) corresponds to the standard convolution (SC). Note that EfficientNet-b0 could not increase \(N\) up to 16 due to its structure, so it was implemented only up to 8.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline \(N\) & Method & EfficientNet-b0 & WideResNet-34-2 & MobileNetV2 \\ \hline \hline
1 & SC & 30.68 & 24.99 & 34.13 \\ \hline & GC & 35.18 & 26.92 & 39.50 \\ & Shuffle & 33.91 & 25.12 & 36.67 \\
2 & FLGC & 35.53 & 30.53 & 37.98 \\ & TLGC & 33.50 & 25.36 & **33.56** \\ & BGC & **31.82** & **24.19** & 36.10 \\ \hline & GC & 41.64 & 31.96 & 46.15 \\ & Shuffle & 38.02 & 26.84 & 38.14 \\
4 & FLGC & 40.71 & 38.30 & 44.37 \\ & TLGC & 36.38 & 27.17 & **36.88** \\ & BGC & **33.78** & **25.36** & 37.82 \\ \hline & GC & 48.42 & 36.70 & 51.97 \\ & Shuffle & 42.74 & 29.03 & 42.75 \\
8 & FLGC & 45.38 & 45.54 & 49.28 \\ & TLGC & 38.66 & 28.43 & **39.16** \\ & BGC & **38.13** & **27.16** & 42.65 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Classification errors (%) of EfficientNet-b0, WideResNet-34-2, and MobileNetV2 on the ImageNet dataset equipped with GC, Shuffle, fully learnable group convolution (FLGC), two-level group convolution (TLGC), and BGC. The case of \(N=1\) corresponds to the standard convolution (SC) equipped to CNNs.
In addition, the classification errors of BGC for WideResNet-28-10 are always less than 5% when \(N\) varies up to 16. Although Shuffle performed the best for ResNeXt-29, BGC still outperforms GC. To further validate the performance of BGC, we report the classification results on the ImageNet dataset in Table 6. For this dataset, we observe that BGC outperforms the other benchmark group convolution variants for all CNN architectures except MobileNetV2. Therefore, through several experiments, we conclude that BGC is an effective and theoretically guaranteed alternative to group convolution for various CNN architectures on large-scale datasets.
## 6 Conclusion
In this paper, we proposed a novel variant of group convolution called BGC. We constructed BGC by combining the plain group convolution structure with a balancing term defined as the intergroup mean to improve intergroup communication. We designed a new measure (3.2) to assess the approximability of group convolution variants and proved that the approximability of group convolution is bounded by \(K(1-1/N)^{2}\). Also, we showed that the bound on approximability of proposed BGC is \(K(1-1/N)^{3}\), which is an improved bound compared to the group convolution. Moreover, under the additional assumption about the parameters of the standard convolution, we showed that the bounds for the approximability of group convolution and BGC are \(K(1-1/N)/n\) and \(K(1-1/N)^{2}/n\), respectively. Numerical experiments with various CNNs such as WideResNet, MobileNetV2, ResNeXt, and EfficientNet have demonstrated the practical efficacy of BGC on large-scale neural networks and datasets.
We conclude this paper with a remark on BGC. A major drawback of the proposed BGC is that it requires full data communication among groups. This means that when computing the intergroup mean \(\bar{\mathbf{x}}\) that appears in the balancing term, we need the entire input \(\mathbf{x}\). This high volume of communication can be a bottleneck in parallel computation, which limits the performance of the model in distributed memory systems. We note that TLGC (Lee et al., 2022) has addressed this issue by minimizing the amount of communication required. Exploring how to improve BGC by reducing communication in a similar manner to (Lee et al., 2022), while maintaining strong performance in both theory and practice, is left as future work.
## Acknowledgments
This work was supported in part by the National Research Foundation (NRF) of Korea grant funded by the Korea government (MSIT) (No. RS-2023-00208914), and in part by Basic Science Research Program through NRF funded by the Ministry of Education (No. 2019R1A6A1A10073887).
|
2308.07540 | CALYPSO: LLMs as Dungeon Masters' Assistants | The role of a Dungeon Master, or DM, in the game Dungeons & Dragons is to
perform multiple tasks simultaneously. The DM must digest information about the
game setting and monsters, synthesize scenes to present to other players, and
respond to the players' interactions with the scene. Doing all of these tasks
while maintaining consistency within the narrative and story world is no small
feat of human cognition, making the task tiring and unapproachable to new
players. Large language models (LLMs) like GPT-3 and ChatGPT have shown
remarkable abilities to generate coherent natural language text. In this paper,
we conduct a formative evaluation with DMs to establish the use cases of LLMs
in D&D and tabletop gaming generally. We introduce CALYPSO, a system of
LLM-powered interfaces that support DMs with information and inspiration
specific to their own scenario. CALYPSO distills game context into bite-sized
prose and helps brainstorm ideas without distracting the DM from the game. When
given access to CALYPSO, DMs reported that it generated high-fidelity text
suitable for direct presentation to players, and low-fidelity ideas that the DM
could develop further while maintaining their creative agency. We see CALYPSO
as exemplifying a paradigm of AI-augmented tools that provide synchronous
creative assistance within established game worlds, and tabletop gaming more
broadly. | Andrew Zhu, Lara J. Martin, Andrew Head, Chris Callison-Burch | 2023-08-15T02:57:00Z | http://arxiv.org/abs/2308.07540v1 | # Calypso: LLMs as Dungeon Masters' Assistants
###### Abstract
The role of a Dungeon Master, or DM, in the game Dungeons & Dragons is to perform multiple tasks simultaneously. The DM must digest information about the game setting and monsters, synthesize scenes to present to other players, and respond to the players' interactions with the scene. Doing all of these tasks while maintaining consistency within the narrative and story world is no small feat of human cognition, making the task tiring and unapproachable to new players. Large language models (LLMs) like GPT-3 and ChatGPT have shown remarkable abilities to generate coherent natural language text. In this paper, we conduct a formative evaluation with DMs to establish the use cases of LLMs in D&D and tabletop gaming generally. We introduce Calypso, a system of LLM-powered interfaces that support DMs with information and inspiration specific to their own scenario. Calypso distills game context into bite-sized prose and helps brainstorm ideas without distracting the DM from the game. When given access to Calypso, DMs reported that it generated high-fidelity text suitable for direct presentation to players, and low-fidelity ideas that the DM could develop further while maintaining their creative agency. We see Calypso as exemplifying a paradigm of AI-augmented tools that provide synchronous creative assistance within established game worlds, and tabletop gaming more broadly.
## 1 Introduction
Dungeons & Dragons (D&D) [12] is a tabletop role-playing game (TTRPG)--a collaborative storytelling game where a group of players each create and play as their own character, exploring a world created by and challenges set by another player known as the Dungeon Master (DM). It is the DM's role to play the non-player characters and monsters, and to write the overarching plot of the game.
As a co-creative storytelling game, Dungeons & Dragons presents multiple unique challenges for AI systems aiming to interact with it intelligently. Over the course of a game, which is played out across multiple sessions spanning a long duration of time (often multiple months to years), the DM and the other players work together to produce a narrative grounded in commonsense reasoning and thematic consistency [1, 1, 1]. As the group plays for longer, the players define more of the world and ad-hoc rules for interacting with it [22]. In order to make in-character decisions, each individual player must maintain a personal understanding of the game world which they build from the game history [13] while keeping track of what information other players and their characters know [15].
By using an AI co-DM tool, human DMs can devote more mental energy to the cognitively demanding tasks of being a DM, such as improvising dialog of NPCs (non-player characters) or repairing the script of their planned campaign. Furthermore, an AI co-DM would drastically reduce the barrier of entry into DMing. Therefore, an AI co-DM tool would be invaluable to the D&D community.
An effective AI co-DM tool should not only produce coherent and compelling natural language output for a DM to effectively use for inspiration but also account for an immense amount of background context and requirements for internal consistency--both within D&D rules and within a given scenario or campaign. Large language models (LLMs), such as GPT-3 [1] and ChatGPT [1], have shown impressive abilities to generate coherent text.
Figure 1: After rolling a random encounter (red), DMs can use LLMs with Calypso to help generate an encounter scene and digest information about monsters. Calypso can present monster information concisely (green) and brainstorm conversationally (purple) to help build a compelling narrative to present to players (purple).
Some [15, 14] have even applied LLMs to the problem of D&D dialog and narrative by finetuning the models with structured information. Whereas these works used structured information scraped from user data to fine-tune a single model, we use existing data in D&D source books to improve generation using zero-shot prompting with multiple models.
In this paper, we present a study in which we created a LLM-augmented tool to assist DMs in playing D&D. We employed the following methods:
1. We interviewed DMs to understand how they digest game information and learn design motivations for AI assistants in the domain.
2. We created a gameplay setting that allowed us to study D&D gameplay on a larger scale than other recent works and invited 71 players to participate.
3. We created a system of three LLM-powered interfaces, which we call Calypso (**C**ollaborative **A**ssistant for **L**ore and **Y**ielding **P**lot **S**ynthesis **O**bjectives), that DMs and players could use as they played D&D, and studied the ways in which DMs and players incorporated them into their creative process over four months using established HCI methods.
We show that language models are capable "co-DMs" - not a player in the same way that the human players and DM are, but still a synchronous agent that acts as a guide for the human DM. We provide insights into how TTRPG players actually want to use these tools and present validated solutions that can extend beyond the D&D domain. Our study shows that a system designed with these motivations in mind saw consistent prolonged usage among a community of creative writers.
## 2 Background and Related Work
### Dungeons & Dragons in the Time of COVID
Traditionally, Dungeons & Dragons is played in person. Players use physical character sheets and monster stats referenced from books containing hundreds of prewritten "stat blocks" (as pictured in Figure 2a) [12]. DMs have the option to create a world of their own to play in (also sometimes called "homebrewing" a setting) or to set their game in a professionally written "module": a book containing a detailed outline of an adventure, including the setting, non-player characters, predesigned challenges and monster encounters, and lore. Previous works have explored methods of how to present information in these existing settings more clearly to DMs, such as through a computer-generated adventure flowchart [1] or recommender systems for relevant entities in a scene [12].
Since the beginning of the COVID-19 pandemic, there has been a shift towards playing D&D online [20]. Rather than using physical character sheets and reference books while playing in person, a large number of groups instead play virtually using tools like D&D Beyond [10] for virtual character sheets and reference books, Discord for messaging, virtual tabletops like Foundry [13] to simulate maps, and game state trackers like Avrae [15] to track character and monster stats. For inspiration and immersion, DMs also use online tools like dscrb2020, which provides prewritten text, Tabletop Audio [14], which provides soundboards and soundscapes, and random tables published in D&D source books [13], which provide a prewritten set of options, for specific scenarios (e.g. encountering a dragon).
### Large Language Models and D&D
Large language models (LLMs) are a recent development in the area of Natural Language Processing that have demonstrated emergent capabilities of understanding users' input and replying directly in the user's language (c.f. a machine language). A neural architecture based on the Transformer [21], they are capable of learning user-defined tasks with no additional training ("few-shot" or "in-context" learning) and referencing concepts defined in their large training corpus [15].
Although there has been some work looking at playing Dungeons & Dragons using earlier neural language models [16, 17, 18, 19], the introduction of LLMs has created a renewed interest in researching tabletop gaming. Callison-Burch et al. [19] frame D&D as a dialogue challenge and examine whether LLMs are capable of predicting a player's next utterance based on the conversational history, finding that local game-specific state context is important for grounded narrative generation. Newman and Liu [19] use LLMs to generate novel material (namely spells) that is consistent with the style and rules of the game. Zhou et al. [19] create a system that models the intents of D&D players using LLMs to inform a surrogate Theory of Mind. Zhu et al. [19] instrument a game state tracker to provide concrete actor stats and combat state, finding that LLMs are capable of producing interesting roleplay in combat scenarios and predicting the action a player will take. They highlight the importance of player and DM agency in LLM-generated texts, proposing that LLMs are better suited for assistant-style use cases. Kelly, Mateas, and Wardrip-Fruin [18] present a preliminary work using LLMs to identify player questions from live transcriptions of gameplay and suggest in-character responses.
Santiago et al. [19] have proposed multiple scenarios where LLMs and other generative AI models may be used to assist DMs, and discuss the ways AI may be used. In this workshop paper, they hypothesize the potential for AI to help inspire and take cognitive burden off the DM and provide brainstorming inspiration, but also weaknesses where AI may fall back onto overused tropes or underrepresent minority groups. In this work, we explore and expand upon many of these hypotheses through interviews with DMs. We create a system where DMs can fluently incorporate a LLM into their creative process and run a broad study on its use and failure cases.
LLMs have been explored as a writing assistant in other modalities as well, using various methods to assist in collaboratively building a narrative. These works have examined the use of conversational agents Coenen et al. (2021); Ippolito et al. (2022), writing in established settings Akoury et al. (2020), and other human-in-the-loop methods Chung et al. (2022); Roemmele and Gordon (2015); Samuel et al. (2016); Calderwood et al. (2020); Yang et al. (2022); Kreminski et al. (2022). There has also been work proposing LLMs for multimodal co-creative frameworks Lin et al. (2022). Overall, these techniques differ from D&D and other TTRPGs in that they primarily focus on a single writer/creator interacting with the system, rather than the multi-player experience in TTRPGs where all players directly interact with the story.
To our knowledge, our work is the first to examine concrete implementations of multiple unique interaction modalities in and outside of combat scenarios and the ways D&D players interact with language models on this scale.
## 3 Design Motivation
To better understand the friction DMs face in looking up reference material midgame, we conducted interviews and ran workshop sessions with seven DMs (referred to as D1-7 below) from a wide range of backgrounds before creating our system. Participants had between 1 and 39 years of experience playing D&D (various editions). In these sessions, we asked DMs how they approached improvising encounters, i.e., running random encounters that are generated on the fly (usually by rolling on an encounter table). In random encounters, DMs do not have time to research a monster's stats and lore beforehand or to think of backstories as to why the monster ended up in a particular setting. From these interviews, we identify several ways in which an AI system could be helpful to DMs:
Inspiration. As proposed by Santiago et al. (2023), we find that DMs desired the ability to use a language model to generate the first draft of an encounter, which they could then build on top of with their own ideas (D1-3). Different DMs envisioned giving the system varying amounts of control over the narrative. D3 expressed that they would want a system to write a scene that they would then vet and choose whether to present verbatim to their players, edit to their liking, or use as inspiration to overcome writer's block. D1 and D2 envisioned using the system's generation verbatim to present an initial scene to players, either while they read the complete text of the monster description (D2) or to reduce cognitive load (D1).
Strategic Copilot. One DM mentioned that managing both narrative gameplay and tracking monster stats and mechanics overwhelmed their short-term memory, and expressed interest in a system that could aid them in making strategic decisions and act as a high-level copilot. They expressed that the large amount of low-level management was a barrier to them running more D&D, and that they wanted to "feel more like an orchestra conductor over someone who's both putting down the train tracks AND fueling the train" (D4).
Another DM said that DMs often fail to take into account monsters' unique abilities and stats when running encounters, making simplifications to manage a large number of monsters. For example, a monster with very high intelligence and low dexterity attempting to move sneakily "should know not to move and make a bunch of noise" (D6).
Thematic Commonsense. We asked DMs what parts of monsters' game statistics they found to be the most important for their understanding of how to use a monster in their game, and found that multiple DMs used a concept of "baseline" monsters to gain a broad understanding of a monster when they first encounter it. The idea of the baseline monster was not to find a specific monster to compare another to, but to determine which parts of an individual monster's game statistics to focus on, and which parts to fill in using prior thematic commonsense.
In this context, we define _thematic commonsense_ as the DM's intuitive understanding of D&D as a game with medieval fantasy themes, and how they might draw inspiration from other works of fantasy literature. For example, a DM might intuitively understand that a dragon is a kind of winged reptile with a fire breath based on their consumption of other fantasy works, reason that all dragons are capable of flight, and focus on a particular dragon's unique abilities rather than flight speed (D7). Although D&D reference material does not include an explicit description of the dragon's fire breath, the DM might base their narration on depictions of fire breath from other authors.
We find this similar to the idea of a _genus-differentia_ definition Parry and Hacker (1991), in that DMs use their general background understanding of fantasy settings to define their personal _genus_ and supplement prior knowledge by skimming monster reference books for _differentia_. This suggests that co-DM systems should focus on helping DMs extract these _differentia_, and that they also require the same extensive background knowledge as the user. For the D&D domain, we believe that LLMs such as GPT-3 Brown et al. (2020) have included sufficient information on the game and the game books themselves in their training corpus so as to establish such a background knowledge. However, we are interested in methods for establishing this thematic commonsense knowledge for works not included in models' training data in future work.
Simple Language. Multiple DMs emphasized that they would like a co-DM system to present monster information in plain language, rather than the elaborate prose found in game reference manuals (D3-6). As works of fantasy literature, D&D publications (including reference manuals) often use heavy figurative language and obscure words. For example, the first paragraph of an owlbear's description reads:
An owlbear's screech echoes through dark valleys and benighted forests, piercing the quiet night to announce the death of its prey. Feathers cover the thick, shaggy coat of its bearlike body, and the limpid pupils of its great round eyes stare furiously from its owlish head (Crawford et al. (2018), pg. 147).
This style of description continues for seven additional paragraphs. On average, across all D&D monsters published on D&D Beyond, a monster's description and list of abilities contains 374 words (min: 0, max: 2,307). DMs often use multiple monsters together in the same encounter, compounding the amount of information they must hold in their mind.
Monster descriptions often include descriptions of the monster, its abilities, and lore. Some DMs' preferred method of referencing monster lore while running the game was to skim the full monster entry, and the complex and long prose often led to DMs feeling overwhelmed (D4, D5). Other DMs wanted a short and salient mechanical (i.e. focusing on monster's game abilities and actions) description, rather than a narrative (lore and history-focused) one (D3, D6).
Overall, the complexity of monster descriptions led DMs to forget parts of monsters' lore or abilities during gameplay (D5) or use overly broad simplifications that did not capture an individual monster's uniqueness (D6). While offline resources exist to help DMs run monsters (e.g. Amman (2019)), they cannot account for the environment or generate a unique scenario for each encounter with the same monster. We believe that LLMs' capability to summarize and generate unique material is particularly applicable to these challenges.
## 4 Implementation
In this section, we describe the three interfaces we developed to provide DMs with the sorts of support they desired. These interfaces were designed with "in the wild" deployment in mind:
1. Encounter Understanding: a zero-shot method to generate a concise setup of an encounter, using GPT-3.
2. Focused Brainstorming: a conversational method for DMs to ask additional questions about an encounter or refine an encounter summary, using ChatGPT.
3. Open-Domain Chat Baseline: a conversational interface without the focus of an encounter, using ChatGPT.
Our implementation differs from other efforts to develop AI-powered co-creative agents in two ways. First, compared to models where the AI acts as the writer, AI-generated content is not necessarily directly exposed to the audience. Calypso only presents ideas to a human DM, who has final say over what is presented to the players. Second, compared to co-writing assistants where the writer has plentiful time to iterate, the time between idea and presentation is very short. Since the DM uses Calypso in the midst of running a real game, Calypso should be frictionless to adopt and should not slow down the game.
### Encounter Understanding
The first interface we provided to DMs was a button to use a large language model to distill down game statistics and lore available in published monster stat blocks. To accomplish this, we prompted GPT-3 (Brown et al., 2020) (specifically, the text-davinci-003 model) with the text of the chosen encounter, the description of the setting the encounter was taking place in, and the game statistics and lore of each monster involved in the encounter. The full prompts are available in Appendix A.
We began by presenting the LLM with the task to summarize monsters' abilities and lore and the environment. We collected feedback from DMs after generating the extracted information by allowing them to select a positive or negative feedback button, and optionally leave comments in an in-app modal. This interaction is illustrated in Figure 2.
**Summarization.** At first, we prompted GPT-3 to "summarize the following D&D setting and monsters for a DM's notes without mentioning game stats," then pasted verbatim the text description of the setting and monster information. For decoding, we used a temperature of 0.9, top-p of 0.95, and frequency and presence penalties of 1. Based on feedback from DMs (discussed in Section 6.1), we later changed to a more abstract "understanding" task described below.
**Abstractive Understanding.** In the understanding task, we prompted GPT-3 with the more abstract task to help the DM "understand" the encounter, along with explicit instructions to focus on the unique aspects of each creature, use information from mythology and common sense, and to mention how multiple creatures interact with each other. After these instructions, we included the same information as the _Summarization_ task above. Finally, if a monster had no written description, we included instructions in place of the monster's description telling Calypso to provide the DM information from mythology and common sense. For decoding, we used a temperature of 0.8, top-p of 0.95, and a frequency penalty of 0.5.
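To make the decoding setup concrete, the following is a minimal sketch of what one such _Encounter Understanding_ call could look like with the pre-1.0 `openai` Python package. The helper name, token cap, and API-key handling are our illustrative assumptions; only the model name and decoding parameters are taken from the description above.

```python
import os
import openai  # assumes the legacy (pre-1.0) openai package

openai.api_key = os.environ["OPENAI_API_KEY"]

def understand_encounter(prompt_text: str) -> str:
    """One-shot 'understanding' completion using the decoding settings quoted above."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt_text,      # setting plus monster stats/lore, per Appendix A
        temperature=0.8,
        top_p=0.95,
        frequency_penalty=0.5,
        max_tokens=512,          # illustrative cap; not stated in the paper
    )
    return response["choices"][0]["text"].strip()
```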
### Focused Brainstorming
To handle cases where a single round of information extraction was not sufficient or a DM had additional focused questions or ideas they wanted assistance elaborating, we also provided an interface to open a private thread for focused brainstorming. Available at any time after an encounter was randomly chosen, we provided the same encounter information as in the _Encounter Understanding_ interface as an initial prompt to ChatGPT (i.e., gpt-3.5-turbo) (OpenAI, 2022). If the DM had used the _Encounter Understanding_ interface to generate an information block, we also provided it as context (Figure 4). The full prompts are available in Appendix A. For decoding, we used a temperature of 1, top-p of 0.95, and a frequency penalty of 0.3.
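As a rough illustration of how a brainstorming thread carries its context, here is a sketch using the pre-1.0 `openai` chat API. The function names and message bookkeeping are our assumptions for illustration; the model, the prompt roles, and the decoding parameters follow the description above and Appendix A.

```python
from typing import Optional
import openai  # assumes the legacy (pre-1.0) openai package

def start_brainstorm(encounter_context: str, prior_summary: Optional[str]):
    """Build the initial message list for a focused brainstorming thread."""
    messages = [
        {"role": "system", "content": "You are a creative D&D player and DM named Calypso. "
                                      "Avoid mentioning game stats."},
        {"role": "user", "content": f"I'm running this D&D encounter: {encounter_context}"},
    ]
    if prior_summary:  # include the Encounter Understanding output as extra context
        messages.append({"role": "user", "content": f"Here's what I have so far: {prior_summary}"})
    return messages

def reply(messages, dm_question: str) -> str:
    """Append the DM's question and return Calypso's next brainstorming reply."""
    messages.append({"role": "user", "content": dm_question})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=1,
        top_p=0.95,
        frequency_penalty=0.3,
    )
    answer = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    return answer
```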
### Open-Domain Chat Baseline
Finally, we made a baseline open-domain chat interface available to all players, without the focus of an encounter. As this interface was available at any time and open-ended, it helped provide a baseline for how DMs would use AI chatbots generally. To access the interface, users were able to run a bot command, which would start a new thread. We prompted ChatGPT to take on the persona of a fantasy creature knowledgeable about D&D, and generated replies to every message sent in a thread opened in this manner. For decoding, we used a temperature of 1, top-p of 0.95, and a frequency penalty of 0.3. Unlike the private threads created by the _Focused Brainstorming_ interface, open-domain conversation threads were public and allowed other users to join.
## 5 Experimental Setup
By deploying Calypso in the wild, we sought to learn how real DMs would adopt the new technology (if at all) and the emergent use cases that would arise.
We set up a special "play-by-post living world" game, which we describe below, and invited 71 players and DMs (referred to as P1-71) to participate by posting on D&D recruitment forums. While preserving the core foundations of D&D, our setup allowed us to conduct a large-scale study with a greater number of play sessions than studying individual games of D&D.
In this section, we describe our methodology for setting up this large-scale D&D game.
### D&D Game Setup
All gameplay occurred on our Discord server. We used Avrae, a D&D Discord bot with over five million users, to facilitate gameplay. Avrae is commonly used to run D&D games in this fashion, so the large-scale game was familiar to players and DMs [13]. All participants were asked to review the server's research statement and to provide their informed consent before participating. Participants were compensated with free access to all published D&D game materials (worth $981.35). We explain the core differences between a traditional game of D&D and our setup here:
**Play-by-Post.** While most commonly D&D is played in person or using a virtual teleconference, a large number of players also play in a text-only mode known as "play-by-post". In play-by-post games, rather than acting out characters using voices and body movements, players narrate their characters' actions and speech in a textual format. This text-based modality allowed us to monitor a large number of play sessions and allowed players to interface with language models without having to add an additional step to transcribe verbal play into text.
**Living World.** Our setup takes aspects from playing both prewritten modules and homebrew worlds. Traditionally, groups are comprised of 1 DM and 3-6 players playing in different worlds created by the DM, who play in regularly scheduled 3-4 hour play sessions (most commonly, once a week). To allow for a larger scale study, in our setting, all 71 players exist in the same world, which we created. To emulate traditional play sessions, players form groups of 3-6 (on average) to partake in self-contained quests in the setting, always returning to a central hub after each quest. Within the hub, players are free to interact with each other, allowing room for storytelling and character development through roleplay without a DM. Outside the hub, we created a diverse set of environments that players could explore, each with a short description and image.
### Mode of Play
In the Calypso study, the primary mode of play was a random encounter system: players explore the established world and the DM is given a set of monsters to use (randomly selected from a table of encounters). The DM must then create an encounter involving those monsters for their players to play in. By giving DMs access to the interfaces described above, we study how language models can be used with focused local knowledge to help a DM understand complex monster lore, interactions, and how to incorporate new encounters into an existing setting. Figure 1 shows the flow of how an encounter is randomly chosen and how a DM might use Calypso to help build a compelling narrative.
## 6 Results and Discussion
In this section, we compile the ways that DMs adopted Calypso's three interfaces into their games. We highlight the successes and failures of LLMs, and describe its emergent use cases. We refer to the DMs who used Calypso as P1-P71 in this section.

Figure 2: Using the _Encounter Understanding_ interface to generate a distilled summary of an encounter.
### Encounter Understanding
**Summarization.** DMs interacted with the summarization model in 37 encounters, indicating that the summary helped them understand the monsters and setting in 13 encounters and did not help in 7 encounters.
We found that DMs appreciated the summary for distilling long information from stat blocks (P10) or highlighting an aspect of the monster to focus on (P8). However, when prompted with the task to summarize provided information, GPT-3 would focus too much on numeric game stats (despite the prompt stating not to) and summarize the environment and monsters separately, often focusing too much on the environment (P3). For example, for an encounter involving 12 Blink Dogs, the model's output simply paraphrased the information written in the setting and stat block rather than suggesting more interesting interactions:
**Calypso**: A warm and calm autumnal forest with a river running through it, the players can travel through this area towards the distant mountain range. The creatures they may encounter here are blink dogs; they have an armor class of 13, 22 Hit Points, a speed of 40 ft., as well as skills in Perception and Stealth. They also possess limited intelligence and understand Sylvan though cannot speak it.
Although the information presented by Calypso is factually correct, DMs argued that simply rephrasing the information in the monster's reference material "doesn't really help in running an encounter" (P8) and generally wanted "more information about [the monsters] and why they might interact with [the players]" (P11). DMs wanted Calypso to generate details that were entirely omitted from the reference material, using the written reference as only a basis to gain an understanding of the monster.
Another common case where Calypso did not present a helpful summary was when a monster did not have a published prose description, only numeric game stats. Most commonly, such creatures were an animal found in real life (e.g. a brown bear; P3) or relied entirely on a DM's thematic commonsense to run (e.g. a snow golem; P3). In these cases, the model would only use minimal external knowledge (e.g. noting that giant boars usually had "thick, protective skin") rather than using thematic commonsense to make up a more interesting narrative (discussed in Section 3). We found that in order for the LLM to use external knowledge about monsters rather than only what was written in game stats, we had to prompt it with explicit instructions to do so. With these findings in mind, we remodeled the summarization task to a more abstract "understanding" task, in which we provided the model explicit instructions to use thematic commonsense.
**Abstractive Understanding.** Overall, DMs much preferred this more abstract task: over interactions in 114 encounters, DMs indicated that the summary helped them understand the monsters and setting in 55 encounters and did not help in 2 encounters. Figure 3 shows the breakdown of feedback for each interface.
Primarily, DMs using this model were satisfied with its ability to describe monsters based on real-life animals (P8), creature behavior (P3), interactions with players (P3), how to roleplay creatures (P12), and general performance (P3, P8, P11-13). The most common undesired behavior DMs noticed was the model hallucinating, or making up, facts about the monsters, such as giving giant spiders telepathy (P15), an insect monster a poison ability (P8), or a frost salamander wings (P14). Given the same example encounter as above, the model's output includes much more useful information about appearance and monster behavior:
**Calypso**: The blink dogs are canine creatures, about the size of a large dog. Their fur is a mottled grey and white colour, and their eyes flash yellow when they use their supernatural ability to blink in and out of existence. They are surprisingly fast despite their size, and can easily keep up with most horses. In terms of behavior, the blink dogs are curious but shy creatures who prefer to avoid contact with other creatures unless absolutely necessary. If threatened, they will use their blinking power to confuse and disorientate their opponents before attacking with a pack mentality. The blink dogs of this forest seem particularly adept at working together as a unit and can prove a formidable foe if provoked.
As the _Summarization_ and _Abstractive Understanding_ tasks differ only in prompting methods, we demonstrate how sensitive prompting approaches can be. It is possible that GPT-3 is overfit on the word "summarize", and even subtle changes in prompt design encourage original and domain-specific generation. With these alterations, we found that LLMs are capable of synthesizing creative ideas about D&D monsters using thematic commonsense contained in its training priors.
Figure 3: DMs found the _Abstractive Understanding_ method of distilling monster information more consistently helpful than the _Summarization_ method.

We also found that if we repeated a certain phrase multiple times in the prompt, e.g. "you may use information from mythology and common sense" (which could occur if an encounter involved multiple monsters with no published lore), the model would often include the phrase "mythology and common sense" verbatim in its output. This is likely a case of degenerative output (Holtzman et al., 2020). To prevent this, each time a set phrase would have been templated in, we instead randomly chose 2-4 words from the set ['folklore', "common sense", "mythology", "culture"] and inserted them in a random order (e.g. "use information from common sense, mythology, and folklore").
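A minimal sketch of this phrase randomisation is shown below; the function name is ours, but the word pool and the 2-4 sampling follow the description above.

```python
import random

KNOWLEDGE_SOURCES = ["folklore", "common sense", "mythology", "culture"]

def knowledge_phrase() -> str:
    # Sample 2-4 of the source words in a random order so the templated
    # phrase is worded differently on every insertion into the prompt.
    chosen = random.sample(KNOWLEDGE_SOURCES, k=random.randint(2, 4))
    return "use information from " + ", ".join(chosen)
```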
**Effect of hallucinations.** We find that not all hallucinations are undesired. In many cases, the model suggests monster behaviors or appearances that are not explicitly written out in monster descriptions, such as the appearance of the blink dogs' fur in the example above. More drastic deviations, such as the model suggesting giving a creature wings, were however undesired.
DMs often take creative liberty to synthesize sensical information that isn't included in the source material. As shown above, they expect their tools to do the same when necessary - while the _Summarization_ interface was more conservative in ensuring it did not hallucinate any details, the _Abstractive Understanding_ interface was more well-received even with minor hallucinations. Since the DM acts as a curator of the model's output, the DM can choose which of the generations to accept.
### Focused Brainstorming
In total, DMs used the focused brainstorming model in 71 encounters, comprising a total of 162 rounds of conversation. DMs used the brainstorming model in a number of diverse ways, which we qualitatively coded and tabulate in Table 1. Here, we discuss these use cases and some failure cases.
**General and Specific Descriptions.** The most common way DMs used the interface was to ask it for a high level description of a given encounter and specific descriptions of points in the encounter. Since our prompt included information on the setting and involved monsters, the model was able to reference the information in its description. Additionally, the conversational nature of the language model added to its context, so DMs could reference earlier ideas without having to repeat them. This allowed DMs to ask Calypso to simply "describe this scene" or "describe X" without having to specify additional details (P3, P8-10, P12, P16-20).
After presenting an idea to their players and seeing what part of the encounter players interacted with, the DM was also able to ask follow-up questions to describe in detail specific elements the players interacted with. For example, when running an encounter involving a ship's figurehead that was washed ashore, P3 first asked for a description of the figurehead. Then, when the players investigated it further, the DM followed up by asking for "a description about its construction, specifically how it was carved, and perhaps what D&D race crafted it." This allowed DMs to elaborate on specific parts of an encounter when it became relevant, rather than presenting a large amount of information up front.
However, DMs found that the model struggled sometimes to describe combat, and suggested that including more information about the combat state (similar to Zhu et al. (2023)) or map placement information could help generate more specific descriptions (P3, P9). Some DMs used these descriptions verbatim (P3, P8, P17), while others picked out particularly vivid phrases to use in a description of their own (P3, P8, P10, P12, P20). Others disagreed with the model's description and wrote their own instead (P13, P16, P18, P19).
**Strategy.** Another common use case for DMs was to ask the model for monsters' "motives, tactics, and who they might prioritize [in a fight]" (P8-9, P12-13, P19, P23). As discussed in section 3 (_Strategic Copilot_), coming up with and sticking to strategies for each monster can be overwhelming, and often DMs use simplifications to manage their mental load. This use case allowed DMs to create more engaging fights with clearer paths to resolutions by describing a creature's motive and specific tactics the creature would use. For example, when a DM asked how a pack of ten wolves might approach a camping party, the model suggested to have the wolves "circle around the camp, hiding behind trees and bushes [...] and wait until a member of the party is alone and vulnerable before striking, hoping to separate and weaken the group" (P8). Similar to the interactions with descriptions, these DMs did not always use the strategy presented by the model; sometimes they picked and chose interesting suggestions, while other times they chose a different approach.
**Making Decisions.** Some DMs used the model to get an opinion on two options they had already written or thought of (P3, P8-9, P12-14, P18, P23). For example, when players encountered a ravine whose bottom was enshrouded in mist, one DM asked whether the mist should hide a very long or short drop. The model would sometimes simply give feedback on both of the options without choosing one ("_Both options have their merits depending on the tone and style of your game..._"; P3) and sometimes give a more straightforward answer ("...would that revenant have a vengeance towards the party member?" _/ "Yes, absolutely..._"; P12). DMs did not ask the model to come to a conclusive decision, suggesting that the model providing its "opinion" helped inspire the DM, without relying on it to run the encounter.

Figure 4: Using the _Focused Brainstorming_ interface to ask specific questions about an encounter. Calypso suggests reasons why the players might encounter the monsters and how they might act.
**List of Ideas.** In this use case, the DM simply asks the model for a list of ideas; for example, a list of magic items sea-dwelling humanoids might have (P10). We believe that the reasoning for this use case is the same reason that makes random tables (as discussed in Section 2.1) a popular method of inspiration - however, compared to prewritten random tables, LLMs have the powerful capability of generating unique "random table" entries customized for specific contexts.
**Failure Cases.** The most common failure case was when DMs tried to invoke other tools (such as a dice rolling or spell search bot) available in the brainstorming chat. As the model responded to every message in the thread, it would also respond to the other tool's invocation and reply with a generic error message or try to infer the other tool's output (e.g. "'check stealth" _/ "Abominable Yeti stealth check: 18"_, hallucinating a result while ignoring the output of an actual dice roller). In some cases, the DM attempted to upload an image, which the model was unable to view. Finally, as discussed in Section 6.1, the model sometimes hallucinated facts about creatures and rules. We believe multimodality (allowing the model to view images) and allowing the model to use tools (e.g. to retrieve rules text, spell descriptions, or search monsters) to be an interesting direction to explore in future work.
We also find that certain artifacts of the model's training process influences its output. For example, the model would sometimes refuse to suggest (fantasy) races, likely due to efforts to reduce the potential for real-world racial bias. In another case, the model insists that it is incapable of playing D&D, likely due to efforts to prevent the model from making claims of abilities it does not possess. Although generally infrequent, these artifacts suggest that domain-specific fine-tuning may improve models' performance.
### Open-Domain Chat Baseline
Participants chatted with Calypso in 51 unique threads, comprising a total of 2,295 rounds of conversation. Compared to conversations with the AI in the _Focused Brainstorming_ interface, conversations lasted much longer (averaging 45.0 rounds per interaction vs. the brainstorming interface's 2.3). Without the time pressure of an active game that the DM is responsible for, participants spent more time playing with the model and refining its responses to generate high-level quest ideas (P3, P8, P12, P16), character and location names (P3, P9, P19, P22), role-play specific characters from other games (P3, P9, P12, P16), and write fanfiction about events happening between their characters in the game (P3, P8, P9, P16, P21), among other non-D&D uses.
However, during a game of D&D, DMs did not have the time luxury to iterate on responses for hours. Without Calypso's management of the game, DMs would have to spend many turns of conversation copying and pasting information to provide it to the LLM, taking attention away from the game and making the baseline implementation unsuitable for real-world adoption.
We believe this highlights the difference between synchronous and asynchronous systems and the importance of removing friction from AI-augmented user interfaces as discussed in Section 4 - while the human user may have the capability to supply a LLM with additional information, the time and computational burden should be on the synchronous system rather than the user.
Table 1: A list of common ways DMs used the _Focused Brainstorming_ interface.

| Use Case | Description |
| --- | --- |
| General Descriptions | Asking the model to generate a high-level description of a scene and encounter. |
| Specific Descriptions | Asking specific questions about parts of the encounter, often in response to player actions. |
| Strategy | Using the model to understand monster motives and get suggestions for their tactics. |
| Making Decisions | Using the model to decide how the DM should run a given encounter. |
| List of Ideas | Generating a list of multiple ideas to build off of individually. |

## 7 Conclusions

In this paper, we present Calypso, a system of three LLM-powered interfaces that DMs could use to assist them in preparing and running focused monster encounters in an established setting, and a large-scale study of how 71 D&D players incorporated Calypso into their gameplay. Through interviews with DMs, we established common themes and desires for AI-augmented DM tools, and used these motivations and iterative design to guide our development. In conclusion, we found that:
1. LLMs are capable brainstorming partners. DMs used Calypso to generate both low-fidelity ideas that they could grow using their own creative expression, and guided it to generate high-fidelity descriptions they could present to other players with only minor edits.
2. LLMs present thematic commonsense when prompted to. Having been trained on a large corpus containing D&D texts and discussions, works of fantasy literature, and descriptions of real-world creatures, Calypso was able to fill in gaps in the D&D literature by probing into thematically relevant common sense knowledge. However, we found that to access this trove of information, the LLM had to be explicitly prompted to do so.
3. LLMs assist, rather than replace, human DMs. Calypso was designed to aid a human DM while maintaining their creative agency. We find that human DMs use AI co-DMs to understand complex rules text, brainstorm interactions between non-player characters or monsters, and present DMs with suggestions that the DM can weave into a story to present to players without taking away from the pace of the game. Human creativity is an integral part of storytelling games like D&D, and it is important for future AI tools to always maintain the human's creative agency.
## Appendix A LLM Prompts
In this section, we provide the prompts used in the Calypso system. Generally, we make use of Markdown-style headers to divide sections of the prompt. For chat-based models, we annotate each message with the corresponding role (system, assistant, or user, as exposed in the ChatGPT API).
### Encounter Understanding
#### Summarization
Summarize the following D&D setting and monsters for a Dungeon Master's notes without mentioning game stats.
Setting
================
<Setting description inserted here.>

Creatures
================
<Name>
<Statistics and lore inserted here. If the encounter involves multiple creatures, repeat for each creature.>

Summary
#### Abstractive Understanding
Your name is Calypso, and your job is to help the Dungeon Master with an encounter. Your task is to help the DM understand the setting and creatures as a group, focusing mainly on appearance and how they act. Especially focus on what makes each creature stand out. Avoid mentioning gamestats. You may use information from common sense, mythology, and culture. If there are multiple creatures, conclude by mentioning how they interact.
#### Encounter: <Encounter inserted here.>
The rest of the prompt follows as in the Summarization prompt above, beginning with the setting. If a monster did not have published lore, we inserted the string "Calypso, please provide the DM with information about the (monster name) using information from (folklore, common sense, mythology, and culture)" (see section 6.1) in place of lore.
### Focused Brainstorming
SYSTEM: You are a creative D&D player and DM named Calypso. Avoid mentioning game stats. You may use information from common sense, mythology, and culture.
USER: I'm running this D&D encounter: < Encounter inserted here.>
<Setting and creatures inserted here, in the same format as Abstractive Understanding.>
Your job is to help brainstorm some ideas for the encounter.

If the DM used the Encounter Understanding interface before starting a brainstorming thread, we add an additional message to the prompt:
USER: Here's what I have so far: <Summary generated by Encounter Understanding inserted here.>
This allows the DM to reference ideas proposed by Calypso in its summary without having to repeat the entire message, aiding continuity.
## Acknowledgments
Thank you to the Northern Lights Province Discord server for playing with us and being so enthusiastic about AI and D&D! Thank you to the NLP server staff - friends and players who helped us write rules, settings, game mechanics, and manage so many players: Ryan Crowley, Nicki Dulmage-Bekker, @ephesia, @lyra.kat, and Joseph Keen. Finally, thank you to D&D Beyond for providing us with access to monster information and game materials.
This material is based upon work supported by the National Science Foundation under Grant #2030859 to the Computing Research Association for the CIFellows Project. |
2305.08857 | COWPEA (Candidates Optimally Weighted in Proportional Election using
Approval voting) | This paper describes a new method of proportional representation that uses
approval voting, known as COWPEA (Candidates Optimally Weighted in Proportional
Election using Approval voting). COWPEA optimally elects an unlimited number of
candidates with potentially different weights to a body, rather than giving a
fixed number equal weight. A version that elects a fixed number of candidates
with equal weight does exist, but it is non-deterministic, and is known as
COWPEA Lottery. This method passes the "Holy Grail" criteria of monotonicity,
Independence of Irrelevant Ballots, and Independence of Universally Approved
Candidates. There are also ways to convert COWPEA and COWPEA Lottery to score
or graded voting methods. | Toby Pereira | 2023-04-03T20:13:09Z | http://arxiv.org/abs/2305.08857v4 | ## COWPEA (Candidates Optimally Weighted in Proportional Election using Approval voting)
###### Abstract
This paper describes a new method of proportional representation that uses approval voting, known as COWPEA (Candidates Optimally Weighted in Proportional Election using Approval voting). COWPEA optimally elects an unlimited number of candidates with potentially different weights to a body, rather than giving a fixed number equal weight. A version that elects a fixed a number of candidates with equal weight does exist, but it is non-deterministic, and is known as COWPEA Lottery. This is the only proportional method known to pass monotonicity, Independence of Irrelevant Ballots, and Independence of Universally Approved Candidates. There are also ways to convert COWPEA and COWPEA Lottery to score or graded voting methods.
## 1 Introduction
The search for the "Holy Grail" of a method of proportional representation that uses approval voting has been long and arduous. From the likes of Thorvald N. Thiele and Lars Edvard Phragmen in the late 19th century to the modern day revival, no-one has succeeded in finding a method that satisfies all the desirable criteria. This paper discusses some of the better known methods and their criterion compliance, and introduces COWPEA and COWPEA Lottery: new methods with superior criterion compliance.
## 2 Criteria
There are various criteria one might demand for a Holy Grail of approval-based proportional representation. However, there are four main criteria that will be considered in this paper, which have not been unified in a single method. They are:
**Perfect Representation In the Limit (PRIL):** As the number of elected candidates increases, then for \(v\) voters, in the limit each voter should be able to be uniquely assigned to \(\nicefrac{1}{v}\) of the representation, approved by them, as long as it is possible from the ballot profile. Such a result is known as Perfect Representation. That this is achieved in the limit is the proportionality criterion being used in this paper.
PRIL is related to the stronger criterion of Perfect Representation (Sanchez-Fernandez et al., 2016): for \(v\) voters, if there exists a set of candidates that would allow each voter to be able to be uniquely assigned to \(\nicefrac{1}{v}\) of the representation, approved by them, then such a set must be the winning set. In other words, the Perfect Representation criterion requires a result that gives Perfect Representation whenever possible, not just in the limit as the number of candidates increases.
**Monotonicity**: Adding an approval for a candidate on a single ballot while leaving everything else unchanged cannot cause this candidate to go from being elected to being unelected. Conversely, removing an approval from an unelected candidate cannot cause that candidate to become elected. All methods considered in this paper pass monotonicity. However, there is also strong monotonicity, which is the idea that an approval should count in a candidate's favour rather than merely not count against, and also not merely in a tie-break manner. A suitable working definition that covers all the methods discussed in this paper is as follows:
For any method that elects a candidate set based on a score given to each set, adding an approval for a candidate on an existing ballot while leaving everything else unchanged must improve the score of any set containing that candidate, with an exception for the case where the score is already maximally bad (this is to account for methods that give a maximally bad score to any set that contains a candidate with zero approvals). For a lottery method or a method that elects candidates with different weights, each extra approval must increase the probability of that candidate being elected or their weight in the elected body, except where the probability or weight is already 0 or 1.
**Independence of Irrelevant Ballots (IIB)**: A ballot approving all or none of the candidates should not make a difference to the outcome. More generally, adding a ballot approving both or neither of candidates \(A\) and \(B\) should not cause \(A\) to become elected in place of \(B\) or vice versa. There are two main ways for a method to fail IIB. One is with the addition of ballots that approve some or all of the candidates in contention for seats (full ballots), and the other is with the addition of ballots that approve none of these candidates (empty ballots). Methods can pass in one way but fail in the other.
**Independence of Universally Approved Candidates (IUAC)**: The addition and election of one or more universally approved candidates along with the addition of the right number of extra seats for them to fill should not make any difference to which candidates get elected to the other seats.
There are also other criteria that one might expect a method to pass, such as Independence of Irrelevant Alternatives and the multi-winner version of Independence of Clones, but these two are a low bar for an approval-based method of proportional representation to pass. All methods discussed pass these two criteria.
**Independence of Irrelevant Alternatives (IIA)**: If a new candidate is added to the election and this candidate is not elected, and the voters' preferences regarding the other candidates remain unchanged, then the result should not change.
**Independence of Clones**: A set of candidates approved on exactly the same ballots as each other is known as a "clone set". If there are \(x\) members of the clone set, then for a method that elects a fixed number of candidates with equal weight, adding an (\(x\)+1)th clone must not change the probability of exactly \(y\) members of the clone set being elected for all \(y\) from 1 to \(x\)-1, and must not change the probability of at least \(x\) members being elected. Similarly, reducing the number of clones from \(x\) to \(x\)-1 must not change the probability of exactly \(y\) members of the clone set being elected for all \(y\) from 1 to \(x\)-2, and must not change the probability of at least \(x\)-1 members being elected. For a method that elects candidates with variable weights, the overall share of the clone set in the elected body must remain constant for any non-zero members of the clone set.
In a single-winner method, the situation is much simpler: adding a clone would mean that the winner must not switch between a candidate outside the clone set and one inside, in either direction. However, with proportional methods, it is possible that a particular party or faction does not have
enough candidates standing to achieve their proportional share. So adding a clone candidate could increase their share up to a certain limit in this case.
IIA and Independence of Clones tend to affect voting methods that used ranked ballots, and are less of a problem for cardinal methods that can consider a possible winning set without reference to candidates outside that set. As said, all the methods considered in this paper pass these criteria.
There are two criteria that will be discussed separately in section 6 because, as will be shown, they are of debatable utility. These are multi-winner Pareto efficiency and the consistency criterion.
**Multi-winner Pareto efficiency**: For the winning set of candidates, there should not be another set for which every voter has approved at least as many elected candidates as they have in the winning set, and at least one voter has approved more.
**Consistency**: If two separate elections give the same result (or probability profile in a lottery method), then combining the ballots for a single election should give the same result (or probability profile).
This multi-winner version of the Pareto efficiency criterion is an extension of candidate Pareto efficiency. Candidate \(A\) Pareto dominates candidate \(B\) if \(A\) is approved on all the ballots that \(B\) is and at least one other ballot. It works on the assumption that a voter's satisfaction with a result is entirely dependent on the number of elected candidates that they have approved. This multi-winner version was also discussed by Lackner and Skowron (2020). As with monotonicity, it can be weak or strong. A method can be seen as weakly passing Pareto efficiency if it is possible for a Pareto dominated set to be tied with the set that dominates it.
It is worth discussing why we are using the PRIL criterion rather than Perfect Representation itself, or indeed one of the other criteria that have been defined. To deal with Perfect Representation first, it is incompatible with strong monotonicity for methods that elect a fixed number of candidates. Strong monotonicity is deemed the more important criterion, as should become clear from the examples given in this paper where there is a conflict.
There is also a proliferation of other proportionality criteria (see Lackner and Skowron, 2023) that have been defined in an attempt to capture the essence of proportionality. However, most of these require a method to satisfy lower quota. This would rule out the Sainte-Lague party list method and equivalently the Webster apportionment method, and by extension any approval method that reduces to them under party-style voting. These are considered by many to be the most mathematically proportional methods (see e.g. Balinski and Peyton Young, 2010; Benoit, 2000), so to use criteria that disqualify them would be to throw the baby out with the bathwater.
The common thread among proportionality criteria is the notion that a faction that comprises a particular proportion of the electorate should be able to dictate the make-up of that same proportion of the elected body. But this can be subject to rounding and there can be disagreement as to what is reasonable when some sort of rounding is necessary. However, taken to its logical conclusions, each voter individually can be seen as a faction of \(\nicefrac{1}{\nu}\) of the electorate for \(\nu\) voters.
This makes PRIL a fairly uncontroversial criterion, and it is why we are using it as our proportionality criterion in this paper. One potential downside is that it does not define anything about the route to Perfect Representation, other than that it must be reached in the limit as the number of candidates increases. However, in that respect it has similarities with Independence of
Clones, which is a well-established criterion. Candidates are only considered clones if they are approved on exactly the same ballots (or ranked consecutively for ranked-ballot methods). We would also want a method passing Independence of Clones to behave in a sensible manner with near clones, but it is generally trusted that unless a method has been heavily contrived then it would do this. Similarly, one would expect the route to Perfect Representation in a method passing PRIL to be a smooth and sensible one unless a method is heavily contrived, and none of the methods considered in this paper are contrived in such a manner.
This also ties in with strong monotonicity, where a method could be contrived so that each extra approval gives such a tiny improvement to the set score that it wouldn't ever make a difference to the result of an election, except effectively as a tie-break. But as with the PRIL case, none of the methods discussed in this paper behave in such a manner.
For a deterministic approval method where a fixed number of candidates are elected, a stronger proportionality criterion is Perfect Representation when Candidates Equals Voters (PR-CEV): if the number of elected candidates is equal to the number of voters (\(\nu\)), then it must be possible for each voter to be assigned to a unique candidate that they approved, as long as it is possible from the ballot profile. This is because no compromise due to rounding is necessary at that point. However, because COWPEA and COWPEA Lottery are not this type of method (COWPEA does not elect a fixed number of candidates and COWPEA Lottery is non-deterministic), this paper will stick with PRIL as its primary proportionality criterion.
## 3 Methods to be discussed
Although there are many proportional approval methods described in the literature, there are a small number of methods and their variants that are widely known and commonly discussed. It is therefore worthwhile to discuss these methods relative to the criteria specified above. The methods to be discussed in this paper are:
**Proportional Approval Voting (PAV)** by Thorvald N. Thiele (1895).
**Phragmen's Voting Rules** by Lars Edvard Phragmen (1899) (see Janson, 2018, for an English language discussion of this and PAV).
**Fully Proportional Representation** by Burt Monroe (1995).
**Chamberlin-Courant Rule**, first described by Thiele (1895) and then later independently by Chamberlin and Courant (1983).
There are also various subtractive quota methods, which elect candidates sequentially and remove the quota of votes required to elect each candidate (e.g. a Hare or Droop quota). However, while some of these methods may be of practical interest, they are less so from a theoretical and criterion-compliance point of view, given their crude method of operation and lack of optimisation of any particular measure. This is why they are not being considered in this paper.
### Proportional Approval Voting (PAV)
Proportional Approval Voting gives satisfaction scores to voters based on the number of elected candidates that they have approved. If a voter has approved \(k\) elected candidates, their satisfaction
score is \(1+\nicefrac{{1}}{{2}}+...+\nicefrac{{1}}{{k}}\) (this is the harmonic series). The set of candidates that gives the highest sum of satisfaction scores is the winning set. It is a generalisation of the D'Hondt party list method. There are variants with different divisors (such as one which generalises the Sainte-Lague party list method), but these do not affect the criterion compliance for the criteria discussed in this paper.
PAV passes strong monotonicity and IIB, but fails PRIL and IUAC. Since each extra approval for a candidate simply adds to the sum of satisfaction scores of all the sets containing that candidate, it passes strong monotonicity fairly trivially. If a voter is added that approves every candidate (or no candidates), then their satisfaction score for every possible candidate set would be identical, so their ballot would make no difference to the outcome. If the winning candidate set contains candidate \(A\) but not candidate \(B\), then adding a ballot that approves neither or both would similarly make no difference to their relative performance and cannot cause a swap in the winning set. This demonstrates IIB compliance. Example 1 shows a failure of IUAC:
**Example 1**: 6 to elect
\(2n\) voters: _U1-U3_; _A1-A3_ (these are the six candidates that have been approved by these voters) \(n\) voters: _U1-U3_; _B1-B3_
With 3 seats and no \(U\) candidates, PAV would proportionally elect _A1-A2_, _B1_ (or an equivalent result with 2 _A_s and 1 _B_). To pass IUAC, therefore, PAV should elect _U1-U3_, _A1-A2_, _B1_. However, it would elect _U1-U3_, _A1-A3_ over the IUAC-compliant result by a score of \(6.73n\) to \(6.65n\) (Note 1). At an intuitive level, this is because PAV gives the result where the \(2n\) voters have double the number of elected candidates of the \(n\) voters (6 to 3). The IUAC-compliant result gives them 5 and 4 candidates respectively.
This in itself might not seem too unreasonable. However, using a modified version of example 1, PAV can even be shown to fail PRIL.
**Example 2**: 20 to elect
\(2n\) voters: _U1-U10_; _A1-A10_
\(2n\) voters: _U1-U10_; _B1-B10_
\(n\) voters: _C1-C20_
In this case, proportionally the _UA_ and _UB_ factions should have 16 elected candidates between them, and the \(C\) faction should have 4 candidates. The \(U\) candidates clearly give a higher satisfaction score than either the \(A\) or \(B\) candidates because of the Pareto dominance, so would be elected in preference to them. The winning candidate set should therefore be _U1-U10_, _A1-A3_, _B1-B3_, _C1-C4_. However, under PAV, _U1-U10_, _A1-A2_, _B1-B2_, _C1-C6_ gives a higher satisfaction score (\(14.86n\) to \(14.80n\); Note 2), and this is the set of candidates that would be elected. The \(4n\) members of the _UA_ and _UB_ factions only get 14 candidates, whereas the \(n\) members of the \(C\) faction get 6. Because of this result, PAV cannot be said to be truly proportional.
Again, at an intuitive level, this is because the PAV-preferred result of _U1-U10_, _A1-A2_, _B1-B2_, _C1-C6_ gives the _UA_ voters 12 elected candidates, _UB_ voters also 12, and the \(C\) voters 6, which fits in with the faction sizes, not taking into account the overlap. The proportional result of _U1-U10_, _A1-A3_, _B1-B3_, _C1-C4_ awards 13, 13, 4, which looks disproportional to PAV. To be clear, this result is not because a lack of seats has caused a rounding error. This is PAV's version of proportionality. Overlapping factions count against the voters in these factions under PAV.
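As a check on the satisfaction scores quoted in examples 1 and 2, the short sketch below recomputes them with \(n=1\). The helper names are ours; the calculation is simply Thiele's harmonic scoring described above.

```python
from fractions import Fraction

def harmonic(k: int) -> Fraction:
    # PAV satisfaction of a voter who approves k elected candidates: 1 + 1/2 + ... + 1/k.
    return sum((Fraction(1, i) for i in range(1, k + 1)), Fraction(0))

def pav_score(ballots, elected) -> Fraction:
    # ballots: list of (voter count, set of approved candidates).
    return sum(count * harmonic(len(approved & elected)) for count, approved in ballots)

# Example 1 with n = 1.
U3, A3, B3 = {"U1", "U2", "U3"}, {"A1", "A2", "A3"}, {"B1", "B2", "B3"}
example1 = [(2, U3 | A3), (1, U3 | B3)]
print(float(pav_score(example1, U3 | A3)))                   # ~6.73 (PAV's choice)
print(float(pav_score(example1, U3 | {"A1", "A2", "B1"})))   # 6.65  (IUAC-compliant set)

# Example 2 with n = 1.
U10 = {f"U{i}" for i in range(1, 11)}
A10 = {f"A{i}" for i in range(1, 11)}
B10 = {f"B{i}" for i in range(1, 11)}
C20 = {f"C{i}" for i in range(1, 21)}
example2 = [(2, U10 | A10), (2, U10 | B10), (1, C20)]
proportional = U10 | {"A1", "A2", "A3", "B1", "B2", "B3", "C1", "C2", "C3", "C4"}
pav_choice = U10 | {"A1", "A2", "B1", "B2", "C1", "C2", "C3", "C4", "C5", "C6"}
print(float(pav_score(example2, proportional)))  # ~14.80
print(float(pav_score(example2, pav_choice)))    # ~14.86
```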
It might seem strange to include a method that is not truly proportional in a discussion of proportional approval methods. However, by some proportionality criteria (e.g. where simple party voting is used), PAV does pass, and it is also widely discussed in the literature so it deserves inclusion here. There is also optimised variable-candidate-weight PAV, discussed in section 7, which might pass PRIL and Perfect Representation, though it has not been proved.
### 3.2 Phragmen's Voting Rules
Phragmen considered various related methods (see Janson, 2018), but the ones we will consider for the purposes of this paper are max-Phragmen (also known as leximax-Phragmen) and var-Phragmen (see e.g. Brill et al., 2017). These are optimised elect-all-candidates-at-once variants, rather than sequential ones, as the sequential variants do not have the same criterion compliance.
Phragmen's methods use the concept of "loads". Each elected candidate has a load of 1 that is spread across their voters, but this does not have to be done evenly. For example, if 10 voters approve a particular elected candidate, the mean load on the voters of that candidate would be \(\nicefrac{1}{10}\), but this can vary across the voters. Allowing the loads on each voter to be unequal avoids failures of PRIL and monotonicity. An "ideal" result is when the total load from all elected candidates on every voter is equal. This means that Perfect Representation has been achieved. Max-Phragmen elects the candidate set that minimises the maximum voter load, whereas var-Phragmen elects the set that minimises the sum of the squared loads (or, equivalently, the variance). Max-Phragmen generalises the D'Hondt party list method, whereas var-Phragmen generalises the Sainte-Lague party list method. They have the same criterion compliance as each other for the criteria we are looking at, and both pass PRIL and weak monotonicity, but fail IIB (with full ballots) and IUAC. They also pass Perfect Representation. We will start by looking at their failure of strong monotonicity.
**Example 3**: 2 to elect
\(n\) voters: AB
\(n\) voters: AC
If the load from a candidate had to be spread equally across their voters, then \(BC\) would be the winning set, despite universal support for \(A\). Allowing unequal load-spreading means that AB (and AC) ties with \(BC\), by allowing all voters to have identical total loads. However, this tie is a failure of strong monotonicity. Just a small change would cause the non-election of \(A\).
**Example 4**: 2 to elect
\(99n\) voters: \(AB\)
\(99n\) voters: \(AC\)
\(n\) voters: \(B\)
\(n\) voters: \(C\)
In this case \(BC\) wins outright under max-Phragmen and var-Phragmen because it is the only way to equalise the load spread across the voters. Candidate \(A\) is approved by 99% of voters but is not elected. Candidates \(B\) and \(C\) are each approved by just 50% of the voters. The \(99n\) could be increased to any value, and the result would not change. This is the consequence of weak monotonicity.
It should be noted that electing \(BC\) does mean that each voter would be able to be uniquely assigned to \(\nicefrac{1}{v}\) of the representation (for \(v\) voters), unlike with \(AB\) or \(AC\), making \(BC\) a more proportional result in that respect, and passing Perfect Representation. However, under PRIL, there is no requirement for a method to give such a result except in the limit as the number of elected candidates increases, and here we can see why. This example demonstrates why the Perfect Representation criterion is incompatible with strong monotonicity for a fixed number of candidates, and this is why it is not being considered as an important criterion in this paper. If we imagine that \(A\), \(B\), and \(C\) are parties fielding multiple candidates and increase the number of seats, then PRIL would require only that \(B\) and \(C\) each take at least \(\nicefrac{1}{200}\) of the seats in the limit.
Electing sequentially means that max-Phragmen and var-Phragmen would elect candidate \(A\), but it is still possible to create examples that violate strong monotonicity when the methods are used sequentially.
**Example 5**: 2 to elect
\(2n\) voters: \(A\)
\(n\) voters: \(AB\)
\(2n\) voters: \(ABC\)
\(n\) voters: \(BC\)
In this example, \(A\) has the most voters (\(5n\)) and would be elected first. Either \(B\) or \(C\) could be elected next to give Perfect Representation, giving a tie, despite \(B\) having more approvals than \(C\) (\(4n\) to \(3n\)), and also Pareto dominating \(C\). Weak monotonicity means that the extra \(n\) voters for \(B\) make no difference, except perhaps in a tie-break. We can then change this scenario slightly:
**Example 6**: 2 to elect
\(20n\) voters: \(A\)
\(10n\) voters: \(AB\)
\(20n\) voters: \(ABC\)
\(10n\) voters: \(BC\)
\(n\) voters: \(C\)
Those \(n\) voters who approve only \(C\) now mean that \(C\) is elected over \(B\), despite \(B\) still having more votes by \(40n\) to \(31n\). While those \(n\)\(C\)-only voters remain, adding \(B\) to other ballots would only get \(B\) elected by eventually surpassing \(A\) in terms of total approvals. Each approval would make no difference to the score of \(AB\) (either the maximum load on a single voter or the sum of squared loads). So the failure of strong monotonicity is still evident in a sequential election. We will now move on to an IIB failure (full ballots). We will deal with max-Phragmen first.
**Example 7**: 2 to elect
\(6n\) voters: \(A1\)-\(A2\)
\(3n\) voters: \(B1\)-\(B2\)
This example gives a tie between \(A1\)-\(A2\) and \(A1\), \(B1\). If \(A1\)-\(A2\) is the elected set, the \(6n\) \(A\) voters would all have a load of \(\nicefrac{1}{3n}\), and the \(3n\) \(B\) voters would have a load of zero. This gives a max load of \(\nicefrac{1}{3n}\). If \(A1\), \(B1\) is elected, the \(A\) voters would have a load of \(\nicefrac{1}{6n}\) and the \(B\) voters would have a load of \(\nicefrac{1}{3n}\). This also gives a max load of \(\nicefrac{1}{3n}\). So we have a tie between \(A1\)-\(A2\) and \(A1\), \(B1\). We can now add some "irrelevant" ballots.
**Example 8**: \(2\) to elect
\(6n\) voters: \(A1\)-\(A2\)
\(3n\) voters: \(B1\)-\(B2\)
\(2n\) voters: \(A1\)-\(A2\); \(B1\)-\(B2\)
In the case of \(A1\)-\(A2\) being elected, the \(8n\) \(A\) voters would all have a load of \(\nicefrac{1}{4n}\), and the \(3n\) \(B\)-only voters would have a load of zero. This gives a max load of \(\nicefrac{1}{4n}\). If \(A1\), \(B1\) is elected, then the best way to balance the loads is to assign the \(AB\) voters exclusively to \(B1\), so give them none of the load from \(A1\). The load on the \(6n\) voters assigned to \(A\) would be \(\nicefrac{1}{6n}\) and the load on the \(5n\) voters assigned to \(B\) would be \(\nicefrac{1}{5n}\). This gives a max load of \(\nicefrac{1}{5n}\), which is lower than for the \(A1\), \(A2\) result. \(A1\), \(B1\) would therefore be elected. As this is no longer the tie from the previous example, this is a failure of IIB with full ballots.
Because max-Phragmen is a generalisation of the D'Hondt party list method and var-Phragmen is a generalisation of the Sainte-Lague party list method, the voter ratios needed to make a tie are slightly different, so the failure example for var-Phragmen is different.
**Example 9**: \(2\) to elect
\(9n\) voters: \(A1\)-\(A2\)
\(3n\) voters: \(B1\)-\(B2\)
This example gives a tie between \(A1\)-\(A2\) and \(A1\), \(B1\). In the case of \(A1\)-\(A2\) being elected, all the \(A\) voters would have a load of \(\nicefrac{2}{9n}\) and the \(B\) voters would have a load of zero. In the case of \(A1\), \(B1\) being elected, the load on the \(A\) voters would be \(\nicefrac{1}{9n}\) and the load on the \(B\) voters would be \(\nicefrac{1}{3n}\). Both these results give the same sum of squared loads of \(\nicefrac{4}{9n}\) (Note 3). This gives a tie between \(A1\)-\(A2\) and \(A1\), \(B1\). As before, we can now add some irrelevant ballots.
**Example 10**: \(2\) to elect
\(9n\) voters: \(A1\)-\(A2\)
\(3n\) voters: \(B1\)-\(B2\)
\(2n\) voters: \(A1\)-\(A2\); \(B1\)-\(B2\)
If \(A1\)-\(A2\) is elected, all \(11n\) \(A\) voters would have a load of \(\nicefrac{2}{11n}\) and the \(B\)-only voters would have a load of zero. If \(A1\), \(B1\) is elected, the best way to balance the loads is to assign the \(AB\) voters exclusively to \(B1\). The load on the \(9n\) voters assigned to \(A\) would be \(\nicefrac{1}{9n}\) and the load on the \(5n\) voters assigned to \(B\) would be \(\nicefrac{1}{5n}\). This gives a lower sum of squared loads than the \(A1\)-\(A2\) result, by approximately \(\nicefrac{0.31}{n}\) to \(\nicefrac{0.36}{n}\), meaning that \(A1\), \(B1\) would be elected (Note 4). As with max-Phragmen, this is no longer the tie from the previous example, so this is a failure of IIB with full ballots. Max-Phragmen and var-Phragmen do not fail IIB with empty ballots, however, because there is no quota or fixed proportion of voters a candidate has to represent.
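The load arithmetic in examples 9 and 10 can be checked with the minimal sketch below (taking \(n=1\)); the per-voter loads used here are the hand-balanced ones described above, not the output of a general var-Phragmen optimiser.

```python
from fractions import Fraction as F

def sum_squared_loads(groups) -> F:
    # groups: (number of voters, load carried by each voter in that group),
    # under a given spreading of each elected candidate's unit load.
    return sum(count * load * load for count, load in groups)

# Example 9 (n = 1): both candidate sets tie at 4/9.
print(sum_squared_loads([(9, F(2, 9))]))                 # A1-A2 elected
print(sum_squared_loads([(9, F(1, 9)), (3, F(1, 3))]))   # A1, B1 elected

# Example 10 (n = 1): A1, B1 now wins, roughly 0.311 versus 0.364.
print(float(sum_squared_loads([(11, F(2, 11))])))               # A1-A2 elected
print(float(sum_squared_loads([(9, F(1, 9)), (5, F(1, 5))])))   # A1, B1 elected
```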
Var-Phragmen also fails IUAC with a similar example:
**Example 11**: \(3\) to elect
\(9n\) voters: \(U\), \(A1\)-\(A2\)
\(3n\) voters: \(U\), \(B1\)-\(B2\)
This is exactly the same as example 9 except with a universally approved candidate added, and an extra seat for them to fill. For var-Phragmen to pass IUAC, there should therefore be a tie between \(U\), \(A1\)-\(A2\) and \(U\), \(A1\), \(B1\). If \(U\), \(A1\)-\(A2\) is elected, the load can be spread equally across all voters, giving Perfect Representation (the loads can be set so that \(4n\) voters are effectively assigned to each candidate). But if \(U\), \(A1\), \(B1\) is elected, then \(B1\) is responsible for a third of the load but only has a quarter of the support, so Perfect Representation cannot be achieved. \(U\), \(A1\)-\(A2\) is therefore the winning set, meaning that var-Phragmen fails IUAC. Note that in this failure the \(A\) faction is favoured, which is in the opposite direction from the IIB failure, where the \(B\) faction was favoured.
**Example 12**: 3 to elect
\(6n\) voters: \(U\), \(A1\)-\(A2\)
\(3n\) voters: \(U\), \(B1\)-\(B2\)
This is the max-Phragmen version of example 11. For max-Phragmen to pass IUAC, there should therefore be a tie between \(U\), \(A1\)-\(A2\) and \(U\), \(A1\), \(B1\). In fact both sets give Perfect Representation (\(3n\) voters can be satisfactorily assigned to each candidate in either case), so there is still a tie. In this case at least, max-Phragmen behaves better than var-Phragmen with respect to the IUAC criterion. However, both max-Phragmen and var-Phragmen fail both IIB and IUAC in a more general sense because of their indifference to results when more than one candidate set can give Perfect Representation. Starting with IIB:
**Example 13**: No specified number to elect
\(2n\) voters: \(A1\), \(A2\)...
\(n\) voters: \(B1\), \(B2\)...
\(n\) voters: \(A1\), \(A2\)...; \(B1\), \(B2\)...
A method passing IIB would elect \(A\) and \(B\) candidates in the ratio 2:1 respectively. However, because the loads can be unequally spread, the \(n\)\(AB\) voters can each effectively be turned into an exclusively \(A\) or exclusively \(B\) voter to give a particular set of candidates the best spread of loads. This means that any result in the range from 3:1 to 1:1 \(A\) to \(B\) candidates would be considered equally good under both max-Phragmen and var-Phragmen. This is a failure of IIB. It is a weak failure, since the IIB-compliant result is among the possible winning sets, but any tie-break mechanism that does not give the IIB-compliant result would cause a strong failure. Tie-breaking based on most total approvals would give an \(A\) to \(B\) ratio of 3:1, for example.
Just as a point of interest, var-Phragmen without variable load spread is known as Ebert's Method (Ebert, 2003), and it would elect in the correct 2:1 ratio in example 13 (Note 5), although it does not pass IIB in general, or indeed PRIL or monotonicity, which is why it is not being considered in this paper. However, it could be argued that its idea of exact proportionality is a purer form than even that of Perfect Representation. It would elect \(BC\) in example 3, which is a more balanced, and arguably more proportional, result than one containing \(A\). However, it gives no consideration to anything other than this balance, such as monotonicity, and is unlikely to have too many practical uses.
We can see by revisiting example 1 that max-Phragmen and var-Phragmen fail IUAC:
**Example 1**: 6 to elect
\(2n\) voters: _U1-U3_; _A1-A3_
\(n\) voters: _U1-U3_; _B1-B3_
Assuming all the \(U\) candidates are elected, the variable load mechanism gives three possible candidate sets Perfect Representation (considering the candidates to be interchangeable with others of the same letter): _U1-U3_, _A1-A3_; _U1-U3_, _A1-A2_, _B1_; _U1-U3_, _A1_, _B1-B2_. The IUAC-compliant set is within this, so as with IIB, it is a weak failure, although a tie-break based on most approvals would elect _U1-U3_, _A1-A3_. This would make it a strong failure, as with their IIB failure.
Without the variable load mechanism, both methods would pass IUAC (so Ebert's method passes). Adding a universally approved candidate into an extra seat would simply add \(1/v\) to the load of each voter in every possible candidate set (for \(v\) voters). This would increase the maximum load by \(1/v\) for each set (leaving the relative order unchanged), and also leave the variance unchanged.
All methods discussed in this paper apart from PAV and COWPEA (along with COWPEA Lottery) are guilty of overly failing to distinguish between results, due to weak monotonicity and essentially apathy once Perfect Representation is achieved.
It is worth pointing out here that weakly failing a criterion is not the same thing as weakly passing it. Weakly failing is when a change can cause an undesirable effect, but only to the point of a tie. Weakly passing is when making a change fails in some cases to cause the desired effect. Weakly failing is worse.
### Fully Proportional Representation
Monroe's original version of Fully Proportional Representation was for ranked ballots, but the version for approval ballots is the method being discussed in this paper (see Lackner & Skowron, 2023). Each voter is assigned to a candidate, where each elected candidate has the same number of voters assigned to them. The winning candidate set is the one that allows the most voters to be assigned to a candidate that they approved. It is a generalisation of the Hamilton apportionment method.
Fully Proportional Representation passes PRIL, but is only weakly monotonic, and it fails IIB and IUAC. It also passes Perfect Representation and is underspecified in that it only takes into account what a voter thinks of their one representative. This all means that it behaves in essentially the same way as max/var-Phragmen in examples 3 to 6, which demonstrate a failure of strong monotonicity. It fails IIB in example 10 and IUAC in example 11 in a similar way to var-Phragmen. It also fails IIB due to being underspecified in example 13, and IUAC in example 1. Unlike any other method discussed in this paper, Fully Proportional Representation also fails IIB with empty ballots.
**Example 14**: 2 to elect
\(9n\) voters: _A1-A2_
\(3n\) voters: _B1-B2_
\(2n\) voters: _C1-C2_
This is the same as example 9 except with two irrelevant ballots added, approving only candidates that will not be elected. Fully Proportional Representation gives the same tie in example 9 (between _A1_-_A2_ and _A1_, _B1_) as var-Phragmen. With \(14n\) voters in total, each elected candidate would have \(7n\) voters assigned to them. In the case of _A1_-_A2_ being elected, \(9n\) voters (all of the \(A\) voters) would be assigned to a candidate that they approved. In the case of _A1_, _B1_, it would be \(7n\)_A_ voters and all \(3n\)_B_ voters, making \(10n\). _A1_, _B1_ would be elected instead of the tie from example 9, giving a failure of IIB with empty ballots. This example would also give the same result if no candidates at all were approved on the irrelevant ballots. As said, this is the only method of those discussed that fails IIB in this way.
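The coverage computation can be phrased as a maximum-flow problem. The sketch below is my own formulation (not taken from Monroe or from Lackner & Skowron); it uses the networkx library, takes \(n=1\), and assumes the voter total divides evenly among the winners:

```python
import itertools
import networkx as nx

def fpr_coverage(blocs, committee, total_voters):
    """Most voters assignable to a winner they approve, with each winner
    assigned exactly total_voters / len(committee) voters (assumed integral)."""
    quota = total_voters // len(committee)
    G = nx.DiGraph()
    for i, (approved, count) in enumerate(blocs):
        G.add_edge("source", ("bloc", i), capacity=count)
        for c in approved & committee:
            G.add_edge(("bloc", i), c, capacity=count)
    for c in committee:
        G.add_edge(c, "sink", capacity=quota)
    value, _ = nx.maximum_flow(G, "source", "sink")
    return value

# Example 14 profile (n = 1), 2 to elect.
blocs = [({"A1", "A2"}, 9), ({"B1", "B2"}, 3), ({"C1", "C2"}, 2)]
candidates = {"A1", "A2", "B1", "B2", "C1", "C2"}
best = max(itertools.combinations(sorted(candidates), 2),
           key=lambda pair: fpr_coverage(blocs, set(pair), 14))
print(best, fpr_coverage(blocs, set(best), 14))   # ('A1', 'B1') 10, versus 9 for A1-A2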
### Chamberlin-Courant Rule
Like Fully Proportional Representation, Chamberlin-Courant Rule elects a fixed number of candidates and assigns each voter to a candidate, electing the candidate set that allows the most voters to be assigned to a candidate that they approved. The difference is that Chamberlin-Courant Rule allows each candidate to represent a different number of voters, and weights the candidates in the elected body accordingly.
Chamberlin-Courant Rule passes PRIL and weak monotonicity, but fails IIB (with full ballots) and IUAC. It passes Perfect Representation and behaves essentially the same as max/var-Phragmen and Fully Proportional Representation in examples 3 to 6 to fail strong monotonicity. It also behaves in essentially the same way in example 13 to fail IIB (full ballots) and example 1 to fail IUAC. Its ability to allow different numbers of voters to be assigned to each candidate, and weight the candidates accordingly, means that even more ties are possible (such as in example 1 where a continuum of candidate weights is possible). Like Fully Proportional Representation, it does not take into account a voter's views on anything other than the one candidate they are assigned to, so it is underspecified as a method, which gives a lot of weak failures. It does not fail IIB with empty ballots, however, as there is no quota or fixed proportion of voters a candidate must represent. It also does not give the strong failures of IIB and IUAC displayed by max/var-Phragmen and Fully Proportional Representation in the examples.
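A brute-force sketch of the selection step (my own illustration; it leaves out the candidate weighting described above and any tie-breaking):

```python
import itertools

def chamberlin_courant(blocs, candidates, k):
    """Brute-force CC selection: the k-set maximising the number of voters
    who approve at least one winner."""
    def coverage(committee):
        return sum(count for approved, count in blocs if approved & committee)
    return max((frozenset(c) for c in itertools.combinations(sorted(candidates), k)),
               key=coverage)

# Example 14 profile (n = 1): a set with one A and one B candidate, covering 12 of 14 voters.
blocs = [({"A1", "A2"}, 9), ({"B1", "B2"}, 3), ({"C1", "C2"}, 2)]
print(chamberlin_courant(blocs, {"A1", "A2", "B1", "B2", "C1", "C2"}, 2))
```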
## 4 COWPEA and COWPEA Lottery
### Cowpea
COWPEA stands for "Candidates Optimally Weighted in Proportional Election using Approval voting". COWPEA does not elect a fixed number of candidates to a fixed number of positions with equal weight, but instead elects an unlimited number of candidates with potentially different weights. This is because COWPEA is not simply an election method but an attempt at an answer to the question: How do we determine the mathematically optimal candidate weights in a proportional election? The proportion of the weight each candidate gets in the elected body is equal to their probability of being elected in the following lottery:
Start with a list of all candidates. Pick a ballot at random and remove from the list all candidates not approved on this ballot. Pick another ballot at random, and continue with this process until one candidate is left. Elect this candidate. If the number of candidates ever goes from \(>\)1 to 0 in one go,
ignore that ballot and continue. If any tie cannot be broken, then elect the tied candidates with equal probability.
This is similar to a tie-break mechanism for single-winner score voting proposed by Smith & Smith (2007):
Choose a ballot at random, and use those ratings to break the tie. (I.e. if the tied candidates are A and B, and the randomly chosen ballot scores A higher than B, then A wins.) In the unlikely event this ballot still indicates that some or all of the tied candidates are tied, then one chooses at random again, and continues until the number of tied candidates is reduced to a unique winner.
With COWPEA, however, it is an entire method in its own right rather than just one suggestion for a tie-break for another method!
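A minimal Python sketch of the lottery described above (illustrative only; the function name is my own, ballots are represented as sets of approved candidates, and drawing is restricted to ballots that actually change the list, which is equivalent to drawing any ballot and ignoring the unhelpful ones):

```python
import random

def cowpea_draw(ballots, candidates, rng=random):
    """One run of the lottery: returns a single winner.

    ballots: list of sets of approved candidates, one per voter.
    """
    remaining = set(candidates)
    while len(remaining) > 1:
        # Ballots that would narrow the list to a nonempty proper subset;
        # ballots approving none or all of the remaining candidates are skipped.
        narrowing = [b for b in ballots if b & remaining and not remaining <= b]
        if not narrowing:                      # unbreakable tie
            return rng.choice(sorted(remaining))
        remaining &= rng.choice(narrowing)
    return next(iter(remaining))
```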
Because each voter's ballot would be the starting ballot on \(1/v\) of occasions (for \(v\) voters), it would always be possible for each voter to be uniquely assigned to \(1/v\) of the representation, approved by them (as long as voters have all approved at least one candidate), meaning that it passes PRIL, as well as Perfect Representation. Furthermore, if the ballot at the start of an iteration of the algorithm has approved more than one candidate, then the candidates approved on that ballot would be elected in a proportional manner according to the rest of the electorate, with potentially many ballots looked at, so the proportionality runs all the way down.
COWPEA is strongly monotonic. Approving a candidate can never decrease their weight in the elected body. If a candidate is Pareto dominated by another candidate both before and after an extra approval, then their weight would be zero in either case, so it's possible for an approval to make no difference, but this is expected behaviour. It is also possible for a candidate's weight to already be at 100% in the elected body without being fully approved, if there are ballots that approve no candidates at all. In all other cases, an extra approval would increase the candidate's weight in the elected body. Strong monotonicity is incompatible with Perfect Representation only in methods that elect a fixed number of candidates, so COWPEA is exempt from this incompatibility, and it passes both criteria.
COWPEA passes IIB. If a ballot has approved none or all of the remaining candidates at some stage of the lottery, then it would be ignored.
IUAC does not properly apply to COWPEA since any universally approved candidates would take all the weight in the elected body. All other candidates would be Pareto dominated and would have no weight.
### COWPEA Lottery
Unlike COWPEA, COWPEA Lottery elects a fixed number of candidates to a fixed number of positions with equal weight, so it can be used for elections of this type. It is simply the method that runs the lottery \(k\) times for \(k\) candidates to be elected. For each iteration of the lottery, the list starts with all currently unelected candidates.
The algorithm, to be run \(k\) times, would be:
Start with a list of all currently-unelected candidates. Pick a ballot at random and remove from the list all candidates not approved on this ballot. Pick another ballot at random, and continue with this process until one candidate is left. Elect this candidate. If the number of candidates ever goes from >1 to 0 in one go, ignore that ballot and continue. If any tie cannot be broken, then elect the tied candidates with equal probability.
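Reusing the `cowpea_draw` sketch given earlier, the \(k\)-seat version is simply a loop that shrinks the candidate pool after each draw (again an illustrative sketch, not reference code):

```python
import random

def cowpea_lottery(ballots, candidates, k, rng=random):
    """Elect k equally weighted winners by running the draw k times."""
    elected, pool = [], set(candidates)
    for _ in range(k):
        winner = cowpea_draw(ballots, pool, rng)
        elected.append(winner)
        pool.discard(winner)
    return elected
```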
While COWPEA Lottery is non-deterministic and cannot guarantee proportionality in a given election, it is proportional in the limit, so it passes the PRIL criterion.
COWPEA Lottery is strongly monotonic. An approval would always increase the probability of that candidate being elected unless that candidate is Pareto dominated by at least \(k\) candidates even after the extra approval (as they would definitely not be elected), or if they are guaranteed to be elected anyway. This fits the definition used in this paper.
It also passes IIB. As with plain COWPEA, if a ballot has approved none or all of the remaining candidates at some stage of the lottery, then it would be ignored.
And it passes IUAC. If there are any universally approved candidates, then they would be elected to the first positions. The election would then continue to run the same as if there had been no universally approved candidates.
There are no other methods known to pass all these criteria. COWPEA and COWPEA Lottery break new ground in this regard, although see the discussion of Optimised PAV in section 7.
### Score Voting conversion
COWPEA and COWPEA Lottery can be turned into score voting methods. For example, they can be used with the Kotze-Pereira Transformation (see Pereira, 2016) to achieve this. This converts scores into approvals, by "splitting" a voter into s parts (numbered from 1 to s) for a maximum score of \(s\). Part \(n\) of s approves all candidates given a score of \(n\) or higher. From there everything else would be run the same. Alternatively, and more simply, scores or grades can instead be used as different layers of approval, so the actual value of the score is not relevant. For this, the lottery would become:
Start with a list of all currently-unelected candidates. Pick a ballot at random and remove from the list all candidates not at the highest score/grade given to any candidate currently on the list. Pick another ballot at random, and continue with this process until one candidate is left. Elect this candidate. If any tie cannot be broken, then elect the tied candidates with equal probability.
The score and graded versions of COWPEA and COWPEA Lottery have the same criterion compliance as the standard approval versions.
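A sketch of the Kotze-Pereira splitting step described earlier in this subsection (the equal \(1/s\) weight per layer is my reading of "splitting a voter into \(s\) parts", not a detail spelled out here):

```python
def kotze_pereira(score_ballot, max_score):
    """Split one score ballot into weighted approval layers.

    score_ballot: dict mapping candidate -> integer score in 0..max_score.
    Layer n approves every candidate scored at least n; each of the
    max_score layers carries an equal share of the voter's weight.
    """
    layers = []
    for level in range(1, max_score + 1):
        approved = frozenset(c for c, s in score_ballot.items() if s >= level)
        layers.append((approved, 1.0 / max_score))
    return layers

# A 0-5 ballot becomes five approval layers of weight 0.2 each.
print(kotze_pereira({"A": 5, "B": 3, "C": 0}, max_score=5))
```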
### Determinism and the Holy Grail
While COWPEA itself is deterministic, there is no known deterministic version of the method that elects a fixed number of candidates with equal weight, while retaining the same criterion compliance as COWPEA Lottery. This version of the Holy Grail - a deterministic approval method that passes PRIL, strong monotonicity, IIB and IUAC - remains undiscovered, and its possibility
status remains unknown. Obviously if being deterministic and electing a fixed number of candidates with equal weight is a requirement in a particular election, then COWPEA and COWPEA Lottery would not be appropriate for use in this election. However, if the Holy Grail is the method that produces mathematically optimal candidates weights, then COWPEA is a candidate for this.
### Election examples
We will now revisit the previous election examples to see the performance of COWPEA and COWPEA Lottery:
**Example 1**: 6 to elect (number to elect relevant only to COWPEA Lottery since COWPEA only deals with proportions)
_2n_ voters: _U1-U3_; _A1-A3_
\(n\) voters: _U1-U3_; _B1-B3_
Under COWPEA, the \(U\) candidates Pareto dominate all others, so would each be elected with \(1/3\) of the total weight. No other candidates would be elected, and this demonstrates why IUAC is not applicable to COWPEA. Under COWPEA Lottery, the \(U\) candidates would be elected to the first 3 positions. It would not be determined which candidates would be elected to the final 3 positions, but on average, \(A\) candidates would take 2 seats and \(B\) candidates would take 1, just as they would if there were no \(U\) candidates and just 3 seats. It therefore passes IUAC in this case, the only method to do so of those considered in this paper.
**Example 2**: 20 to elect
_2n_ voters: _U1-U10_; _A1-A10_
_2n_ voters: _U1-U10_; _B1-B10_
\(n\) voters: _C1-C20_
In this case, COWPEA would elect the \(U\) candidates with \(4/5\) of the weight between them, and the \(C\) candidates with \(1/5\) of the weight (the Pareto dominated \(A\) and \(B\) candidates would have no weight). COWPEA lottery would elect on average 4 \(C\) candidates, which is the correct proportional result. PAV by contrast would elect 6 \(C\) candidates demonstrating a failure of PRIL.
**Example 3**: 2 to elect
\(n\) voters: _AB_
\(n\) voters: _AC_
COWPEA would elect \(A\) with all of the weight, as \(A\) Pareto dominates \(B\) and \(C\). COWPEA Lottery, being strongly monotonic, would elect \(A\) with 100% probability, and then \(B\) or \(C\) with a probability of \(1/2\) each. PAV is the only other strongly monotonic method. The other methods would consider \(BC\) equal to the sets containing \(A\).
**Example 4**: 2 to elect
_99n_ voters: \(AB\)
_99n_ voters: \(AC\)
\(n\) voters: \(B\)
\(n\) voters: \(C\)
COWPEA would elect \(A\) with 0.9801 of the weight, with the remaining 0.0199 shared equally between \(B\) and \(C\).\({}^{\textsc{Note}}\) 6 COWPEA Lottery would elect \(BC\) with a probability of 0.000199, which is less than 1 in 5000.\({}^{\textsc{Note}}\) 7 AB and \(AC\) would share the rest of the probability equally. So \(A\) has more than a 4999 in 5000 probability of being elected. This is the result of being strongly monotonic, with each extra vote for \(A\) counting towards their chances of being elected. Of the other methods discussed, only PAV would elect a set containing candidate \(A\).
Examples 5 and 6 were also used to demonstrate a failure of strong monotonicity, by showing that making the failing methods sequential does not cause them to pass this criterion. Since COWPEA and COWPEA Lottery pass this criterion anyway, there is no need to consider these examples.
Examples 7 to 11 and 13 are variants to test IIB with full ballots. To avoid duplication, we will just look at example 13:
**Example 13:** No specified number to elect
\(2n\) voters: _A1_, _A2_...
\(n\) voters: _B1_, _B2_...
\(n\) voters: _A1_, _A2_...; _B1_, _B2_...
COWPEA would elect \(A\) and \(B\) candidates with a 2:1 weight ratio respectively. COWPEA Lottery would elect this ratio in the limit. The ballots of the \(n\)_AB_ voters would make no difference at any point as they would effectively be ignored. COWPEA and COWPEA Lottery pass IIB with full ballots along with only PAV of the other methods.
Example 12 is a variant of example 1 to test IUAC. And that leaves example 14:
**Example 14:** 2 to elect
\(9n\) voters: _A1-A2_
\(3n\) voters: _B1-B2_
\(2n\) voters: _C1-C2_
This example was used to show a failure of IIB with empty ballots, by adding ballots that approve only candidates that won't be elected. However, \(C\) would be elected with some weight under COWPEA and with some probability under COWPEA Lottery, so these ballots would not actually be irrelevant for these methods. But if we instead imagine that the \(2n\) voters approved no candidates at all, the ballots would be ignored and not affect the result, demonstrating that COWPEA and COWPEA Lottery pass this form of IIB. Of the methods discussed, only Fully Proportional Representation actually fails IIB with empty ballots.
None of the examples give a problematic result for either COWPEA or COWPEA Lottery. This is due to their criterion compliance as already outlined.
## 5 Criterion compliance table and discussion
The following table gives the criterion compliance of the methods discussed:
| **Method \ Criterion** | **PRIL** | **Monotonicity** | **IIB** | **IUAC** |
|---|---|---|---|---|
| **COWPEA** | YES | YES (strong) | YES | N/A |
| **COWPEA Lottery** | YES | YES (strong) | YES | YES |
| **PAV** | NO | YES (strong) | YES | NO |
| **Max/Var-Phragmen** | YES | YES (weak) | NO (passes empty but not full) | NO |
| **Fully Proportional Representation** | YES | YES (weak) | NO (fails both empty and full) | NO |
| **Chamberlin-Courant Rule** | YES | YES (weak) | NO (passes empty but not full) | NO |
It is worth discussing the general patterns here. All of the already-existing methods that pass PRIL (max/var-Phragmen, Fully Proportional Representation and Chamberlin-Courant Rule) are only weakly monotonic and none of them pass IIB or IUAC. All of these methods are prone to failing to distinguish between results, which can give a lot of weak failures, although they do all pass Perfect Representation. They tend to give similar results to each other in the examples given. Fully Proportional Representation has the worst criterion compliance of these, failing IIB with empty ballots, which no other method discussed in this paper does. Chamberlin-Courant Rule has the best compliance, with only weak failures in the examples, ignoring any tie-break mechanism, but it is the most prone to ties.
PAV is the only strongly monotonic method of those discussed, also passing IIB, but it fails PRIL, as well as IUAC.
It might seem that IIB and IUAC are mere niceties compared with criteria such as PRIL and strong monotonicity. However, in many of these cases the failures are related. PAV's PRIL failure is related to its failure of IUAC, for example. A lack of decisiveness links max/var-Phragmen, Fully Proportional Representation and Chamberlin-Courant Rule's lack of strong monotonicity with their failures of IIB and IUAC. Because of this connectedness, focusing on finding a method that passes PRIL and one other criterion (in a strong manner) could yield success with the other criteria as a by-product.
The conclusion from this is that we did not even need to specify all four of these criteria to show the superiority of COWPEA and COWPEA Lottery. They perform better than any of the other methods if we specify PRIL and any one of the other three criteria. One caveat is that COWPEA itself doesn't pass IUAC as such, as the criterion is not applicable to it. But it does not behave in an undesirable way regarding it, so the claim still holds.
This is not a complete list of all approval-based proportional methods, so there may be others that perform better than the existing methods, but they are not as well known.
## 6 Other debatable criteria
It is now time to turn to the multi-winner version of Pareto efficiency and the consistency criterion, which turn out to be related. These were not included in the table in section 5 because it is not clear that passing them is a positive.
**Multi-winner Pareto efficiency**: For the winning set of candidates, there should not be another set for which every voter has approved at least as many elected candidates as they have in the winning set, and at least one voter has approved more.
**Consistency**: If two separate elections give the same result (or probability profile in a lottery method), then combining the ballots for a single election should give the same result (or probability profile).
### Multi-winner Pareto efficiency
We can see an example where COWPEA fails multi-winner Pareto efficiency.
**Example 15**: No specified number to elect
\(250n\) voters: \(AC\)
\(250n\) voters: \(AD\)
\(250n\) voters: \(BC\)
\(250n\) voters: \(BD\)
\(2n\) voters: \(C\)
\(2n\) voters: \(D\)
COWPEA would elect each of the candidates in roughly equal proportions. It would be approximately 0.248 each for \(A\) and \(B\), and 0.252 each for \(C\) and \(D\). Note 8 No candidate is Pareto dominated by any other, but as a pair, \(CD\) dominates \(AB\) in this multi-winner sense. According to the multi-winner version of Pareto efficiency, \(C\) and \(D\) should be elected with half the weight each.
This example can be seen as a 2-dimensional voting space with \(A\) and \(B\) at opposite ends of one axis and \(C\) and \(D\) at opposite ends of the other. No voter has approved both \(A\) and \(B\) or both \(C\) and \(D\). Viewed like this, electing only \(C\) and \(D\) seems restrictive and arguably does not make best use of the voting space. This potentially calls into question the utility of this multi-winner Pareto efficiency criterion.
Of the methods discussed, just PAV and Chamberlin-Courant Rule pass multi-winner Pareto efficiency, although Chamberlin-Courant Rule passes only weakly. The following examples demonstrate the other methods' failures and further call into question the utility of the criterion, at least for methods that elect a fixed number of candidates with equal weight.
**Example 16**: 2 to elect
\(5n\) voters: \(AC\)
\(4n\) voters: \(BC\)
\(n\) voters: \(BD\)
Without going into the numbers, the sniff test suggests that \(BC\) should be the winning set. However, that is not what is important here. Compare \(AB\) with \(CD\). If either set is elected then each voter would have approved \(1\) elected candidate, so they would be equal in this regard. But in the case of \(AB\) it would be \(5n\) voters for \(A\) and \(5n\) for \(B\), whereas in the case of \(CD\) it would be \(9n\) for \(C\) and just \(n\) for \(D\). So, assuming equal candidate weight, \(AB\) is more proportional as it gives Perfect Representation, unlike \(CD\). Under the \(CD\) result, one tenth of the voters effectively wield half of the power, so it is clear why this is disproportional, and an undesirable result for a proportional voting method. This example suggests that it is preferable for an individual to share their approved candidate with fewer people. This would mean that the assumption that we can measure satisfaction or utility purely by looking at the number of elected candidates a voter has approved is not valid, and that the name of the criterion is a misnomer. Adding a small proportion of \(C\)-only and \(D\)-only voters would then make \(CD\) Pareto dominate \(AB\) in this multi-winner sense:
**Example 17**: \(2\) to elect
\(50n\) voters: \(A\), \(C\)
\(40n\) voters: \(B\), \(C\)
\(10n\) voters: \(B\), \(D\)
\(n\) voters: \(C\)
\(n\) voters: \(D\)
Despite this dominance in terms of the number of elected candidates approved by each voter, a strong case can still be made for \(AB\) over \(CD\). Otherwise the supposed superiority of \(AB\) over \(CD\) in example 16 would only be of a tie-break nature. Of course, it is still the case that \(BC\) is arguably the best set overall, so perhaps it doesn't matter. But we can easily force the election to be between \(AB\) and \(CD\) by adding some ballots:
**Example 18**: \(2\) to elect
\(100n\) voters: \(AC\)
\(100n\) voters: \(AD\)
\(100n\) voters: \(BC\)
\(100n\) voters: \(BD\)
In this example, every candidate has been approved by half of the voters, but no voter has approved both \(A\) and \(B\) or both \(C\) and \(D\). It is very similar to example 15, and so it can be seen as a \(2\)-dimensional voting space with \(A\) and \(B\) at opposite ends of one axis and \(C\) and \(D\) at opposite ends of the other. Electing \(AB\) or \(CD\) would mean that every voter would have approved one elected candidate. Any other pair would mean that a quarter of voters would have no representative and a quarter would have approved both elected candidates. This would not be optimal under any proportional method. All we now need to do is combine the ballots from examples 17 and 18:
**Example 19**: \(2\) to elect
\(150n\) voters: \(AC\)
\(100n\) voters: \(AD\)
\(140n\) voters: \(BC\)
\(110n\) voters: \(BD\)
\(n\) voters: \(C\)
\(n\) voters: \(D\)
The winning set must now be \(AB\) or \(CD\). If we elect \(BC\), then \(101n\) voters would be without any representation. None of our deterministic methods would elect \(BC\). The \(400n\) ballots from example 18 considered as a whole are neutral regarding \(AB\) and \(CD\), so it purely comes down to whether the Pareto dominance caused by the small number of \(C\)-only and \(D\)-only voters is enough to overturn the better-balanced result of \(AB\). To be clear what's at stake, candidates \(A\) and \(B\) are each approved by \(250n\) voters, distinct from each other, and adding up to \(500n\). Candidate \(C\) is approved by \(291n\), and candidate \(D\) by \(211n\), also distinct from each other, and adding up to \(502n\). Unless the level of proportional representation is of only negligible or tie-break value, \(AB\) must be the better result, at least where candidates must be elected with equal weight. Of the methods discussed, only PAV and Chamberlin-Courant pass this form of Pareto efficiency and would elect \(CD\). Chamberlin-Courant does so by weighting \(C\) and \(D\) in the elected body accordingly, and so is exempt from the disproportionality. Max/var-Phragmen and Fully Proportional Representation would elect \(AB\).
With max-Phragmen and var-Phragmen, the \(AB\) result would mean that the load on \(500n\) of the \(502n\) voters would be \(1/(250n)\). The load on the remaining \(2n\) would be zero. For \(CD\), the load on \(291n\) of the voters would be \(1/(291n)\), and the load on the other \(211n\) would be \(1/(211n)\). \(AB\) gives a lower maximum load (\(1/(250n)\) compared with \(1/(211n)\)) so this would be the winning candidate set under max-Phragmen. \(AB\) would also give a lower sum of squared loads (approximately \(0.0080/n\) compared with \(0.0082/n\)), so \(AB\) would be elected under var-Phragmen.\({}^{\text{Note 9}}\)
With Fully Proportional Representation, each candidate would represent \(251n\) voters. The \(AB\) result would allow \(500n\) voters to be represented by a candidate that they approved. For \(CD\), it would be \(251n+211n=462n\), meaning that \(AB\) would be elected.
For completeness, the approximate proportions that the candidates would be elected in under COWPEA are \(A\): 0.238; \(B\): 0.245; \(C\): 0.338; \(D\): 0.179.\({}^{\text{Note 10}}\)
Chamberlin-Courant fails to distinguish between strongly supported and weakly supported candidate sets where Perfect Representation is possible (e.g. electing \(A\) in example 3 would be optional), so it only weakly passes multi-winner Pareto efficiency in general, whereas PAV passes it strongly.
There are cases where the multi-winner Pareto efficiency criterion might make sense. For example, a group of friends might want to use a proportional voting method to decide where to go out to eat on several separate occasions. In this case, each voter would want to maximise the number of times they go somewhere they like, and it would make no difference to them how many people they share each individual preference with, because their meal is eaten as a whole and not shared. But even then, we can see from example 15 that this could lead to less variety. In any case, it is certainly not clear that this criterion is desirable when considering elections to public office. This is why it is not included in the table in section 5.
### Consistency Criterion
COWPEA also fails consistency. But as with multi-winner Pareto efficiency, it is not clear that it is a desirable criterion to pass.
**Example 20**: No specified number to elect
\(2n\) voters: \(AC\)
\(n\) voters: \(A\)
\(3n\) voters: \(B\)
**Example 21**: No specified number to elect
\(3n\) voters: \(A\)
\(2n\) voters: \(BC\)
\(n\) voters: \(B\)
In both examples 20 and 21 candidate \(C\) is Pareto dominated (by \(A\) in example 20 and by \(B\) in example 21), and COWPEA would elect \(A\) and \(B\) with half the weight each in both elections (\(C\) would have no weight). We can now combine the ballots:
**Example 22**: No specified number to elect
\(4n\) voters: \(A\)
\(4n\) voters: \(B\)
\(2n\) voters: \(AC\)
\(2n\) voters: \(BC\)
If COWPEA passed the consistency criterion, it would elect \(A\) and \(B\) with half the weight each, as in the previous examples. COWPEA actually elects \(A\) and \(B\) with \(4/9\) of the weight each and \(C\) with \(1/9\) of the weight, therefore failing consistency.\({}^{\text{Note 11}}\) However, \(C\) is no longer Pareto dominated in this example, and it is perhaps worth noting that this result does not violate the multi-winner Pareto efficiency criterion either. Combining the ballot sets has changed \(C\)'s position within the electoral landscape. It does not seem unreasonable to elect \(C\) with some weight in this election, and it is therefore not clear that passing the consistency criterion is necessary for a proportional approval method.
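These fractions can be checked numerically with the `cowpea_draw` sketch given earlier (a rough Monte Carlo estimate, taking \(n=1\) since the proportions do not depend on \(n\)):

```python
import random
from collections import Counter

# Example 22 ballot profile with n = 1.
ballots = [{"A"}] * 4 + [{"B"}] * 4 + [{"A", "C"}] * 2 + [{"B", "C"}] * 2
rng = random.Random(0)
runs = 100_000
tally = Counter(cowpea_draw(ballots, {"A", "B", "C"}, rng) for _ in range(runs))
print({c: round(tally[c] / runs, 3) for c in "ABC"})   # roughly A: 0.444, B: 0.444, C: 0.111
```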
As with the multi-winner version of Pareto efficiency, only PAV and Chamberlin-Courant Rule pass consistency. For Chamberlin-Courant Rule, this would sometimes mean putting together two ballot sets where it couldn't decide the result of either, resulting in a single election with the same indecision. But it is generally accepted as passing consistency (see Lackner & Skowron, 2023). We can see the consistency failures of the other methods by returning to example 19:
**Example 19**: 2 to elect
\(150n\) voters: \(AC\)
\(100n\) voters: \(AD\)
\(140n\) voters: \(BC\)
\(110n\) voters: \(BD\)
\(n\) voters: \(C\)
\(n\) voters: \(D\)
Max/var-Phragmen and Fully Proportional Representation would elect \(AB\) because of the imbalance between candidates \(C\) and \(D\), as previously stated. But we can now consider a "mirror image" set of ballots, with all the approvals for \(C\) and \(D\) swapped round.
**Example 23**: 2 to elect
150\(n\) voters: \(AD\)
100\(n\) voters: \(AC\)
140\(n\) voters: \(BD\)
110\(n\) voters: \(BC\)
\(n\) voters: \(D\)
\(n\) voters: \(C\)
This mirror image ballot set would also see \(AB\) elected for these methods, because the imbalance would still exist, except with the roles of \(C\) and \(D\) reversed. But now we can combine the two mirror ballot sets together, giving us:
**Example 15**: 2 to elect
250\(n\) voters: \(AC\)
250\(n\) voters: \(AD\)
250\(n\) voters: \(BC\)
250\(n\) voters: \(BD\)
2\(n\) voters: \(C\)
2\(n\) voters: \(D\)
Look at the example number. This takes us back to example 15 (except that the number to elect has been added), which was originally our example for Pareto efficiency, so the two criteria have become linked. Adding the mirror image ballot sets together neatly rebalances \(C\) and \(D\), and the extra \(4n\) voters that approve either just \(C\) or just \(D\) tip it in favour of \(CD\), meaning that this is the candidate set elected under max/var-Phragmen and Fully Proportional Representation. This means that they all fail the consistency criterion. PAV would elect \(CD\) in both examples 19 and 15, as it passes consistency. Chamberlin-Courant would elect \(C\) and \(D\) with reversed proportions (i.e. give different results) in examples 19 and 23 (its mirror image) respectively, so this isn't a test of its consistency.
If we accept from example 19 that passing the multi-winner Pareto efficiency criterion is not a necessary requirement, it follows that consistency is not either. They are inextricably linked.
### The complete criterion compliance table
The criterion compliance table in section 5 did not include multi-winner Pareto efficiency or consistency, because they are of debatable utility. It also did not include Perfect Representation because of its incompatibility with strong monotonicity (which is deemed more important) in methods that elect a fixed number of candidates. But for completeness, here is the table with them all included:
| **Method \ Criterion** | **PRIL** | **Monotonicity** | **IIB** | **IUAC** | **Multi-winner Pareto efficiency** | **Consistency** | **Perfect Representation** |
|---|---|---|---|---|---|---|---|
| **COWPEA** | YES | YES (strong) | YES | N/A | NO | NO | YES |
| **COWPEA Lottery** | YES | YES (strong) | YES | YES | NO | NO | NO |
| **PAV** | NO | YES (strong) | YES | NO | YES (strong) | YES | NO |
| **Max/Var-Phragmen** | YES | YES (weak) | NO (passes empty but not full) | NO | NO | NO | YES |
| **Fully Proportional Representation** | YES | YES (weak) | NO (fails both empty and full) | NO | NO | NO | YES |
| **Chamberlin-Courant Rule** | YES | YES (weak) | NO (passes empty but not full) | NO | YES (weak) | YES | YES |
## 7 COWPEA versus Optimised PAV
It is perhaps worth speculating about an optimised version of PAV that can elect any number of candidates with potentially different weights, as well as a lottery version based on these weights. One way to approximate results would be to increase the number of seats to some large number and allow unlimited candidate clones. This Optimised PAV would pass the multi-winner Pareto efficiency and consistency criteria, in the same way that normal PAV does. Furthermore, PAV's failure of PRIL in example 2 was closely related to its failure of IUAC (example 1). An optimised version of PAV would work in the same way as COWPEA with regards to universally approved candidates, and these candidates would take all the weight in the elected body. Therefore PAV's IUAC failure would no longer apply, meaning that it could potentially pass PRIL. However, this is
not to say that it would be guaranteed to pass PRIL, as overlapping factions might still cause some problems.
As an optimised variable-candidate-weight method, if Optimised PAV does pass PRIL, it would automatically pass Perfect Representation as well. This means that it would complete the set of criteria in the table above, except for having an N/A by IUAC, but that is not a weakness. Its election of \(CD\) in example 19 would also allow different weights to be given to \(C\) and \(D\), meaning that its multi-winner Pareto efficiency is not at the cost of proportionality. The lottery version would pass IUAC, but not Perfect Representation, which is the same as for COWPEA Lottery. Whether or not this level of criterion compliance is a good thing, however, is open to debate.
It would be interesting to investigate Optimised PAV's proportionality, but it is complex and unwieldy to deal with. Trying a few simple (for computational purposes) examples did not cause a PRIL failure though. Here is one such example:
**Example 24**: No specified number to elect
\(2n\) voters: \(UA\)
\(2n\) voters: \(UB\)
\(n\) voters: \(A\)
\(n\) voters: \(B\)
\(6n\) voters: \(C\)
Optimised PAV seems to converge on \(A\), \(B\) and \(U\) getting \(1/6\) of the seats each, and \(C\) getting \(1/2\), which is a proportional result.\({}^{\text{Note 12}}\) In example 2, a limited number of cross-factional candidates caused a disproportional result, but in the optimised version there is no limit to the weight of any candidate.
For comparison, COWPEA would give slightly different weights - \(A\) and \(B\) would each get \(5/36\), \(U\) would get \(8/36\) and \(C\) would get \(1/2\) (\(18/36\)). This is also proportional and much easier to verify.\({}^{\text{Note 13}}\)
It is worth comparing the two methods to see where they differ and where the differences come from. We will start by returning to example 15:
**Example 15**: No specified number to elect
\(250n\) voters: \(AC\)
\(250n\) voters: \(AD\)
\(250n\) voters: \(BC\)
\(250n\) voters: \(BD\)
\(2n\) voters: \(C\)
\(2n\) voters: \(D\)
As discussed, in this example COWPEA elects candidates \(A\) and \(B\) with a proportion of approximately 0.248 each, and \(C\) and \(D\) with a proportion of 0.252 each. Optimised PAV elects \(C\) and \(D\) with half the weight each. \(A\) and \(B\) would have zero weight. This is because Optimised PAV passes the multi-winner Pareto criterion, and \(CD\) as a pair Pareto dominate \(AB\). However, this is a knife-edge result. Replace the C-only and D-only voters with A-only and B-only voters respectively, and suddenly \(A\) and \(B\) get all the weight between them, with \(C\) and \(D\) getting nothing.
COWPEA just swaps the 0.248s with the 0.252s. In this respect it is a more continuous method than Optimised PAV.
The only time when COWPEA experiences any sort of discontinuity is when a candidate goes from Pareto dominating one candidate to not doing so.
**Example 25**: No specified number to elect
\(99n\) voters: \(AB\)
\(n\) voters: \(A\)
In this example, both COWPEA and Optimised PAV would elect A with all the weight. Replacing the \(n\)\(A\)-only voters with \(B\)-only voters would cause a sudden switch to \(B\) being elected with all the weight, under both methods. However, in this example, \(A\) and \(B\) are virtual clones, so from a voting method's perspective, the discontinuity exists only in the name of the candidate being elected, not in any properties that they have. In that sense it is not a discontinuity in the same way as swapping between \(AB\) and \(CD\) in example 15. Very little has changed in terms of where in the voting space the representation comes from in modifying example 25. As discussed earlier in the paper, example 15 can be seen as having an \(AB\) axis and a \(CD\) axis, with \(A\) and \(B\) not correlated at all with \(C\) and \(D\). Optimised PAV just looks at its measure of voter satisfaction and doesn't "care" about the spread of representation across the voting space. Because of this, it doesn't recognise this as a discontinuity in its output.
COWPEA is also more resolute than Optimised PAV. The only time a tie-break is required under COWPEA is when two candidates are approved on exactly the same ballots. In that case they are simply elected with equal weight. In all other cases, the algorithm determines an exact result. This is not the case with Optimised PAV.
**Example 18**: No specified number to elect
\(100n\) voters: \(AC\)
\(100n\) voters: \(AD\)
\(100n\) voters: \(BC\)
\(100n\) voters: \(BD\)
COWPEA would elect all four candidates with equal weight, with no tie-break mechanism necessary. Optimised PAV only requires that \(A\) gets the same weight as \(B\), and that \(C\) gets the same weight as \(D\). The \(AB\) to \(CD\) ratio could be anything and the total satisfaction score would be the same. Optimised PAV would require a tie-breaking mechanism in this case. It could be as simple as saying that when groups of candidates are tied in this way, then the groups are elected with equal weight, but it is still a level of tie-breaking not required by COWPEA.
In summary, COWPEA is more resolute, less discontinuous, and arguably gives better representation by using more of the voting space. Optimised PAV gives multi-winner Pareto efficiency and consistency at the cost of these. Which of these properties are more desirable is arguably a choice rather than there simply being an objectively mathematically better method. But this is a debate that can be had. It should also be reiterated that Optimised PAV has not yet been shown to be a truly proportional method, so this whole discussion is contingent upon that.
As well as results, there is also the calculation and meaning of the results. This might not have any bearing on the discussion of the mathematically best method, but it is still worth looking at. Any COWPEA result can be calculated using a closed-form expression. There is no known method for calculating an Optimised PAV result in such a manner in general. An Optimised PAV result can only be approximated by increasing the number of seats to some large number, allowing unlimited candidate clones, and then finding the proportion of the total seats each candidate gets in the result that gives the maximum sum of satisfaction scores.
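One such closed-form route is a recursion over the set of still-eligible candidates, conditioning each step on the next list-changing ballot. The sketch below is my own (function and variable names are illustrative), and it assumes ballots are drawn with replacement, i.e. an arbitrarily large electorate:

```python
from fractions import Fraction
from functools import lru_cache

def cowpea_weights(profile, candidates):
    """Exact COWPEA weights: the probability each candidate wins one draw.

    profile: iterable of (frozenset_of_approvals, count) ballot blocs.
    """
    profile = tuple(profile)

    @lru_cache(maxsize=None)
    def dist(remaining):
        # Blocs that would shrink `remaining` to a nonempty proper subset.
        narrowing = [(b & remaining, w) for b, w in profile
                     if b & remaining and not remaining <= b]
        if not narrowing:                       # unbreakable tie: split equally
            return {c: Fraction(1, len(remaining)) for c in remaining}
        total = sum(w for _, w in narrowing)
        out = {c: Fraction(0) for c in remaining}
        for subset, w in narrowing:
            for c, p in dist(frozenset(subset)).items():
                out[c] += Fraction(w, total) * p
        return out

    return dict(dist(frozenset(candidates)))

# Example 24: expect A = B = 5/36, U = 2/9 and C = 1/2, matching the weights quoted above.
profile = [(frozenset("UA"), 2), (frozenset("UB"), 2),
           (frozenset("A"), 1), (frozenset("B"), 1), (frozenset("C"), 6)]
print(cowpea_weights(profile, "UABC"))
```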
With Optimised PAV, an exact result can be hard to determine, even with a fairly simple ballot profile:
**Example 26**: No specified number to elect
\(2n\) voters: _UA_
\(n\) voters: _UB_
\(2n\) voters: \(A\)
\(n\) voters: \(B\)
\(n\) voters: \(U\)
You might have seen example 24 and decided that Optimised PAV gives "nicer" results than COWPEA. Put that notion out of your head. In example 26, the COWPEA result is fairly simple to calculate. The weights for each candidate would be _A_: \(3/7\); _B_: \(5/28\); _U_: \(11/28\). This is an exact result.\({}^{\text{Note 14}}\)
Under Optimised PAV, the weights are approximately _A_: 0.442019; _B_: 0.192019; _U_: 0.365962.\({}^{\text{Note 15}}\) It is unclear exactly what these numbers are, or whether they're even rational. It does appear likely that _A_'s weight is exactly 0.25 more than _B_'s weight, but other than that, all is unclear. Fractions that could work for these numbers are _A_: \(324/733\), _B_: \(563/2932\), _U_: \(1073/2932\). But these numbers have no obvious significance, and it was inevitable that _some_ fractions could be found that were a close fit, so whether these are the exact proportions or not is pure speculation. Optimised PAV is like a black box. Put in some votes and it will throw out a result with no indication of what it means or where it has come from. It is worth noting that COWPEA and Optimised PAV don't give wildly different results here, however. To six decimal places, COWPEA's result is _A_: 0.428571; _B_: 0.178571; _U_: 0.392857.
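For what it is worth, these decimals can be reproduced numerically. If one assumes that, with unlimited clones and a very large number of seats, each voter's harmonic PAV satisfaction grows like the logarithm of the total weight of their approved candidates (a modelling assumption of mine, not a claim made in this paper), then maximising that summed objective over the simplex gives weights very close to the ones quoted above:

```python
import numpy as np
from scipy.optimize import minimize

cands = ["A", "B", "U"]
blocs = [({"U", "A"}, 2), ({"U", "B"}, 1), ({"A"}, 2), ({"B"}, 1), ({"U"}, 1)]  # example 26

def neg_log_objective(w):
    weight = dict(zip(cands, w))
    return -sum(count * np.log(sum(weight[c] for c in approved))
                for approved, count in blocs)

res = minimize(neg_log_objective, x0=[1 / 3] * 3, method="SLSQP",
               bounds=[(1e-9, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1}],
               options={"ftol": 1e-12})
print(dict(zip(cands, np.round(res.x, 6))))   # close to A: 0.442019, B: 0.192019, U: 0.365962
```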
Of course, in a large real-life election, COWPEA and Optimised PAV might both be too computationally expensive to actually be used. So perhaps it doesn't matter which is better than the other in that respect. However, the COWPEA proportions can always be approximated by running a COWPEA Lottery on the ballots multiple times. No such simple approximation method exists for Optimised PAV. And this brings us neatly onto a discussion of the lottery methods.
The big advantage of COWPEA Lottery over Optimised PAV Lottery is that elections can be run without calculating the full weightings. Optimised PAV Lottery would require a calculation of the full Optimised PAV weightings, and then the lottery would be run from these weightings. The weightings would also need to be recalculated after each candidate is elected. Optimised PAV Lottery is no simpler than Optimised PAV. Therefore, it is not clear that an Optimised PAV Lottery could ever be run in a real-life situation except in the simplest of cases. By contrast, a COWPEA Lottery election could be counted by hand. The difference in complexity is enormous.
COWPEA is conceptually a much simpler method than Optimised PAV: it is very easy to demonstrate its proportionality, and it is easy to understand on an intuitive level. Optimised PAV is complex and unwieldy, and even if it can be proven that it is proportional, it is likely not something that could be understood intuitively.
None of the other methods discussed in this paper would be appropriate for turning into an optimised method of this sort. They would be indifferent between any of the results that give Perfect Representation, which means that each voter could be represented by any of the candidates that they approved, or any combination in any proportion. There would be a continuum of indecisiveness under these methods.
Assuming that Optimised PAV is proportional (passes PRIL and Perfect Representation), then without the restrictions of a fixed number of candidates and equal weighting, and ignoring any computational limits, COWPEA and Optimised PAV are probably the two contenders for the mathematically ultimate form of proportional representation, or indeed perhaps the true Holy Grail, and they can also both be used with the Kotze-Pereira Transformation for a full score voting experience. It's time to pick a side.
## 8 Discussion and Conclusions
COWPEA and COWPEA Lottery are new proportional approval methods that are superior to other approval-based proportional methods in terms of their criterion compliance. (Optimised PAV and Optimised PAV Lottery are still unproven in this regard.) This superiority also extends to score or graded methods, since they can be straightforwardly modified to accommodate these ballots while retaining the same criterion compliance. COWPEA is a candidate for the method that produces the mathematically optimal candidate weights for a proportional approval election.
While most multi-winner elections are for a fixed number of representatives to be elected with equal weight, COWPEA can still have other uses, as well as being of theoretical interest. For example, it is still a useful tool to see an optimised result, to gauge the results of other methods. It can also be used with parties instead of individual candidates to determine what proportion of the candidates each party should have in the elected body.
Unlike COWPEA, COWPEA Lottery can be used in elections where a fixed number of representatives with equal weight are required. It has excellent criterion compliance, passing Perfect Representation In the Limit, strong monotonicity, Independence of Irrelevant Ballots, and Independence of Universally Approved Candidates, as well as being simple to run. It does not guarantee proportionality in an individual election, but if it is used to elect candidates to many constituencies in a national general election, the overall result would tend to be more proportional than those of a deterministic method. With a limited number of seats per constituency, proportional representation can only be fairly coarse-grained. For example, a party might have 10% of the national support but struggle to win the right number of seats if there are, say, 6 seats per constituency, because they keep just falling short. However, with a method such as COWPEA Lottery, while this party would be over-represented in some constituencies and under-represented in others, it should achieve about the right amount of representation nationally. In this sense, the method can be said to break down the walls between constituencies.
To be clear, the lack of determinism and lack of a guarantee of proportionality within a single election can actually be seen as a feature of the method, not a bug, if it is used for national elections, as COWPEA Lottery would give better national level proportional representation than deterministic multiple-constituency methods.
National level proportional representation is normally achieved with party lists, but this takes some of the power away from voters, with independent candidates also suffering. Achieving proportional representation at a national level can also be unwieldy and complex. COWPEA Lottery can achieve this in a much simpler manner with voters able to vote for individual candidates rather than parties, and without the need to collate votes at a national level.
With lottery methods, there can be a risk of even very popular candidates not being elected. However, COWPEA Lottery is not as "knife-edge" as the simple Random Ballot method. In Random Ballot, voters vote for just one candidate, and in each constituency only one ballot is picked. By having multiple elected candidates per constituency, popular candidates are less at risk, with COWPEA Lottery. Furthermore, using Random Ballot, only as many ballots as there are elected candidates are looked at. In COWPEA Lottery, the number would likely be far higher, because if a selected ballot has approved more than one candidate, more ballots would be picked until a winner is found.
So while COWPEA Lottery is a lottery method, it is still much more consensual than Random Ballot, as well as giving more proportional results at national level than deterministic constituency methods. As it is non-deterministic, it also means that there would be no "safe seats", so candidates would always be incentivised to appeal to as many voters as possible. It would send the message that representing the people in parliament is a privilege, not an entitlement.
Being a lottery method, calculating a result is computationally very simple, or it could even be done by hand, subject to sufficient security. And while COWPEA Lottery might be considered too theoretical to be used in real-life political elections, at least for the time being, its simplicity and criterion compliance mean that it can be used right now by people wanting to run smaller elections. Groups of friends could use it as a way to decide on an activity, and it would ensure a certain fairness over time, without anyone having to keep track of previous decisions.
COWPEA and COWPEA Lottery can be simply converted to score or graded voting for cases where more preference information is desirable. In particular, the version using layers of approval is a very simple modification and can be used with e.g. 0 to 5 or A to E ballots just as easily. Elections using this version of COWPEA Lottery could still be counted by hand.
In conclusion, COWPEA and COWPEA Lottery are methods that bring a new level of criterion compliance to the landscape of proportional approval methods. They are of both theoretical and practical interest.
|
2304.01582 | Unitary Coined Discrete-Time Quantum Walks on Directed Multigraphs | Unitary Coined Discrete-Time Quantum Walks (UC-DTQW) constitute a universal
model of quantum computation, meaning that any computation done by a general
purpose quantum computer can also be done using the UC-DTQW framework. In the
last decade, great progress has been made in this field by developing quantum
walk-based algorithms that can outperform classical ones. However, current
quantum computers work based on the quantum circuit model of computation, and
the general mapping from one model to the other is still an open problem. In
this work we provide a matrix analysis of the unitary evolution operator of
UC-DTQW, which is composed of two unitary operators: the shift and
coin operators. We conceive the shift operator of the system as the unitary
matrix form of the adjacency matrix associated to the graph on which the
UC-DTQW takes place, and provide a set of equations to transform the latter
into the former and vice-versa. However, this mapping modifies the structure of
the original graph into a directed multigraph, by splitting single edges or
arcs of the original graph into multiple arcs. Thus, the fact that any unitary
operator has a quantum circuit representation means that any adjacency matrix
that complies with the transformation equations will be automatically
associated to a quantum circuit, and any quantum circuit acting on a bipartite
system will always be associated to a multigraph. Finally, we extend the
definition of the coin operator to a superposition of coins in such a way that
each coin acts on different vertices of the multigraph on which the quantum
walk takes place, and provide a description of how this can be implemented in
circuit form. | Allan Wing-Bocanegra, Salvador E. Venegas-Andraca | 2023-04-04T07:19:55Z | http://arxiv.org/abs/2304.01582v1 | # Unitary Coined Discrete-Time Quantum Walks on Directed Multigraphs
###### Abstract
Unitary Coined Discrete-Time Quantum Walks (UC-DTQW) constitute a universal model of quantum computation, meaning that any computation done by a general purpose quantum computer can also be done using the UC-DTQW framework. In the last decade, great progress has been made in this field by developing quantum walk-based algorithms that can outperform classical ones. However, current quantum computers work based on the quantum circuit model of computation, and the general mapping from one model to the other is still an open problem. In this work we provide a matrix analysis of the unitary evolution operator of UC-DTQW, which is composed of two unitary operators: the shift and coin operators. We conceive the shift operator of the system as the unitary matrix form of the adjacency matrix associated to the graph on which the UC-DTQW takes place, and provide a set of equations to transform the latter into the former and vice-versa. However, this mapping modifies the structure of the original graph into a directed multigraph, by splitting single edges or arcs of the original graph into multiple arcs. Thus, the fact that any unitary operator has a quantum circuit representation means that any adjacency matrix that complies with the transformation equations will be automatically associated to a quantum circuit, and any quantum circuit acting on a bipartite system will always be associated to a multigraph. Finally, we extend the definition of the coin operator to a superposition of coins in such a way that each coin acts on different vertices of the multigraph on which the quantum walk takes place, and provide a description of how this can be implemented in circuit form.
## 1 Introduction
Quantum walks, originally designed to model quantum phenomena [1, 2, 3, 4], are an advanced tool for building quantum algorithms (e.g. [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]) and analyzing biological data [15] that has been shown to constitute a universal model of quantum computation [16, 17]. Quantum walks come in two variants: discrete and continuous in time [18].
A Discrete-Time Quantum Walk (DTQW) consists of a bipartite quantum state, \(|\psi\rangle=|v\rangle\otimes|c\rangle\), containing information about the position and coin states of a walker on a graph \(\mathcal{G}\), and a unitary _evolution operator_, \(U\), composed of the subsequent application of a _coin operator_, \(C\), and a _shift operator_, \(S\). \(U\) has an action on \(|\psi\rangle\)
such that each time it is applied, it produces transitions between pairs of quantum states associated to adjacent vertices on the graph \(\mathcal{G}\) where the DTQW takes place.
Different approaches have been made to address the problem of constructing evolution operators that are able to generate discrete-time quantum walks on different topologies, e.g.: Aharonov \(et\)\(al.\)[19] proposed the Reversible Arc model, Montanaro [20] proposed the Directed-Graph model, Portugal \(et\)\(al.\)[21] proposed the Staggered Quantum Walk model, Szegedy [22] proposed a model known by the author's name, Attal \(et\)\(al.\) proposed the Open Quantum Random Walk model [23], and Zhan [24] proposed a model called Vertex-Face Walk. Nevertheless, there are two problems with the evolution operators proposed in the existing models we will study in this work:
* Generally speaking, the design of evolution operators for running quantum walks on arbitrary topologies is a difficult task.
* Current quantum computer technology is based on the circuit model of quantum computation. Although it is known that the models of gate-based quantum computation and quantum walks are both universal, it remains unknown how to efficiently transform a quantum-walk based algorithm into a quantum gate-based algorithm and vice versa.
In this work, we consider the shift operator of a unitary quantum walk \(S\) as a unitary matrix with the information of the connections between the vertices of the graph on which we want to perform a quantum walk, similar to a left stochastic transition matrix in a random walk [13]. To induce the connectivity information in \(S\), we provide a system of equations with which we can map the transpose adjacency matrix, \(\mathcal{A}^{\intercal}\), of a certain graph, \(\mathcal{G}\), into \(S\) - \(\mathcal{A}^{\intercal}\) is an unnormalized version of the left stochastic transition matrix in a random walk. This tackles the first problem by standardizing the construction of the evolution operator of a quantum walk. However, even though \(S\) and \(\mathcal{A}^{\intercal}\) contain the same connectivity information, \(S\) is not associated to exactly the same graph \(\mathcal{G}\) as \(\mathcal{A}^{\intercal}\). Instead, \(S\) is associated to \(\mathcal{G}^{\prime}\), which is a mapping of \(\mathcal{G}\) into a multigraph that complies with unitarity conditions, that we will describe in the next chapter. This model can be seen as an extension of the Directed-Graph model proposed by Montanaro [20].
The proposed mapping comes from the fact that the quantum system we use to perform a DTQW is bipartite, i.e. \(|\psi\rangle=|c\rangle\otimes|v\rangle\), and any operator acting on a bipartite system has a block matrix form. In [25], Petz proves this property for unitary operators and shows that the block matrices that compose them are indeed Kraus operators, and that each column of the unitary block matrix forms a different set of Kraus operators. In this work, we will use this property to provide a link between the shift operator of a unitary coined discrete-time quantum walk, usually referred to as DTQW for simplicity, and the adjacency matrix of the graph, \(\mathcal{G}\), on which the quantum walk takes place, as well as a detailed description of the dynamics of a quantum walker on \(\mathcal{G}\).
The fact that any operator acting on a bipartite system has a block matrix form has also been used to develop the theory of Open Quantum Walks in [23, 26, 27], which, as the name implies, use an Open Quantum System to represent the quantum state of a walker, although in this case the evolution operator is not necessarily unitary. Unitarity conditions can be applied to the evolution operator of an open quantum walk to obtain a unitary quantum walk; however, given that the open quantum walk model does not contain a coin operator, we will obtain a unitary coinless discrete-time quantum walk. That is to say, the evolution operator of an Open Quantum Walk is an analog of the shift operator and not of the complete evolution operator described in this work.
To address the second problem associated to current evolution operators, we consider the inverse mapping used to create a multigraph \(\mathcal{G}^{\prime}\) out of a general graph \(\mathcal{G}\), and the inverse mapping used to create \(S\) out of \(\mathcal{A}^{\intercal}\). This leads to the fact that any unitary operator on a bipartite system can be used as the shift operator of a DTQW. And given that any quantum circuit has a unitary matrix form, we conclude that any quantum circuit acting on a bipartite quantum register is suitable for performing a DTQW on a general purpose
quantum computer. The converse problem, i.e. constructing a quantum circuit given an evolution operator is not addressed in this work.
Finally, regarding the coin operator, we extend its form to a superposition of \(n\) coin operators, each acting on one of the \(n\) vertices of the multigraph on which the DTQW takes place. We do not restrict the form of the coin operators, i.e. they can be any unitary matrix acting on the coin register. This extension might be useful in cases in which we want the evolution operator to have a different behavior on selected vertices.
## 2 Basic Definitions on Graph Theory
In this section we present basic definitions about the type of graphs we use in this paper. The following definitions are mainly taken from [28].
**Definition 1.** A graph is an ordered triple \((V,E,f)\), where \(V\) is a non-empty set called the \(vertex\)\(set\), \(E\) is a set called \(edge\)\(set\), and \(f:E\to V\times V\) is a mapping, called \(incidence\)\(function\), which maps an edge \(e_{k}\in E\) into an ordered or unordered pair of vertices, i.e. \((v_{i},v_{j})\) or \(\{v_{i},v_{j}\}\in V\times V\), respectively, called \(end\)\(vertices\). The sets \(E\) and \(V\) are disjoint. This definition allows loops, i.e. the end vertices can be the same vertex for a given edge.
**Definition 2.** An undirected graph is a graph for which \(V\times V\) is a set of unordered pairs \(\{v_{i},v_{j}\}\). Such edges are called \(undirected\)\(edges\).
**Definition 3.** A \(directed\)\(graph\) or \(digraph\) is a graph for which \(V\times V\) is a set of ordered pairs \((v_{i},v_{j})\), i.e. \(f(e_{k})=(v_{i},v_{j})\). Then \(e_{k}\) is called a \(directed\)\(edge\) or \(arc\), and \(v_{i}\) and \(v_{j}\) are called \(tail\) and \(head\) of the arc \(e_{k}\), respectively.

Figure 1: Examples of different types of graphs along with their adjacency matrix representation. (a) Undirected graph. (b) Directed graph. (c) Multigraph. (d) Weighted graph. Graphs (a), (b) and (c) are unweighted and labeled with the element \(e_{k}\) that distinguishes each edge, while graph (d) is weighted and labeled with the weight \(w(e_{k})\) associated to each edge. Notice that the adjacency matrix is the same for graphs (a), (b) and (d), although their connections differ. This means the same adjacency matrix can represent multiple graphs.
**Definition 4.** A multigraph is a graph for which more than one edge can be mapped into the same element \(V\times V\) under \(f\).
**Definition 5.** A directed multigraph is a digraph that allows parallel arcs.
**Definition 6.** A \(weighted\ graph\) is a graph whose edges are assigned and labeled with a numerical value, denoted by \(w(e_{k})\). An \(unweighted\ graph\) is a graph for which \(w(e_{k})=1\ \forall\ e_{k}\in E\).
**Definition 7.** The \(indegree\) of a vertex \(v_{i}\in V\) is the number of arcs coming into \(v_{i}\). Similarly, the \(outdegree\) of a vertex \(v_{i}\) is the number of arcs going out of \(v_{i}\). The \(degree\) of \(v_{i}\) is the sum of in- and out-degrees of \(v_{i}\).
**Definition 8.** An adjacency matrix is the matrix representation of a graph, and is denoted by \(\mathcal{A}\), where \(a_{ij}\in\mathcal{A}\) is the sum of the weights of all directed edges with tail \(v_{i}\) and head \(v_{j}\). If undirected edges connect \(v_{i}\) and \(v_{j}\), their weights will be added to both \(a_{ij}\) and \(a_{ji}\). This definition enables us to assign multiple graphs to the same adjacency matrix, as can be seen from figures 1(a), 1(b) and 1(d).
**Definition 9.** A graph is said to be connected if there exists a path of adjacent edges, i.e. edges that have a vertex in common, connecting all pairs of vertices.
## 3 Walks on Graphs
### Classical Random Walks
Let \(G=(V,E)\) be a connected graph with \(n\) nodes, arbitrarily labeled from \(0\) to \(n-1\), and \(m\) edges for each node. A random walk on \(G\) is defined as a process in which a walker starts at a vertex \(v_{0}\), and randomly moves on to one of the adjacent vertices. After \(t\) steps, the walker stops at node \(v_{t}\), and the path traced along the graph, i.e. the sequence of random nodes, is a Markov chain. To study the process in vector form, we define the walker's probability vector after \(t\) steps as
\[P_{t}=p_{0}^{t}\hat{e}_{0}+p_{1}^{t}\hat{e}_{1}+\cdots+p_{n-1}^{t}\hat{e}_{n-1 }=\begin{pmatrix}p_{0}^{t}\\ p_{1}^{t}\\ \vdots\\ p_{n-1}^{t}\end{pmatrix} \tag{1}\]
Where \(p_{i}^{t}\) is the probability of finding the walker at vertex \(i\) at step \(t\) and \(\hat{e}_{i}\) is the standard basis vector of \(\mathbb{R}^{n}\) that contains a \(1\) in the \(i\)th entry.
Now let \(\mathcal{A}\) be the adjacency matrix of \(G\) and \(D\) a diagonal matrix that contains the outdegrees of each vertex, i.e.:
\[D_{ii}=\sum_{j=0}^{n-1}\mathcal{A}_{ij} \tag{2}\]
We define the matrix of transition probabilities of this Markov chain as
\[M=D^{-1}\mathcal{A} \tag{3}\]
Thus a random walk can be defined by a simple update rule:
\[P_{t+1}=M^{\intercal}P_{t} \tag{4}\]
and the probability distribution vector after \(t\) steps can be calculated in terms of only the transition matrix and the initial probability distribution vector of the system:
\[P_{t}=(M^{\intercal})^{t}P_{0} \tag{5}\]
The reason why we use \(M^{\intercal}\) rather than \(M\) to generate a random walk is due to directed edges and the relation between the basis vectors \(e_{i}\) and the node probabilities \(p_{i}^{t}\). If we apply \(M\) to \(P_{t}\), the probability transitions will correspond to a walk in the inverse direction of the directed edges, whereas \(M^{\intercal}\) provides a walk in the correct direction. Different walks might result from associating the probability of node \(j\), \(p_{j}\), to a basis vector with a different index, i.e. \(e_{i}\), although in this work we will follow the definition presented in Eq. (1) for simplicity. This is important to mention given that the same rationale will be applied when defining a quantum walk.
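As a minimal illustration of Eqs. (2)-(5), assuming numpy and an arbitrary three-vertex example graph, the following sketch builds the transition matrix \(M=D^{-1}\mathcal{A}\) and propagates a probability vector with the update rule \(P_{t+1}=M^{\intercal}P_{t}\):

```python
import numpy as np

# Adjacency matrix of a small, illustrative three-vertex graph.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

# Out-degree matrix D of Eq. (2) and transition matrix M = D^{-1} A of Eq. (3).
D = np.diag(A.sum(axis=1))
M = np.linalg.inv(D) @ A

# Start the walker at vertex 0 and apply the update rule of Eq. (4) repeatedly.
P = np.zeros(3)
P[0] = 1.0
t_steps = 10
for _ in range(t_steps):
    P = M.T @ P

print("P_t after", t_steps, "steps:", P.round(4))
print("columns of M^T sum to 1:", np.allclose(M.T.sum(axis=0), 1.0))
```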
### Discrete-Time Quantum Walks
Consider a general multigraph \(\mathcal{G}(V,E,f)\), according to definition 1. Associated to the vertices of \(\mathcal{G}\), there exists a set of \(|V|=n\) basis vectors \(\{|v_{j}\rangle:v_{j}\in\mathbb{Z}\}\) that span an \(n\)-dimensional Hilbert space \(H_{P}\), called the position space. Associated to the arcs of the graph, there exists a set of \(m\) basis vectors \(\{|c_{i}\rangle:c_{i}\in\mathbb{Z}\}\) that span an \(m\)-dimensional Hilbert space \(H_{C}\), called the coin space. Each coin state is associated to a different subset of arcs in \(\mathcal{G}\), e.g., in the DTQW on a line [18, 13] coin states \(|1\rangle\) and \(|0\rangle\) are associated to the subsets of arcs pointing to the left and to the right, respectively. Thus, the quantum state of a walker, defined as
\[|\psi(t)\rangle=\sum_{i}\sum_{j}b_{ij}|c_{i}\rangle\otimes|v_{j}\rangle \tag{6}\]
stores information about the walker's position and about the arcs of \(\mathcal{G}\) through which the walker is allowed to move, given a certain position.
Next, we can group the states that share the same coin state to obtain
\[|\psi(t)\rangle=\sum_{i}|c_{i}\rangle\otimes|V_{i}(t)\rangle \tag{7}\]
where \(|V_{i}(t)\rangle=\sum\limits_{j}b_{ij}|v_{j}\rangle\). Each subvector \(|V_{i}(t)\rangle\) plays a role similar to that of a probability vector in a random walk (Eq. (1)), with the difference that \(|V_{i}(t)\rangle\) stores probability amplitudes. If the bases for \(H_{C}\) and \(H_{P}\) are the standard bases, then the \(j\)th entry of \(|V_{i}(t)\rangle\) contains the probability amplitude corresponding to the walker with coin state \(|c_{i}\rangle\) and position state \(|v_{j}\rangle\). From a vector approach, \(|\psi(t)\rangle\) can be described as an \(n\cdot m\) column vector that is subdivided into \(m\) vectors of size \(n\). Explicitly
\[|\psi(t)\rangle=\begin{pmatrix}|V_{0}(t)\rangle\\ |V_{1}(t)\rangle\\ \vdots\\ |V_{m-1}(t)\rangle\end{pmatrix} \tag{8}\]
Similar to the transition matrix in the random walk model, there exists an operator that contains information about the dynamics of the walker on the graph, with the difference that this operator is represented by a unitary matrix instead of a stochastic one. We call it the evolution operator of the system, and is defined as
\[U=S(C\otimes I_{n}) \tag{9}\]
Where \(C\) is the coin operator, which acts on \(|\psi(t)\rangle\) such that when \(C\otimes I_{n}\) is applied, it affects only the coin register and sets coin states into superposition. \(S\) is the shift operator, and its objective when applied to \((C\otimes I_{n})|\psi(t)\rangle\) is to make the position states transition from a state \(|v_{j}\rangle\) to a state \(|v_{k}\rangle\), which corresponds to a step to an adjacent node in \(\mathcal{G}\). Notice that, in general, the shift operator might affect both the coin and position registers, but the coin operator should not affect the position register, otherwise we would perform a quantum
walk on two different graphs, similar to applying two different transition matrices to the same probability vector in a random walk.
Let \(|\psi_{0}\rangle\) be the initial state of the system. The quantum walk starts when the operator \(U\) is applied on \(|\psi_{0}\rangle\). After \(t\) steps the state of the system is given by
\[|\psi(t)\rangle=U^{t}|\psi_{0}\rangle \tag{10}\]
In view of this description, we can conceive a DTQW as a superposition of walks that happen simultaneously on the different subgraphs of \(\mathcal{G}\) associated to each coin state of the system.
Finally, we define the measurement operator of the state \(|v_{k}\rangle\) as
\[M_{k}=\sum\limits_{i=0}^{m-1}|c_{i}\rangle\langle c_{i}|\otimes|v_{k}\rangle \langle v_{k}|=I_{m}\otimes|v_{k}\rangle\langle v_{k}|\]
The action of \(M_{k}\) on the state of a quantum walker after \(t\) steps, \(|\psi(t)\rangle=\sum\limits_{i}|c_{i}\rangle\otimes|V_{i}(t)\rangle\), is to leave it in a superposition of all composite states with the same position state regardless of their coin state, that is
\[M_{k}|\psi(t)\rangle=(I_{m}\otimes|v_{k}\rangle\langle v_{k}|)\left(\sum \limits_{i}|c_{i}\rangle\otimes|V_{i}(t)\rangle\right)\]
\[M_{k}|\psi(t)\rangle=\sum\limits_{i}\langle v_{k}|V_{i}(t)\rangle\,|c_{i}\rangle\otimes|v_{k}\rangle\]
Thus, the probability of finding the state \(|v_{k}\rangle\) after \(t\) steps is given by
\(P(|v_{k}\rangle)=\langle\psi(t)|M_{k}^{\dagger}M_{k}|\psi(t)\rangle\)
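The pieces above can be put together in a short numerical sketch; the cycle graph, the Hadamard coin and the initial state below are arbitrary illustrative choices, and the final vertex probabilities correspond to \(P(|v_{k}\rangle)\) as defined above:

```python
import numpy as np

n, m = 8, 2                        # number of vertices and coin dimension
steps = 20

# Shift-by-one permutation on the cycle: P|v_j> = |v_{j+1 mod n}>.
P = np.roll(np.eye(n), 1, axis=0)

# Shift operator in block form: coin |c_0> moves the walker clockwise,
# coin |c_1> moves it counter-clockwise.
S = np.zeros((n * m, n * m))
S[:n, :n] = P
S[n:, n:] = P.T

# Hadamard coin acting only on the coin register: U = S (C x I_n), Eq. (9).
C = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
U = S @ np.kron(C, np.eye(n))

# Initial state |c_0> (x) |v_0>, evolved for `steps` applications of U, Eq. (10).
psi = np.zeros(n * m, dtype=complex)
psi[0] = 1.0
for _ in range(steps):
    psi = U @ psi

# Probability of each vertex: sum over the coin states sharing that position.
probs = np.abs(psi.reshape(m, n)) ** 2
print("P(|v_k>) for k = 0..n-1:", probs.sum(axis=0).round(4))
print("total probability:", probs.sum().round(6))
```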
## 4 Adjacency Matrix Decomposition
This section includes propositions and theorems obtained from the matrix study of the evolution operator of unitary coined discrete-time quantum walks. The first goal is to provide a mapping for a general graph \(\mathcal{G}\) and its adjacency matrix into a directed multigraph \(\mathcal{G}^{\prime}\) and a unitary operator, respectively. The quantum walk will take place on \(\mathcal{G}^{\prime}\) and the unitary operator form of the adjacency matrix will serve as the shift operator of the system. The second goal is to study the transformation the shift operator and the graph \(\mathcal{G}^{\prime}\) undergo when the coin operator is applied. Finally, as a third goal we study the link between a general unitary operator -- and as a consequence a general quantum circuit -- and quantum walks.
**Proposition 1**.: Edges can be transformed in the following manner:
1. A directed edge \((v_{i},v_{j},e)\) with weight \(w\) can be split into multiple directed edges \((v_{i},v_{j},e_{k})\) with weights \(w_{k}\) such that \(w=\sum\limits_{k}w_{k}\).
2. An undirected edge \(\{v_{i},v_{j},e\}\) with weight \(w\) can be split into two directed edges \((v_{i},v_{j},e_{1})\) and \((v_{j},v_{i},e_{2})\) each with weight \(w\). Furthermore, the directed edges can be split into multiple edges using statement 1.
For both statements, the converse is also true. Examples of each statement can be found in Fig. 2.
The matrix representation of both transformations is the same, given that the \(in-\) and \(out-\)\(degrees\) in both cases are preserved.
**Proposition 2**.: Any graph \(\mathcal{G}\) associated to an adjacency matrix \(\mathcal{A}\) can be mapped into a directed multigraph \(\mathcal{G}^{\prime}\) associated to the same adjacency matrix if and only if \(\mathcal{A}\) can be expressed as a sum of \(n\) matrices \(M_{i}\).
**Proof.**
Let \(\mathcal{A}\) be the adjacency matrix of a general graph \(\mathcal{G}(V,E,f)\). Let \(\mathcal{A}\) be decomposed into a sum of matrices, i.e. \(\mathcal{A}=\sum\limits_{i=0}^{n-1}M_{i}\). Now let \(\mathcal{G}_{i}(V,E_{i},f_{i})\) be a directed multigraph with adjacency matrix \(M_{i}\), in such a way that all graphs \(\mathcal{G}_{i}\) share the same vertex set, but not the same edge set \(E_{i}\), and each edge set is associated to an incidence function \(f_{i}\). Then, we define a new directed multigraph, \(\mathcal{G}^{\prime}\), from the union of all the individual multigraphs, in the following way
\[\mathcal{G}^{\prime}(V^{\prime},E^{\prime},f^{\prime})=\bigcup \limits_{i=0}^{n-1}\mathcal{G}_{i}(V,E_{i},f_{i}) \tag{11}\]
That is, \(V^{\prime}=V\), \(E^{\prime}=\bigcup\limits_{i=0}^{n-1}E_{i}\) and \(f^{\prime}=f_{i}\)\(\forall i\in[0,n-1]\).
Fig. 3 gives an example of how an undirected graph \(\mathcal{G}\) can be transformed into a multigraph \(\mathcal{G}^{\prime}\) following this method.
Proposition 2 is equivalent to applying both statements of proposition 1 in \(\mathcal{G}\) until no undirected edges are left. The new graph will be called \(\mathcal{G}^{\prime}\). \(\mathcal{G}^{\prime}\) will then be partitioned into \(k\) subsets and each subset will be associated to an adjacency matrix. Graph \(\mathcal{G}\) can be recovered by applying the inverse transformations to \(\mathcal{G}^{\prime}\).
Figure 2: Examples of edge transformations. (a) Transformation between a directed edge and multiple directed edges pointing in the same direction. (b) Transformation between an undirected edge and two directed edges pointing in opposite directions.

**Theorem 1**.: The transpose of the adjacency matrix \(\mathcal{A}\in\mathbb{C}^{n\times n}\) associated to a graph \(\mathcal{G}\) of order \(n>1\) can be transformed into the shift operator \(S\) of a quantum walk if and only if there exist \(m^{2}\) operators \(\mathcal{B}^{\intercal}_{ij}\in\mathbb{C}^{n\times n}\), \(\mathcal{B}^{\intercal}_{ij}:H_{P}\to H_{P}\), that satisfy the following equations simultaneously:
\[\mathcal{A}^{\intercal}=\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1} \mathcal{B}^{\intercal}_{ij} \tag{12}\] \[\sum\limits_{i=0}^{m-1}(\mathcal{B}^{\intercal}_{ij})^{\dagger} \mathcal{B}^{\intercal}_{ik}=I_{n}\delta_{jk}\] (13) \[\sum\limits_{i=0}^{m-1}(\mathcal{B}^{\intercal}_{ji})^{\dagger} \mathcal{B}^{\intercal}_{ki}=I_{n}\delta_{jk}\] (14) \[S=\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1}|c_{i}\rangle \langle c_{j}|\otimes\mathcal{B}^{\intercal}_{ij} \tag{15}\]
where \(\{|c_{i}\rangle\}\) is the canonical basis of \(\mathbb{C}^{m}\). Operators \(\mathcal{B}^{\intercal}_{ij}\) are Kraus operators [29] and \(m\) is the rank of the quantum channel they represent.
**Proof.**
Let \(\mathcal{G}(V,E,f)\) be a graph of order \(n>1\) with adjacency matrix \(\mathcal{A}\), where the quantum walk takes place. Let \(|\psi\rangle=\sum\limits_{i}\sum\limits_{k}b_{ik}|c_{i}\rangle\otimes|v_{k}\rangle\) be the quantum state vector of a walker on \(\mathcal{G}\), where the sets \(\{|v_{k}\rangle:k\in\mathbb{Z}\}\) and \(\{|c_{i}\rangle:i\in\mathbb{Z}\}\) are the canonical bases of \(\mathbb{C}^{n}\) and \(\mathbb{C}^{m}\), and are used as the bases for the Hilbert spaces \(H_{P}\) and \(H_{C}\), respectively.
Suppose that the transpose adjacency matrix of \(\mathcal{G}\) has additive decomposition \(\mathcal{A}^{\intercal}=\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1} \mathcal{B}^{\intercal}_{ij}\), constrained to the completeness relation \(\sum\limits_{i=0}^{m-1}\left(\mathcal{B}^{\intercal}\right)^{\dagger}_{ij} \mathcal{B}^{\intercal}_{ik}=I_{n}\delta_{jk}\), which implies that the matrices of the sets \(\{\mathcal{B}^{\intercal}_{ij}\}_{i=0}^{m-1}\), for fixed \(j\), form sets of Kraus operators. Now, we decompose \(\mathcal{G}(V,E,f)\) into subsets \(\mathcal{G}_{ij}(V,E_{ij},f_{i})\), each associated to an adjacency matrix \(\mathcal{B}_{ij}\). That is, \(\mathcal{G}_{ij}\) shares the same vertex set with the original graph \(\mathcal{G}\), but holds only the arcs indicated by the adjacency matrix \(\mathcal{B}_{ij}\). Then, we can associate a coin basis state \(|c_{j}\rangle\) to all of the arcs of each subgraph \(\mathcal{G}_{j}=\bigcup\limits_{i=0}^{m-1}\mathcal{G}_{ij}\), with adjacency matrix \(\mathcal{A}_{j}=\sum\limits_{i=0}^{m-1}\mathcal{B}_{ij}\), in such a way that a walker with coin \(|c_{j}\rangle\) can only move through arcs associated to that coin state. Thus, the most general shift operator that complies with this definition is the following
\[S=\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1}|c_{i}\rangle\langle c_{j}| \otimes\mathcal{B}^{\intercal}_{ij},\]
which explicitly is a block matrix of dimensions \(nm\times nm\), i.e.
\[S=\begin{pmatrix}\mathcal{B}^{\intercal}_{00}&\mathcal{B}^{\intercal}_{01}& \ldots&\mathcal{B}^{\intercal}_{0m-1}\\ \mathcal{B}^{\intercal}_{10}&\mathcal{B}^{\intercal}_{11}&\ldots&\mathcal{B} ^{\intercal}_{1m-1}\\ \vdots&\vdots&\ddots&\vdots\\ \mathcal{B}^{\intercal}_{m-10}&\mathcal{B}^{\intercal}_{m-11}&\ldots&\mathcal{ B}^{\intercal}_{m-1m-1}\end{pmatrix} \tag{16}\]
Furthermore, notice that, due to the completeness relation in Eq. (13), each of the sets of Kraus operators mentioned before is contained in a different column of \(S\), which implies that \(S\) is unitary. To prove the unitarity of \(S\), consider
\[S^{\dagger}S=\left(\sum\limits_{k=0}^{m-1}\sum\limits_{l=0}^{m-1}|c_{l} \rangle\langle c_{k}|\otimes(\mathcal{B}^{\intercal}_{kl})^{\dagger}\right) \left(\sum\limits_{r=0}^{m-1}\sum\limits_{s=0}^{m-1}|c_{r}\rangle\langle c_{s}| \otimes\mathcal{B}^{\intercal}_{rs}\right)\]
Which simplifies to
\[S^{\dagger}S=\sum\limits_{l=0}^{m-1}\sum\limits_{s=0}^{m-1}|c_{l}\rangle\langle c _{s}|\otimes\sum\limits_{k=0}^{m-1}(\mathcal{B}_{kl}^{\intercal})^{\dagger}( \mathcal{B}_{ks}^{\intercal})\]
Using the fact that \(\sum\limits_{k=0}^{m-1}(\mathcal{B}^{\intercal})_{kl}^{\dagger}\mathcal{B}_{ ks}^{\intercal}=I_{n}\delta_{ls}\), it follows that
\[S^{\dagger}S=\sum\limits_{l=0}^{m-1}\sum\limits_{s=0}^{m-1}|c_{l}\rangle \langle c_{s}|\otimes I_{n}\delta_{ls}\]
\[=\sum\limits_{s=0}^{m-1}|c_{s}\rangle\langle c_{s}|\otimes I_{n}\]
Finally, because the set \(\{|c_{s}\rangle\}\) is complete, \(\sum\limits_{s=0}^{m-1}|c_{s}\rangle\langle c_{s}|=I_{m}\). Thus,
\[S^{\dagger}S=I_{m}\otimes I_{n}=I_{nm}.\]
The transpose of a unitary matrix is also unitary; thus, repeating the same process for \(S^{\intercal}\), we conclude that Eq. (14) is necessary. That is, the rows of the shift operator each form a set of Kraus operators as well.
To study the action of \(S\), consider the general state of a quantum walker, given by Eq. (7). Applying \(S\) to \(|\psi(t)\rangle\), we obtain
\[S|\psi\rangle=\left(\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1}|c_{i} \rangle\langle c_{j}|\otimes\mathcal{B}_{ij}^{\intercal}\right)\left(\sum \limits_{s}|c_{s}\rangle\otimes|V_{s}(t)\rangle\right)\]
Which simplifies to
\[S|\psi\rangle=\sum\limits_{i}|c_{i}\rangle\otimes\sum\limits_{j}\mathcal{B}_{ij}^{\intercal}|V_{j}(t)\rangle\]
Given that \(\mathcal{B}_{ij}^{\intercal}:H_{P}\to H_{P}\), the operation \(\mathcal{B}_{ij}^{\intercal}|V_{j}(t)\rangle\) yields a new superposition of position states, \(\mathcal{B}_{ij}^{\intercal}|V_{j}(t)\rangle=\sum\limits_{k}b_{jk}^{\prime}|v_{k}\rangle\). The interpretation is that the Kraus operators, \(\mathcal{B}_{ij}\), allow walkers to shift between adjacent vertices of the subgraphs \(\mathcal{G}_{ij}\) they are associated with, similar to the action of the transition matrix in a random walk (see Eq. (4)).
In view that \(S\) is a unitary operator composed of Kraus operators that act on the position register, we conclude that \(S\) is suitable to be the shift operator of a quantum walk, which proves the conditional statement.
For the converse statement, suppose \(S\) is a unitary matrix of size \(nm\times nm\), with \(n,m>1\). Because of its dimensions, \(S\) can be subdivided into \(m^{2}\) block matrices of size \(n\times n\) each, and be written as
\[S=\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1}|c_{i}\rangle\langle c_{j}| \otimes\mathcal{B}_{ij}^{\intercal} \tag{17}\]
where \(|c_{i}\rangle\) and \(|c_{j}\rangle\) are vectors of the canonical basis in \(\mathbb{C}^{m}\).
The unitarity condition of \(S\) implies Eq. (13). Considering
\[S^{\dagger}S=\left(\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1}|c_{j}\rangle \langle c_{i}|\otimes(\mathcal{B}_{ij}^{\intercal})^{\dagger}\right)\left(\sum \limits_{k=0}^{m-1}\sum\limits_{l=0}^{m-1}|c_{k}\rangle\langle c_{l}|\otimes \mathcal{B}_{kl}^{\intercal}\right)\]
\[=\sum\limits_{j=0}^{m-1}\sum\limits_{l=0}^{m-1}|c_{j}\rangle\langle c_{l}| \otimes\sum\limits_{i=0}^{m-1}(\mathcal{B}_{ij}^{\intercal})^{\dagger}\mathcal{ B}_{il}^{\intercal}=I_{nm}\]
This equation can only be true if
\[\sum\limits_{i=0}^{m-1}\left(\mathcal{B}^{\intercal}\right)_{ij}^{\dagger} \mathcal{B}_{il}^{\intercal}=I_{n}\delta_{jl}\]
which means that the set of matrices \(\mathcal{B}_{ij}^{\intercal}\) of each column in \(S\) forms a set of Kraus operators. Again, the same process can be followed for \(S^{\intercal}\) to obtain Eq. (14).
Given that the operators \(\mathcal{B}_{ij}\) are square matrices of the same dimension, each of them can be associated to a graph \(\mathcal{G}_{ij}\). The graph of the system is then defined by \(\mathcal{G}=\mathcal{G}(V,\bigcup\limits_{i=0}^{m-1}\bigcup\limits_{j=0}^{m- 1}E_{ij},f)\), i.e. \(\mathcal{G}\) is composed of the union of arc sets of all the subgraphs \(\mathcal{G}_{ij}\), keeping the same vertex set for all of them, as described in Proposition 2. The adjacency matrix of \(\mathcal{G}\) is then given by
\[\mathcal{A}=\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{m-1}\mathcal{B}_{ij}. \tag{18}\]
The transformation of the converse statement is not unique, given that we can vary the dimensions of the basis vectors \(|c_{i}\rangle\) and block matrices \(\mathcal{B}_{ij}^{\intercal}\), with the only condition that the dimension of \(S\) remains fixed, that is the new dimensions \(n^{\prime}\) and \(m^{\prime}\) of the position and coin spaces must comply with \(n^{\prime}m^{\prime}=nm\). This completes the proof.
A consequence of the additive decomposition method is that it might change the distribution of weights in the original graph and even induce new connections between vertices in order to comply with the unitarity conditions, e.g. two originally non-connected vertices might be connected after being mapped into two parallel arcs pointing in the same direction with positive and negative weights of the same magnitude.
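The conditions of Theorem 1 are straightforward to test numerically; the sketch below, in which the helper function and the 4-cycle example are illustrative choices of ours, checks Eqs. (12)-(14) block by block and assembles \(S\) as in Eq. (16):

```python
import numpy as np

def check_theorem1(blocks, A_T, tol=1e-10):
    """Numerically verify Eqs. (12)-(14) for candidate blocks B^T_ij of size n x n."""
    m = len(blocks)
    n = blocks[0][0].shape[0]
    # Eq. (12): the blocks must add up to the transpose adjacency matrix.
    eq12 = np.allclose(sum(blocks[i][j] for i in range(m) for j in range(m)), A_T, atol=tol)
    # Eq. (13): for every pair (j, k), sum_i (B^T_ij)^dag B^T_ik = I_n * delta_jk.
    eq13 = all(np.allclose(sum(blocks[i][j].conj().T @ blocks[i][k] for i in range(m)),
                           np.eye(n) * (j == k), atol=tol)
               for j in range(m) for k in range(m))
    # Eq. (14): for every pair (j, k), sum_i (B^T_ji)^dag B^T_ki = I_n * delta_jk.
    eq14 = all(np.allclose(sum(blocks[j][i].conj().T @ blocks[k][i] for i in range(m)),
                           np.eye(n) * (j == k), atol=tol)
               for j in range(m) for k in range(m))
    # Eqs. (15)-(16): assemble the shift operator from the blocks.
    S = np.block(blocks)
    return eq12, eq13, eq14, S

# Illustrative check on a 4-cycle: B^T_00 and B^T_11 are the two shift permutations
# and the off-diagonal blocks vanish (the block-diagonal case of Corollary 1 below).
n = 4
P = np.roll(np.eye(n), 1, axis=0)
Z = np.zeros((n, n))
A_T = P + P.T                        # transpose adjacency matrix of the 4-cycle
eq12, eq13, eq14, S = check_theorem1([[P, Z], [Z, P.T]], A_T)
print("Eq.(12):", eq12, " Eq.(13):", eq13, " Eq.(14):", eq14,
      " S unitary:", np.allclose(S.conj().T @ S, np.eye(2 * n)))
```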
Next we present a corollary that follows from Theorem 1.
**Corollary 1.**
If \(\mathcal{A}^{\intercal}\) can be decomposed as the sum of \(m\) unitary matrices \(\mathcal{B}_{i}^{\intercal}\), the shift operator of the system is given by
\[S=\sum\limits_{i=0}^{m-1}|c_{i}\rangle\langle c_{i}|\otimes\mathcal{B}_{i}^{ \intercal}, \tag{19}\]
or explicitly
\[S=\begin{pmatrix}\mathcal{B}_{0}^{\intercal}&0&\dots&0\\ 0&\mathcal{B}_{1}^{\intercal}&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&\mathcal{B}_{m-1}^{\intercal}\end{pmatrix} \tag{20}\]
**Proof.**
This is the special case of Theorem 1 in which the sets of Kraus operators \({\cal B}^{\intercal}_{ij}\) consist of a unitary matrix for equal indices, and a zero matrix for different indices, i.e. \({\cal B}^{\intercal}_{ij}={\cal B}^{\intercal}_{i}\delta_{ij}\). In other words, the Kraus rank for all the sets of Kraus operators is 1.
This corollary is especially useful for the quantum circuit implementation of shift operators, and was studied by Montanaro [20] and Zhan [24] for the particular case where the Kraus operators are permutation matrices.
Now that we have analyzed in detail the matrix form of the shift operator of a quantum walk, the link between this operator and quantum circuits emerges as a corollary, which is presented next.
**Corollary 2.** Any quantum circuit acting on a bipartite system is suitable to be the shift operator in a coined quantum walk.
**Proof.**
Any unitary matrix has a quantum gate representation. A quantum circuit is no more than a set of consecutive applications of quantum gates, which is in turn unitary. Thus, we can always apply Theorem 1 to any quantum circuit that acts on a bipartite quantum register, mapping the circuit to a multigraph on which to perform a DTQW.
Fig. 4 presents two examples that together summarize Theorem 1 and both corollaries. In Fig. 4(a), we display the circuit proposed by Douglas and Wang [30] for a DTQW on a complete graph of four vertices, which uses four qubits. Next to the circuit we also display the unitary matrix, \(S\), associated to it. If we consider the total quantum register to be composed of two subregisters, then \(S\) will be automatically split into block matrices. Consider the case where the two bottom qubits correspond to the coin register and the two upper ones to the position register, then \(S\) will be a \(4\times 4\) block matrix where each block will be of size \(4\times 4\). In Fig. 4(a) the lines in \(S\) make explicit this partition, and Fig. 4(c) displays the graph and adjacency matrix associated to \(S\), and in consequence to the circuit. Fig. 4(b) corresponds to the mapping when the coin and position registers consist of 1 and 3 qubits, and 4(d) corresponds to the mapping when the coin and position registers consist of 3 and 1 qubits, respectively.
In Fig. 4(e) we present an alternative version of the circuit for a DTQW on a complete graph of four vertices, but in this case the associated unitary matrix \(S\) consists of a block diagonal matrix, thus perfectly exemplifying Corollary 1. Likewise, if we consider the coin and position registers to be both of size 2, then we obtain the partition indicated by the lines in \(S\). Fig. 4(g) is the associated mapping of this partition. In the case where the coin and position registers have sizes 1 and 3, we obtain the mapping of Fig. 4(f), and when the coin and position registers have sizes 3 and 1, we obtain Fig. 4(h), respectively.
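As a toy illustration of this converse direction, the following sketch takes a CNOT gate as a stand-in for an arbitrary two-qubit circuit, treats the first qubit as the coin register (an arbitrary choice), reads off the blocks of Eq. (17), and recovers the adjacency matrix of the induced multigraph via Eq. (18):

```python
import numpy as np

# A CNOT is used here simply as "some two-qubit circuit" with a 4x4 unitary matrix.
S = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

m = n = 2                               # coin and position dimensions, m * n = 4
# Read off the blocks B^T_ij of Eq. (17), first qubit = coin, second qubit = position.
blocks = [[S[i * n:(i + 1) * n, j * n:(j + 1) * n] for j in range(m)] for i in range(m)]

# Eq. (18): the adjacency matrix of the induced multigraph is the sum of the B_ij,
# i.e. the sum of the transposed blocks of S.
A = sum(blocks[i][j].T for i in range(m) for j in range(m))
print("adjacency matrix of the induced multigraph:\n", A)
print("S is unitary:", np.allclose(S.T @ S, np.eye(4)))
```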
**Proposition 3**.: _Given a general shift operator \(S\), associated to a graph \(\mathcal{G}\) as described in Theorem 1, the evolution operator of a coined quantum walk can be completed by choosing a general coin of the form_
\[{\cal C}=\sum_{k=0}^{n-1}C_{k}\otimes|v_{k}\rangle\langle v_{k}| \tag{21}\]

Figure 4: (a) Quantum circuit along with its unitary matrix representation. The sizes of the coin (\(m\)) and position (\(n\)) registers can vary, holding \(n+m=4\). This partitions \(S\) into \(2^{m}\) matrices of sizes \(2^{n}\). \(S\) is subdivided for the case \(m=2,n=2\). (b), (c) and (d) are multigraphs and adjacency matrices associated to \(S\) when \(m=1\), \(n=3\); \(m=2\), \(n=2\) and \(m=3\), \(n=1\), respectively. Colors are used to distinguish the subgraphs associated to different sets of Kraus operators. Red, blue, green and black arcs in (c) are associated to the first, second, third and fourth columns in the example operator \(S\) in (a). Each color is associated to a different coin state. Notice that when \(m=2\) and \(n=2\) the quantum circuit corresponds to the shift operator of a quantum walk on a complete graph of four nodes with self-loops as proposed in [30].
where all operators \(C_{k}\) are unitary of size \(m\times m\).
Furthermore, if all operators \(C_{k}\) are the same, we obtain the usual form of the coin operator, i.e.
\[{\cal C}=C\otimes I \tag{22}\]
**Proof.**
It is convenient to start the analysis with the simplest case to comprehend the general idea. Let \(C\) be an arbitrary unitary matrix, then the operator \(C\otimes I_{n}\) takes the following form in explicit matrix notation
\[C\otimes I_{n}=\left(\begin{array}{cccc}c_{00}I_{n}&c_{01}I_{n}&\ldots&c_{0 m-1}I_{n}\\ c_{10}I_{n}&c_{11}I_{n}&\ldots&c_{1m-1}I_{n}\\ \vdots&\vdots&\ddots&\vdots\\ c_{m-10}I_{n}&c_{m-11}I_{n}&\ldots&c_{m-1m-1}I_{n}\end{array}\right) \tag{23}\]
where weights \(c_{ij}\) are the entries of \(C\).
Now let the shift operator of a quantum walk take the form of Eq. (16). Then, the evolution operator can be written in block matrix notation as
\[U=S(C\otimes I_{n})=\left(\begin{array}{cccc}\sum\limits_{i=0}^{m-1}c_{i0}{ \cal B}_{0i}^{\sf T}&\sum\limits_{i=0}^{m-1}c_{i1}{\cal B}_{0i}^{\sf T}&\ldots &\sum\limits_{i=0}^{m-1}c_{im-1}{\cal B}_{0i}^{\sf T}\\ \sum\limits_{i=0}^{m-1}c_{i0}{\cal B}_{1i}^{\sf T}&\sum\limits_{i=0}^{m-1}c_{ i1}{\cal B}_{1i}^{\sf T}&\ldots&\sum\limits_{i=0}^{m-1}c_{im-1}{\cal B}_{1i}^{ \sf T}\\ \vdots&\vdots&\ddots&\vdots\\ \sum\limits_{i=0}^{m-1}c_{i0}{\cal B}_{m-1i}^{\sf T}&\sum\limits_{i=0}^{m-1}c_ {i1}{\cal B}_{m-1i}^{\sf T}&\ldots&\sum\limits_{i=0}^{m-1}c_{im-1}{\cal B}_{m- 1i}^{\sf T}\end{array}\right) \tag{24}\]
Similar to the case of the shift operator described in Theorem 1, the complete evolution operator also has a block matrix form, whose columns constitute sets of Kraus operators, and walkers with coin state \(|c_{j}\rangle\) move through the subgraph, \({\cal G}_{j}\), whose transpose adjacency matrix, \({\cal A}_{j}^{\sf T}\), is obtained by the addition of all block elements of column \(j\). This becomes evident when considering the explicit vector form of the state of a walker (Eq. (8)), from which we can see that in \(U|\psi(t)\rangle\), the elements of column \(j\) act on the substate \(|V_{j}(t)\rangle\). The transpose adjacency matrix of \({\cal G}_{j}\) has the following form
\[{\cal A}_{j}^{\sf T}=\sum\limits_{k=0}^{m-1}\sum\limits_{i=0}^{m-1}c_{kj}{ \cal B}_{ik}^{\sf T} \tag{25}\]
That is, each time the \(U\) operator is applied on \(|\psi(t)\rangle\), all the position states with coin state \(|c_{j}\rangle\) will be modified by the action of \({\cal A}_{j}^{\sf T}\), which is a weighted sum of all the Kraus operators that compose the original matrix \({\cal A}^{\sf T}\) (see Eq. 12).
For the case of the general coin, suppose the \(k\)th coin \(C_{k}\) has entries \(c_{ij}^{k}\). Then, eq. (21) can also be written as a block matrix
\[\mathcal{C}=\begin{pmatrix}D_{00}&D_{01}&\dots&D_{0m-1}\\ D_{10}&D_{11}&\dots&D_{1m-1}\\ \vdots&\vdots&\ddots&\vdots\\ D_{m-10}&D_{m-11}&\dots&D_{m-1m-1}\end{pmatrix} \tag{26}\]
Where
\[D_{ij}=\begin{pmatrix}c^{0}_{ij}&0&\dots&0\\ 0&c^{1}_{ij}&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&c^{m-1}_{ij}\end{pmatrix} \tag{27}\]
Thus,
\[U=S\,\mathcal{C}=\begin{pmatrix}\sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{0i}D_{i0}&\sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{0i}D_{i1}&\dots&\sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{0i}D_{im-1}\\ \sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{1i}D_{i0}&\sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{1i}D_{i1}&\dots&\sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{1i}D_{im-1}\\ \vdots&\vdots&\ddots&\vdots\\ \sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{m-1\,i}D_{i0}&\sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{m-1\,i}D_{i1}&\dots&\sum\limits_{i=0}^{m-1}\mathcal{B}^{\intercal}_{m-1\,i}D_{im-1}\end{pmatrix} \tag{28}\]
The operation \(\mathcal{B}^{\intercal}_{ij}D_{ji}\) scales the whole \(l\)th column of \(\mathcal{B}^{\intercal}_{ij}\) by the factor \(c^{l}_{ji}\in D_{ji}\), and the \(l\)th column of \(\mathcal{B}^{\intercal}_{ij}\) contains the weights of all the arcs that come out of the \(l\)th node towards other nodes.
Similar to the former case, the walker with coin state \(|c_{j}\rangle\) evolves according to the matrix
\[\mathcal{A}^{\intercal}_{j}=\sum\limits_{k=0}^{m-1}\sum\limits_{i=0}^{m-1} \mathcal{B}^{\intercal}_{ik}D_{kj} \tag{29}\]
In principle, there are no restrictions on the values the scaling factors can take in both cases studied if the coin operator is unitary. Thus we conclude that any unitary matrix applied on the coin register is suitable to complete the evolution operator of the quantum walk.
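A vertex-dependent coin of the form of Eq. (21) can be assembled directly; in the following sketch the assignment of a Hadamard coin to even vertices and the identity to odd vertices, as well as the 4-cycle shift operator, are arbitrary illustrative choices:

```python
import numpy as np

n, m = 4, 2
# Vertex-dependent coins: Hadamard on even-numbered vertices, identity on odd ones.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
coins = [H if k % 2 == 0 else np.eye(m) for k in range(n)]

# General coin of Eq. (21): C = sum_k C_k (x) |v_k><v_k|.
C = sum(np.kron(Ck, np.outer(np.eye(n)[k], np.eye(n)[k])) for k, Ck in enumerate(coins))

# Block-diagonal shift operator on a 4-cycle (the form of Corollary 1).
P = np.roll(np.eye(n), 1, axis=0)
S = np.kron(np.diag([1.0, 0.0]), P) + np.kron(np.diag([0.0, 1.0]), P.T)

U = S @ C                                # evolution operator with the general coin
print("C unitary:", np.allclose(C.conj().T @ C, np.eye(n * m)))
print("U unitary:", np.allclose(U.conj().T @ U, np.eye(n * m)))
```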
The action of applying a coin operator to a shift operator is to split each of the arcs of the graph associated to \(S\) into \(2^{m}\) new arcs, each associated to a different coin state. If we consider the subsets of edges associated to the same coin state, we find that each subset forms a graph with a similar structure to the one associated to the shift operator. In other words, the action of the evolution operator can be interpreted as creating a superposition of \(2^{m}\) graphs with the same connections as the one associated to \(S\) but with modified weights.
To exemplify this, consider the quantum circuit of Fig. 4(e): if we take the first two qubits of the register to represent the position state and the last two qubits to represent the coin state, the graph associated to this system is given by Fig. 4(g). Now, as can be seen from Fig. 5, if we apply a Hadamard coin to the circuit shown in Fig. 4(e) to complete the evolution operator, the associated graph will be a superposition of four similar graphs, whose weights are modified according to the entries of the coin operator matrix. Notice that, as a consequence, if Theorem 1 is applied to construct \(S\), the sum of all the Kraus operators in \(U\) will not yield back the original adjacency matrix, since the coin operator modifies the weights in \(S\).
The circuit form of a general coin operator is displayed in Fig. 6. In this circuit, we consider the white dots as 0's and the black dots as 1's, in such a way that we can see the sequence of white and black dots as binary code, where the least significant bit is given by the uppermost dot. Each operator \(C_{i}\) acts only on the position state \(|v_{k}\rangle\) whose index \(k\) matches the binary representation of the sequence of white and black dots.
Figure 5: (a) Displays the full evolution operator in both circuit and matrix form of a quantum walk on a complete graph of four nodes with self-loops, for the case where the coin and position registers have sizes of \(m=2\) and \(n=2\), respectively. (b) Displays the subgraphs associated to each of the columns of the operator \(U\). The top-left, top-right, bottom-left and bottom-right graphs correspond to the first, second, third and fourth columns of \(U\), respectively. The graph associated to \(U\) is given by the union of the edges of the four subgraphs, using the same vertex set for all of them, as shown in Fig. 3. Notice that the subgraph associated to each column of \(U\) is similar to the total graph associated to the shift operator \(S\) in Fig. 4(g), changing only the weights of the arrows according to the entries of the matrix representation of the coin operator.
## 5 Conclusion
In this paper we have studied unitary coined DTQW from a matrix approach, from which we have been able to prove that the shift operator of a DTQW can be thought of as a quantum mechanical version of the transition matrix of a random walk. We did this by proving a set of necessary and sufficient equations with which we can map the transpose adjacency matrix associated to a graph \(\mathcal{G}\) into the unitary operator of a bipartite quantum system, which we call shift operator. The shift operator can be applied on a quantum state of the bipartite system, modifying the probability amplitudes of the state, and, thus, generating a quantum walk. The bipartite state of a quantum walk can then be conceived as a scaled-up version of a stochastic vector, which holds the information of multiple walks happening simultaneously on a multigraph version of \(\mathcal{G}\), called \(\mathcal{G}^{\prime}\). A general description of the mapping from \(\mathcal{G}\) to \(\mathcal{G}^{\prime}\) and of the dynamics of a quantum walker on \(\mathcal{G}^{\prime}\) was also given.
The importance of the set of equations (12)-(15) resides in the fact that, in general, it is difficult to obtain the evolution operator of a quantum walk for a specific graph. Now, given the adjacency matrix, \(\mathcal{A}\), of the graph on which we want to perform the quantum walk, the process for mapping \(\mathcal{A}^{\intercal}\) into \(\mathcal{S}\) is standardized and can be automated.
To complete the evolution operator, we also extended the idea of a quantum coin, in such a way that any controlled or non-controlled unitary operator can be suitable to be the coin operator of a DTQW. Although this fact might result in a biased DTQW in some cases, we do not see it as a reason to impose further restrictions on the coin operator. With this extension we can even choose a different coin for each vertex in the most extreme case, or make combinations of coins. That will depend on the purpose of the algorithm we want to run.

Figure 6: (a) Consider black dots to be 1's and white dots to be 0's. The sequence of a general coin is completed by controlling \(2^{n}\) gates using all binary strings. If less than \(2^{n}\)\(C_{i}\) gates are fully controlled to generate a coin, the remaining ones will be the identity by default. (b) In the case in which all the multi-control gates have the same target gate \(C_{0}\), the circuit can be reduced to the application of one single gate \(C_{0}\) to the coin register.
This way, we make progress on both of the problems stated in the first section of the paper, i.e. we present a method to build the evolution operator of a unitary coined quantum walk from scratch, and describe the link between bipartite quantum circuits and quantum walks. Nevertheless, the set of graphs for which this work is applicable is only the one whose adjacency matrices can be decomposed as presented in Eq. (12). Regarding the link between quantum circuits and evolution operators of quantum walks, we only provided a way to go from quantum circuits to evolution operators but not the converse, although the converse problem reduces to mapping a unitary matrix into a set of quantum gates, which is a problem that has been studied in different works [31, 32].
## Acknowledgments
Both authors acknowledge the financial support provided by Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias and Consejo Nacional de Ciencia y Tecnologia (CONACyT). SEVA acknowledges the support of CONACyT-SNI [SNI number 41594]. SEVA acknowledges the unconditional support of his family.
## Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
## Conflict of Interest
All authors declare that they have no conflicts of interest.
|
2308.08115 | Solving and Completing the Rabi-Stark Model in the Ultrastrong Coupling
Regime | In this work,we employ a unitary transformation with a suitable parameter to
convert the quantum Rabi-Stark model into a Jaynes-Cummings-like model.
Subsequently, we derive the analytical energy spectra in the ultrastrong
coupling regime. The energy spectra exhibit a phenomenon known as spectral
collapse, indicating the instability of the model due to the unboundedness of
its energy from below at higher coupling parameters. To stabilize the
Rabi-Stark model, we introduce a nonlinear photon-photon interaction term. We
then compare the modified model with the original model in the classical
oscillator (CO) limit. Interestingly, we observe a regular "staircase" pattern
in the mean photon number of the ground state. This pattern exhibits a fixed
slope and equal step width, which we determine analytically. Moreover, we
analytically determine the phase boundary, which slightly differs from that in
the original Rabi-Stark model. These findings offer insights into the
investigation of those superradiant phase transitions that are unbounded from
below due to the phenomenon of spectral collapse. | Gen Li, Hao Zhu, Guo-Feng Zhang | 2023-08-16T03:01:19Z | http://arxiv.org/abs/2308.08115v1 | # Solving and Completing the Rabi-Stark Model in the Ultrastrong Coupling Regime
###### Abstract
In this work, we employ a unitary transformation with a suitable parameter to convert the quantum Rabi-Stark model into a Jaynes-Cummings-like model. Subsequently, we derive the analytical energy spectra in the ultrastrong coupling regime. The energy spectra exhibit a phenomenon known as spectral collapse, indicating the instability of the model due to the unboundedness of its energy from below at higher coupling parameters. To stabilize the Rabi-Stark model, we introduce a nonlinear photon-photon interaction term. We then compare the modified model with the original model in the classical oscillator (CO) limit. Interestingly, we observe a regular "staircase" pattern in the mean photon number of the ground state. This pattern exhibits a fixed slope and equal step width, which we determine analytically. Moreover, we analytically determine the phase boundary, which slightly differs from that in the original Rabi-Stark model. These findings offer insights into the investigation of those superradiant phase transitions that are unbounded from below due to the phenomenon of spectral collapse.
## I Introduction
The quantum Rabi model [1; 2], which describes the interaction between a two-level system and a single quantized harmonic oscillator, has been extensively studied in various fields such as trapped ion systems [3], cold atoms [4], and cavity quantum electrodynamics [5]. Its Hamiltonian is given by
\[\hat{H}_{\text{R}}=\omega a^{\dagger}a+\frac{\Delta}{2}\sigma_{z}+g(a^{ \dagger}+a)\sigma_{x}, \tag{1}\]
where \(a\) and \(a^{\dagger}\) are the annihilation and creation operators of the oscillator with frequency \(\omega\), respectively. \(\Delta\) represents the resonant frequency of the two-level system, \(\sigma_{i}\) (with \(i=x,y,z\)) denotes the Pauli operators, and \(g\) is the linear interaction strength. The presence of counter-rotating wave terms makes the exact solution of the quantum Rabi model challenging. To address this difficulty, Jaynes and Cummings [6] introduced the rotating wave approximation (RWA) and established a connection between the quantum Rabi model and the idealized Jaynes-Cummings model which, with U(1) symmetry, can be easily solved by considering a finite-dimensional subspace of the Hilbert space. However, the RWA may lead to a loss of accuracy especially in large coupling region when applied to the quantum Rabi model. To overcome these limitations and retain the physical information of the Hamiltonian, various transformation methods have been proposed. By employing a series of transformations and operations, it is possible to convert the original Rabi model into a Jaynes-Cummings-like model under specific conditions [7]. This approach provides an alternative method for solving the quantum Rabi model, instead of relying solely on the rotating wave approximation.
Several models, derived from the quantum Rabi model, exhibit distinct physical properties compared to the original Rabi model [8; 9; 10; 11]. One such model is the quantum Rabi-Stark model, which incorporates a freely adjustable nonlinear term. The Rabi-Stark model has been extensively studied in theoretical research, revealing numerous novel features [12; 13; 14; 15; 16]. Notably, this model exhibits a quantum phase transition when a specific critical value of the model's parameter is exceeded [12]. Traditionally, the study of quantum phase transitions in infinite-component systems requires the thermodynamic limit. However, recent research [17; 18; 19; 20; 21] has shown that the classical oscillator (CO) limit can replace the thermodynamic limit in finite-component systems. This is because both the CO limit and the thermodynamic limit realize the same mean-field transition when considering a quantum phase transition through mean-field theory [19]. In the CO limit, characterized by an infinite ratio of qubit frequency to field frequency, quantum phase transitions can be observed even in the simplest quantum Rabi model and the Jaynes-Cummings model [20; 21]. These findings contributed to a deeper understanding of quantum phase transitions.
Apart from the novel phase transition features of the Rabi-Stark model, its energy spectrum is equally captivating. In the ultrastrong coupling regime, characterized by a relatively large value of the interaction strength \(g\) [22], which has been experimentally achieved [23; 24; 25; 26; 27; 28], the Rabi-Stark model exhibits the phenomenon of spectral collapse [29; 30] at a critical point where the entire negative energy level collapses. Upon crossing this critical point, the system's lowest energy decreases indefinitely, rendering the Hamiltonian unbounded from below and revealing incomplete physical characteristics [31]. While numerous studies have primarily focused on the energy spectrum properties within the well-defined parameter regime of this model, further investigation is required to understand the instability induced by the absence of the ground state in the Rabi-Stark model.
In this paper, we employ a similar methodology to solve the quantum Rabi model by simplifying the Rabi-Stark model into a Jaynes-Cummings-like model using an appropriate parameter. We compare our obtained results with numerical simulations, where the Hilbert space is truncated to a finite dimension, and we also investigate the sources of error. To address the incomplete characteristics of the Rabi-Stark model, we introduce a nonlinear photon-photon interaction term to eliminate the spectral collapse, thereby stabilizing the model [32; 31]. Furthermore, we conduct additional investigations to explore the physical properties of the modified Rabi-Stark model and analyze the impact of the newly introduced term in comparison to the original model.
The structure of this paper is organized as follows: In Section II, we introduce the Rabi-Stark model along with our proposed completed Rabi-Stark model. The method employed to
solve both models is presented in Section III, where the obtained results are also provided. Finally, the conclusions are drawn in Section IV.
## II Models
As one of the simplest and most fundamental theoretical models in quantum optics, the quantum Rabi model holds significant influence on the development of other models. The approach used to solve the quantum Rabi model can also be applied, to some extent, in solving these related models. In this section, we introduce the models under study, which share similarities with the quantum Rabi model.
### Rabi-Stark Model
Grimsmo and Parkins [8] proposed a scheme in which the system comprises two stable hyperfine ground states of a multi-level atom, an optical cavity mode, and two additional laser fields. This scheme offers a generalization of the quantum Rabi model, referred to as the Rabi-Stark model, by introducing an extra nonlinear term to its effective Hamiltonian. The corresponding Hamiltonian for the Rabi-Stark model is given by
\[\begin{split}\hat{H}_{\text{RS}}=&\hat{H}_{\text{R }}+\frac{U}{2}a^{\dagger}a\sigma_{z}\\ =&\omega a^{\dagger}a+\frac{\Delta}{2}\sigma_{z}+g \left(a^{\dagger}+a\right)\sigma_{x}+\frac{U}{2}a^{\dagger}a\sigma_{z}.\end{split} \tag{2}\]
In this model, the first three terms align with the quantum Rabi model. The final term in the Hamiltonian represents the nonlinear coupling between the atom and the field. Here, \(U\) denotes the interaction strength associated with the dynamical Stark shift, which serves as the quantum counterpart of the classical Bloch-Siegert shift. Notably, in the experimental arrangement of the aforementioned quantum Rabi-Stark model, the nonlinear coupling strength \(U\) is freely adjustable, distinguishing it from the typical dynamical Stark shift.
### Completed Rabi-Stark Model
As mentioned in the introduction and elaborated upon in the following section, the Rabi-Stark model is prone to instability due to the spectral collapse phenomenon occurring when the nonlinear coupling strength \(U\) exceeds the critical value of \(2\omega\). This phenomenon suggests that the system's energy becomes unbounded from below for large \(U\) values, resulting in the absence of a ground state--a highly counterintuitive spectral feature.
Motivated by the concept of the completed Buck-Sukumar model [31] and the quest for potential stabilization methods, we introduce a variant of the Rabi-Stark model named as the completed Rabi-Stark model. In this model, we incorporate a nonlinear photon term \(\kappa(a^{\dagger}a)^{2}\), specifically, a photon-photon interaction term. This addition enables the generation of topological photon pairs with robust transport properties [32], which have attracted attention in experimental study [33; 34; 35]. The modified Hamiltonian is given by the expression
\[\begin{split}&\hat{H}_{\text{cRS}}=\hat{H}_{\text{RS}}+\kappa(a^{ \dagger}a)^{2}=\hat{H}_{\text{R}}+\frac{U}{2}a^{\dagger}a\sigma_{z}+\kappa(a^{ \dagger}a)^{2}\\ &=\omega a^{\dagger}a+\frac{\Delta}{2}\sigma_{z}+g\left(a^{ \dagger}+a\right)\sigma_{x}+\frac{U}{2}a^{\dagger}a\sigma_{z}+\kappa(a^{ \dagger}a)^{2}.\end{split} \tag{3}\]
In the presented Hamiltonian expression, the final term represents the interaction between photons. The coupling parameter \(\kappa\), governing the nonlinear photon-photon interaction, is small enough to preserve the unique physical characteristics of the original model. Introducing this nonlinear photon-photon interaction term, we demonstrate that the modified model successfully mitigates the occurrence of spectral collapse and exhibits notable deviations from the original model in terms of both energy spectrum and phase transition.
To ensure clarity and precision, we assign specific names to various coupling parameters in this article. The linear interaction strength, denoted by \(g\), is referred to as the Rabi coupling. The nonlinear coupling strength, denoted by \(U\), is referred to as the Stark coupling. Finally, the parameter \(\kappa\) governing the nonlinear photon coupling is referred to as the photon coupling.
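For concreteness, the following sketch builds the Hamiltonians of Eqs. (1)-(3) in a Fock basis truncated at \(N\) photon states; the parameter values and the cutoff are arbitrary illustrative choices, and the truncated diagonalization is of the same kind as the numerical benchmark used in the figures below:

```python
import numpy as np

def hamiltonians(omega=1.0, Delta=1.0, g=0.3, U=1.0, kappa=0.01, N=60):
    """Rabi, Rabi-Stark and completed Rabi-Stark Hamiltonians, Eqs. (1)-(3),
    in a Fock basis truncated to N photon states."""
    a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
    ad = a.T                                       # creation operator
    n_op = ad @ a
    I_f, I_q = np.eye(N), np.eye(2)
    sx = np.array([[0, 1], [1, 0]])
    sz = np.array([[1, 0], [0, -1]])

    H_R = (omega * np.kron(I_q, n_op)
           + 0.5 * Delta * np.kron(sz, I_f)
           + g * np.kron(sx, ad + a))
    H_RS = H_R + 0.5 * U * np.kron(sz, n_op)
    H_cRS = H_RS + kappa * np.kron(I_q, n_op @ n_op)
    return H_R, H_RS, H_cRS

for name, H in zip(["Rabi", "Rabi-Stark", "completed Rabi-Stark"], hamiltonians()):
    evals = np.linalg.eigvalsh(H)
    print(f"{name:>22s}: lowest three levels = {np.round(evals[:3], 4)}")
```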
## III Method and Results
The organization of this section is as follows: In Section A, we use an approximation method to solve the quantum Rabi-Stark model under specific conditions, obtaining an approximate analytical expression for its energy in the ultrastrong coupling regime. Section B focuses on the Rabi-Stark model in the CO limit, where quantum phase transitions commonly occur. In Section C, we stabilize the quantum Rabi-Stark model by introducing a nonlinear photon-photon interaction term and discuss the energy spectra properties of the completed Rabi-Stark model. Section D explores the completed Rabi-Stark model in the CO limit, studying its physical features and comparing them with the original Rabi-Stark model.
### Energy Spectrum
We first start with a rotation of the Rabi-Stark Hamiltonian [Eq.(2)] around the \(y\)-axis, the Hamiltonian becomes
\[\hat{H}_{\text{E}}=\left(\frac{\Delta}{2}+\frac{U}{2}a^{\dagger}a\right)\sigma _{x}+\omega a^{\dagger}a-g\left(a^{\dagger}+a\right)\sigma_{z}. \tag{4}\]
Then we perform a unitary transformation with \(\hat{A}=\exp\!\left[\lambda\sigma_{z}(a^{\dagger}-a)\right]\) and transform the Hamiltonian into the representation of \(\sigma_{x}\) (i.e., \(\sigma_{x}\left|\pm x\right>=\pm\left|\pm x\right>\) and \(\sigma_{x}=\tau_{z}\)).
When the parameter \(\lambda\) is chosen to satisfy that
\[\frac{\lambda\omega+g}{\lambda e^{-2\lambda^{2}}}+\frac{\Delta+U\lambda^{2}+Un}{n +1}L_{n}^{1}(4\lambda^{2})=\frac{U}{2}L_{n}(4\lambda^{2})T_{z}, \tag{5}\]
where \(T_{z}\) is a real parameter that takes on values of \(\pm 1\). The detailed derivations can be found in Appendix A. The effective Hamiltonian takes the form of a Jaynes-Cummings-like model, which is predominantly diagonal except for its final term,
\[H_{\rm E}=\omega a^{\dagger}a+\frac{\tilde{\Delta}}{2}\tau_{z}+ \frac{U}{2}\tau_{z}f(a^{\dagger}a)+\tilde{C}+\tilde{g}(\tau_{+}a+\tau_{-}a^{ \dagger}), \tag{6}\]
where
\[\begin{split} f(a^{\dagger}a)&=e^{-2\lambda^{2}}L_ {n}(4\lambda^{2})a^{\dagger}a\\ &+2\lambda^{2}e^{-2\lambda^{2}}\left[\frac{L_{n}^{1}(4\lambda^{2 })a^{\dagger}a}{n+1}-\frac{aL_{n+1}^{1}(4\lambda^{2})a^{\dagger}}{n+2}\right],\\ \tilde{\Delta}&=(\Delta+U\lambda^{2})L_{n}(4\lambda ^{2})e^{-2\lambda^{2}},\\ \tilde{C}&=2g\lambda+\omega\lambda^{2},\\ \tilde{g}&=\lambda\omega+g-\frac{\lambda\left( \Delta+U\lambda^{2}+Un\right)L_{n}^{1}(4\lambda^{2})e^{-2\lambda^{2}}}{n+1}\\ &-\frac{U\lambda}{2}L_{n}(4\lambda^{2})e^{-2\lambda^{2}}T_{z}. \end{split} \tag{7}\]
From the Hamiltonian expression, the energy spectra can be obtained within the subspace \(\left\{\left|+x,n\right\rangle,\left|-x,n+1\right\rangle\right\}\), \(n=0,1,2\cdots\), given by
\[H_{\rm E}=\left(\begin{array}{cc}H_{11}&H_{12}\\ H_{21}&H_{22}\end{array}\right). \tag{8}\]
In the experimental setup where the Rabi coupling \(g\) is relatively small (\(g<0.5\omega\)) in the ultrastrong coupling regime, the parameter \(\lambda\) is also small [7]. The Laguerre polynomial and the associated Laguerre polynomial can be expanded up to the zeroth-order term, \(L_{n}(4\lambda^{2})\simeq 1\) and \(L_{n}^{1}(4\lambda^{2})\simeq n+1\), respectively, leading to an approximate analytical solution
\[\begin{split}&\lambda\simeq\\ &-\frac{g}{\omega+(\Delta\pm\frac{U}{2}+Un)\exp\biggl{\{}\left[-2 \left(\frac{g}{\omega+\Delta\pm U/2+Un}\right)^{2}\right]\biggr{\}}}.\end{split} \tag{9}\]
Furthermore, since the Hamiltonian [Eq.(6)] adopts the Jaynes-Cummings form, the energy expectation value of \(\left|-x,0\right\rangle\) can be directly obtained as
\[E_{0}=\omega\lambda^{2}+2\lambda\;g-\frac{\Delta-U\lambda^{2}+4U\lambda^{4}}{ 2}e^{-2\lambda^{2}}, \tag{10}\]
where the state \(\left|-x,0\right\rangle\) corresponds to the ground state of the system before the level crossing. It is worth noting that when the Stark coupling \(U\) is zero, these results align with those of the quantum Rabi model [7].
The Hamiltonian of the Rabi-Stark model is now reduced to a two-dimensional form, which allows its approximate solution, convergent in the ultrastrong coupling regime, to be obtained with this method. Similar to the Jaynes-Cummings model, the energy levels can be labeled by the excitation number, denoted as \(E_{\left|n,\pm x\right\rangle}\), implying that this model exhibits near "superintegrability" [36] for smaller values of the Rabi coupling within the ultrastrong coupling regime. However, for larger values of the Rabi coupling, an additional "good quantum number" that uniquely labels each energy level cannot be found, except for parity and energy [14], indicating the limitations of our analytical method. To validate our results, a comparison with numerical simulations will be conducted, and any discrepancies will be clearly observed in the subsequent figures.

Figure 1: (Color online) The ground-state energy and the first five excited-state energies of the Rabi-Stark model are plotted as a function of the Rabi coupling \(g\) in units of \(\omega\), with \(\Delta=\omega\) and \(U=\omega\). The red triangle symbols represent the truncated numerical results, and the blue lines depict the results obtained using our method.

Figure 2: (Color online) The errors \(\delta E\) of the ground state energy versus the Rabi coupling \(g\) and the Stark coupling \(U\) in units of \(\omega\). The error is calculated as \(\delta E=\left|E_{\rm g}-E_{\rm RS}\right|\), where \(E_{\rm g}\) is the ground state energy obtained through our analytical method, and \(E_{\rm RS}\) is the ground state energy obtained through truncated numerical simulations. The red cross symbols are obtained by calculating the positions where two energy levels intersect, resulting in a twofold degenerate ground state.
In Fig. 1, we present the energy levels of the Rabi-Stark model, including the ground state energy and the first five excited states, as a function of the Rabi coupling \(g\). Our results exhibit good agreement with the direct numerical simulations, except for a potential loss of accuracy when the Rabi coupling \(g\) approaches approximately \(0.5\omega\). Furthermore, noticeable discrepancies between the analytical and numerical results are observed in the immediate vicinity of the level crossing or avoided level crossing positions [14] depicted in Fig. 1. These results suggest that the relative error in our analytical method is primarily influenced by larger values of the Rabi coupling \(g\) and the occurrence of level crossings.
The validity of our analytical method is further demonstrated by plotting the error of the ground state energy in Fig. 2. In region I, the analytical solution of the ground state energy is given by Eq. (10), while in region II, it is determined by taking the minimum eigenvalue of Eq. (8). The red cross in Fig. 2 represents the exact position where the ground state energy intersects with the "first excited state". For a given value of \(U\), the error initially increases with an increasing Rabi coupling \(g\) before reaching a local maximum at the level crossing. Subsequently, the error slightly decreases before rapidly increasing again as \(g\) continues to increase. These observations suggest that large values of \(g\), \(U\), and the presence of a level crossing are the primary factors leading to inaccuracies in our analytical method. This aligns with the nature of our perturbation-based approach, which inherently faces challenges near level crossings. Overall, Fig. 2 provides insight into the effective range of our analytical method, specifically in region I and the lower left portion of region II. Additionally, it is noteworthy that our method remains valid for smaller values of \(g\) (approximately \(g\lesssim 0.3\omega\)), even in the presence of a level crossing with a large \(U\), as illustrated in Fig. 2.
Fig. 3 depicts the energy spectra obtained using both the truncated numerical method and our analytical approach for the Rabi-Stark model under varying Stark couplings \(U\). Remarkably, our analytical method consistently produces energy spectra that exhibit excellent agreement with the truncated numerical results. Importantly, this accuracy is maintained even when considering additional energy levels in the plot. Thus, our approach proves effective in investigating the Rabi-Stark model for smaller values of \(g\) within the ultra-strong coupling regime. Fig. 3 also highlights the steepening slope of the corresponding energy level with respect to \(U\), ultimately approaching infinity. This observation signifies that as the value of the Stark coupling \(U\) surpasses the critical point \(U=2\omega\), marking a quantum phase transition [12], the energy level \(E_{|n,-x\rangle}\) associated with the ground state no longer exists, and the corresponding value of \(n\) becomes infinite. This phenomenon arises from the unbounded nature of the spectrum in this regime, which lacks a ground state and is counterintuitive in physics.
In Fig. 3, the energy levels exhibit a notable degeneracy due to the collapse of negative branches when \(U=2\omega\). To further investigate this behavior, we present in Fig. 4 the ground state energy as a function of the Rabi coupling \(g\) while maintaining a fixed Stark coupling of \(U=2\omega\), using the truncated numerical method. The inset of Fig. 4 shows the first ten lowest energy levels as the Hamiltonian cutoff is varied, demonstrating that the energy levels collapse to a convergent point. This convergence signifies the presence of a pronounced degeneracy at this specific value of the Stark coupling, indicating the existence of a well-defined ground state energy.

Figure 3: (Color online) The energy spectrum of the Rabi-Stark model is shown as a function of the Stark coupling \(U\) in units of \(\omega\), with \(\Delta=\omega\) and \(g=0.2\omega\). The positive and negative branches of the energy are represented by solid and dotted lines, respectively. The red cross symbols correspond to the truncated numerical results of the first thirty lowest energy levels displayed in the figure. Both the solid and dotted lines depict the results obtained through our analytical method.

Figure 4: (Color online) The ground state energy is plotted as a function of the Rabi coupling \(g\) for a fixed Stark coupling \(U=2\omega\) using the truncated numerical method. Inset: the first ten lowest energy levels are shown as a function of the Hamiltonian cutoff for the Stark coupling \(U=2\omega\), with \(\Delta=\omega\) and \(g=0.2\omega\).
### Original Model in the CO Limit
**The presence of the spectral collapse phenomenon at finite frequency ratios motivates us to analytically investigate its causes and determine whether it disappears in the classical oscillator limit (CO limit), where the quantum phase transition is usually considered. To achieve this, we employ common decoupling methods in the CO limit to derive a clear analytical expression for the excitation energy.** We start by considering the Jaynes-Cummings-like model Hamiltonian given by Eq. (6) in the CO limit. **This limit implies an infinitely large ratio between the qubit frequency and the field frequency, denoted as \(\omega\ll\Delta\), resulting in outcomes that are equivalent to those attained in the thermodynamic limit.** Within the ultrastrong coupling regime, which encompasses the range \(0.1\omega<g<\omega\), Eq. (6) can be simplified as
\[\begin{split} H_{\text{eff}}&=\omega a^{\dagger}a +\frac{\Omega(n)}{2}\tau_{z}+\omega\lambda^{2}+2\lambda g+\frac{U}{2}\tau_{z}a ^{\dagger}a\\ &\quad+\Gamma(n)(\tau_{+}a+\tau_{-}a^{\dagger}),\end{split} \tag{11}\]
where \(\Omega(n)=(\Delta-2U\lambda^{2})e^{-2\lambda^{2}}\simeq\Delta\), \(\Gamma(n)=g+\lambda(G-Un-\Delta)\), \(G=\omega-UT_{z}/2\), and the limit condition on \(\lambda\) is transformed to
\[\lambda\simeq-\frac{g}{Un+\Delta+G}. \tag{12}\]
To decouple the interaction induced by the atomic operator, we employ a unitary transformation using the operator \(\hat{B}=\exp[\Gamma(n)/\Omega(n)(a^{\dagger}\tau_{-}-a\tau_{+})]\). This transformation allows us to express Eq. (11) as
\[\begin{split} H_{\text{eff}}\simeq\\ \omega a^{\dagger}a+\frac{\Delta}{2}\tau_{z}+\omega\lambda^{2}+2 \lambda g+\frac{U}{2}\tau_{z}a^{\dagger}a+\frac{\Gamma(n)^{2}}{\Delta}\tau_{z} a^{\dagger}a,\end{split} \tag{13}\]
in the normal phase. Our focus lies on the low-energy states, which means \(n\) can be considered small. By projecting Eq. (13) onto the lower energy level of the two-level system subspace, we get
\[H_{\text{eff}}=(\omega-\frac{U}{2}-C)a^{\dagger}a-\frac{\Delta}{2}+\omega \lambda^{2}+2\lambda g, \tag{14}\]
where \(C=4g^{2}G^{2}/[\Delta(Un+\Delta+G)^{2}]\simeq 0\), and \(E_{\text{np}}=-\Delta/2+\omega\lambda^{2}+2\lambda g\simeq-\Delta/2+2\lambda g\) represents the ground state energy without excitation, consistent with Eq. (10) in the CO limit. The excitation energy is thus \(\epsilon_{\text{np}}=\omega-U/2-C\). It is important to note that Eq. (14) holds only in the CO limit and in the ultrastrong coupling regime. Consequently, the final term in \(\epsilon_{\text{np}}\) contributes little, allowing us to rewrite the excitation energy as \(\epsilon_{\text{np}}=\omega-U/2\). A vanishing excitation energy, \(\epsilon_{\text{np}}=0\), indicates that the field mode becomes macroscopically occupied, which yields the solution \(U=2\omega\). This condition serves as the phase boundary between the normal phase and the superradiant phase.
The above solution can be explained by the Hamiltonian expression in Eq. (2). By projecting \(\sigma_{z}\) onto the lower energy level of the two-level system subspace with Stark coupling \(U=2\omega\), the Hamiltonian can be expressed as \(\hat{H}_{\text{RS}}=-\Delta/2+g(a^{\dagger}+a)\sigma_{x}\). In the ultrastrong coupling regime and in the CO limit, the latter term involving \(g\) is considerably weaker than the former term, so the term \(g(a^{\dagger}+a)\sigma_{x}\) can be disregarded. Thus, the photon occupation number has minimal impact on the overall energy at \(U=2\omega\). This observation signifies the emergence of high degeneracy at \(U=2\omega\), which explains the occurrence of spectral collapse. When \(U>2\omega\) and the excitation energy \(\epsilon_{\text{np}}\) is negative, the energy decreases with an increasing number of photons. In such a scenario, there is no value of \(n\) that minimizes the energy, resulting in a Hamiltonian that is unbounded from below.
From this analysis, we observe that although the spectral collapse phenomenon persists in the CO limit, we gain a deeper comprehension of its underlying mechanisms. **Notably, these findings align with the behavior exhibited by the Rabi model [20] when \(U=0\) in the regime of ultrastrong coupling and the CO limit, where the Rabi model simplifies to a trivial form. However, our results possess greater physical significance for \(U\neq 0\), as they provide valuable insights into the causes of the spectral collapse phenomenon.** It is worth noting that the analytical derivation in the so-called superradiant phase, where the ground state is absent, is meaningless and thus omitted in this discussion.
### Adding Nonlinear Photon-Photon Interaction Term
From the energy spectra shown in Fig. 3, it is evident that the quantum Rabi-Stark model experiences spectral collapse and demonstrates an incomplete physical characteristic in the ultrastrong coupling regime when the Stark coupling \(U\) surpasses the critical point \(U=2\omega\). To stabilize this system, ensure a bounded Hamiltonian from below, and eliminate the occurrence of the spectral collapse, we introduce a nonlinear term quadratic in \(a^{\dagger}a\), representing photon-photon interaction, to the original Rabi-Stark model Hamiltonian. The modified Hamiltonian is provided in Eq. (3).
Utilizing the solution derived in the previous section, we employ a similar approach to solve the present model. The
resulting transformed effective Hamiltonian takes the form
\[H_{\text{E}}^{{}^{\prime}} =H_{\text{d}}+H_{\text{nd}}+H_{sd}+H_{\kappa nd} \tag{15}\] \[=\omega a^{\dagger}a+\omega\lambda^{2}+2\lambda g+(\frac{\Delta+U \lambda^{2}}{2})\tau_{z}G_{0}(n)\] \[+\frac{U}{2}\lambda\tau_{z}[F_{1}(n)a^{\dagger}a-aF_{1}(n+1)a^{ \dagger}]+\frac{U}{2}\tau_{z}G_{0}a^{\dagger}a\] \[+\kappa[\lambda^{4}+\lambda^{2}+4\lambda^{2}a^{\dagger}a+a^{ \dagger}aa^{\dagger}a]+(\tau_{+}a+\tau_{-}a^{\dagger})\] \[\quad\times(\lambda\omega+g+2\kappa\lambda^{3}+2\kappa\lambda a^ {\dagger}a-\kappa\lambda T_{z}-R_{+}(\lambda)),\]
where \(\lambda\) satisfies the equation:
\[\lambda\omega+g+2\kappa\lambda^{3}+2\kappa\lambda a^{\dagger}a+\kappa\lambda T _{z}+R_{-}(\lambda)=0, \tag{16}\]
Furthermore, by considering the \(2\times 2\) subspace, similar to Eq. (8), the excited state energy can be determined as
\[H_{11}^{{}^{\prime}} =H_{11}+\kappa[\lambda^{4}+\lambda^{2}+4\lambda^{2}n+n^{2}], \tag{17}\] \[H_{12}^{{}^{\prime}} =H_{12}+2\kappa\lambda^{3}+\kappa\lambda(2n+1),\] \[H_{21}^{{}^{\prime}} =H_{21}+2\kappa\lambda^{3}+\kappa\lambda(2n+1),\] \[H_{22}^{{}^{\prime}} =H_{22}+\kappa[\lambda^{4}+\lambda^{2}+4\lambda^{2}(n+1)+(n+1)^{ 2}],\]
Moreover, the energy expectation value of \(|-x,0\rangle\) can be expressed directly as
\[E_{0}^{{}^{\prime}} =\omega\lambda^{2}+2\lambda g \tag{18}\] \[+\left(\frac{U\lambda^{2}-\Delta}{2}-2U\lambda^{4}\right)e^{-2 \lambda^{2}}+\kappa\lambda^{2}(1+\lambda^{2}).\]
Fig. 5 displays the energy spectra of the completed Rabi-Stark model as a function of the Stark coupling \(U\). Our analytical results demonstrate good agreement with the truncated numerical results. In contrast to the original Rabi-Stark model shown in Fig. 3, the lowest energy level no longer exhibits divergence when the value of the Stark coupling \(U\) exceeds the original critical point. This implies the existence of a specific ground state energy level in the system. The inclusion of a nonlinear photon term ensures the system remains well-defined even for \(U>2\omega\). Notably, the first energy level crossing occurs after the original critical point without the presence of high degeneracy, signifying the elimination of spectral collapse.
### Completed Model in the CO Limit
With the successful avoidance of spectrum collapse through the introduction of the nonlinear term, our attention now turns to the behavior of the completed Rabi-Stark model in the classical oscillator (CO) limit and the presence of a possible phase transition.
The mean photon number, as shown in Fig. 6, exhibits a distinctive "staircase" pattern as a function of the Stark coupling \(U\) in the CO limit, i.e., \(\omega\ll\Delta\). In this case, each step of the "staircase" shows a uniform width, and as the parameter \(\kappa\) increases, the first step, representing the intersection point of the ground and first excited energy levels, shifts towards larger values of \(U\), contrary to the critical point at \(U=2\omega\) in the original model. Conversely, as \(\kappa\) decreases, the step widths in the staircase pattern become narrower, resulting in a steeper staircase, and ultimately diverging to infinity as \(\kappa\) approaches zero. This divergence signifies the transition of the completed Rabi-Stark model back to the original Rabi-Stark model, where the phase transition occurs.
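A minimal numerical sketch of this staircase, assuming (as above) that the completed model of Eq. (3) is the Rabi-Stark Hamiltonian plus \(\kappa(a^{\dagger}a)^{2}\); the parameter values \(g=0.1\omega\), \(\Delta=200\omega\), \(\kappa=0.05\omega\) and the Fock-space cutoff are illustrative choices only.

```python
import numpy as np

omega, Delta, g, kappa = 1.0, 200.0, 0.1, 0.05  # assumed example values (units of omega)
n_cut = 80

a = np.diag(np.sqrt(np.arange(1, n_cut)), 1)
num = a.T @ a
I_f, I_2 = np.eye(n_cut), np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def mean_photon_number(U):
    """Ground-state <a^dag a> of H_RS + kappa*(a^dag a)^2 by truncated diagonalization."""
    H = (omega * np.kron(num, I_2)
         + 0.5 * Delta * np.kron(I_f, sz)
         + 0.5 * U * np.kron(num, sz)
         + g * np.kron(a + a.T, sx)
         + kappa * np.kron(num @ num, I_2))
    E, V = np.linalg.eigh(H)
    gs = V[:, 0]
    return float(np.real(gs.conj() @ np.kron(num, I_2) @ gs))

# Step edges should appear near U = 2*omega + 2*kappa + 4*n*kappa, cf. Eq. (21) below
for U in np.arange(1.95, 2.60, 0.05):
    print(f"U = {U:.2f} omega : <n> = {mean_photon_number(U):.3f}")
```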
**Considering that \(\Delta\) needs to be large enough in the CO limit and \(\kappa\) needs to be small enough to preserve some physical features of the original model, such as the phase transition, we set \(\kappa=1/\Delta\) and plot the renormalized mean photon number \(\langle a^{\dagger}a\rangle/\Delta\) in Fig. 7 as a function of \(U\) while keeping \(\Delta/\omega\to\infty\). Interestingly, it is observed that the latter part of the mean photon number function displays a linear behavior with a fixed slope.**
In Fig. 7(c), employing a logarithmic y-axis for clarity, it becomes evident that as \(\Delta\) increases, the mean photon number of the ground state exhibits a sharper change in the vicinity of the critical point when \(\kappa=1/\Delta\). The mean photon number of the ground state serves as an order parameter, and this behavior bears resemblance to the second-order transition observed in the Jaynes-Cummings model within the classical oscillator limit (CO limit) [21].

Figure 5: (Color online) The energy spectrum of the completed Rabi-Stark model is shown as a function of the Stark coupling \(U\) with \(\Delta=\omega\) and \(g=0.2\omega\): (a) \(\kappa=0.1\omega\); (b) \(\kappa=0.01\omega\). The solid and dashed lines correspond to the positive and negative branches of energy obtained using our method. The red cross symbols represent the numerical results of the first thirty lowest energy levels obtained through the truncated numerical method.

Figure 6: (Color online) The mean photon number of the ground state in the completed Rabi-Stark model is shown as a function of the Stark coupling \(U\) for different photon coupling \(\kappa\) in units of \(\omega\), with \(g=0.1\omega\) and \(\Delta=200\omega\).
Next, we analyze the features shown above using analytical methods. In the ultrastrong coupling regime, the completed Rabi-Stark model can be approximately reduced to a \(2\times 2\) Hilbert subspace as shown in Eq. (17). In the CO limit, where the parameter \(\Delta/\omega\) becomes infinitely large, the constraint condition for the parameter \(\lambda\) can be approximated as,
\[\lambda\simeq-\frac{g}{Un+\Delta+G+2\kappa n-\kappa T_{z}}, \tag{19}\]
By considering terms up to first-order precision in the matrix elements of the Hamiltonian within the \(2\times 2\) subspace and neglecting the term that contributes insignificantly compared to other terms in the expression for the eigenenergies, the lower eigenenergy of this energy matrix can be written as
\[E_{|n,-x\rangle}=-\frac{1}{2}nU-\frac{\Delta}{2}+n(n\kappa+\omega)+2\lambda g. \tag{20}\]
where \(n=1,2,\ldots\). Noting that Eq. (18) is consistent with this expression upon taking \(n=0\) in the CO limit, the negative branches of the spectrum can be approximated by this unified equation with \(n=0,1,2,\ldots\).
From this equation, we can obtain the level crossing positions as
\[U=2\omega+2\kappa+4n\kappa,\qquad n=0,1,2\cdots. \tag{21}\]
When \(n=0\), the first intersection position is \(U=2\omega+2\kappa\), which illustrates the influence of the added nonlinear photon term on the phase boundary. Moreover, the distance between neighboring intersection positions is \(4\kappa\), which explains the equal step width in Fig. 6.
The linear relationship between the renormalized mean photon number and the Stark coupling \(U\) can also be further confirmed by considering Eq. (21) in the form \(U=2\omega+2\kappa+4n/\Delta\) under the condition \(\kappa=1/\Delta\), which yields a slope of \(l=1/4\). Additionally, a more detailed analysis using a geometric approach in Appendix B leads to the same result.
In summary, within the ultrastrong coupling regime and for small photon coupling \(\kappa\), as the Stark coupling \(U\) exceeds a certain threshold, the mean photon number of the ground state transitions from \(0\) to \(1\) instead of diverging to infinity. This transition is characterized by a "staircase" pattern in the mean photon number as a function of \(U\). As the photon coupling \(\kappa\) increases, the step width in the staircase pattern, which is analytically found to be \(4\kappa\), widens. This observation indicates that larger values of the photon coupling \(\kappa\) have a significant influence on the behavior of the mean photon number for Stark coupling values beyond the critical point, impeding photon transitions between energy levels. Furthermore, the completed Rabi-Stark model exhibits a phase boundary shifted by \(2\kappa\) compared to the original model, resulting in a higher critical point. Notably, the mean photon number demonstrates a linear relationship with the Stark coupling \(U\), featuring a fixed slope of \(l=1/4\) as the photon coupling \(\kappa\) approaches \(1/\Delta\).
## IV Conclusions
In this study, we present an analytical approach to approximate the quantum Rabi-Stark model in the ultrastrong coupling regime by transforming it into a solvable Jaynes-Cummings-like model. We validate the effectiveness of our analytical method by comparing our results with direct numerical calculations. By analyzing the energy spectra shown in Fig. 3, we observe the occurrence of the spectral collapse phenomenon at the critical point \(U=2\omega\), which marks the boundary of the superradiant phase transition. When the Stark coupling \(U>2\omega\), the system lacks a ground state, resulting in an unbounded energy. To address this instability, we introduce a nonlinear photon term \(\kappa(a^{\dagger}a)^{2}\), representing photon-photon interaction, to the Hamiltonian. Remarkably, this additional term stabilizes the model and eliminates the spectral collapse phenomenon, allowing the system to possess a well-defined ground state and to be completed.
Besides, we investigate both the completed Rabi-Stark model and the original Rabi-Stark model in the classical oscillator (CO) limit. By neglecting higher-order terms with negligible contributions to the energy, we derive a simplified analytical expression for the energy spectra. We provide a theoretical explanation for the macroscopic occupation of the ground state and the occurrence of spectral collapse in the original model. For the completed Rabi-Stark model in the CO limit, we discover that the energy spectrum of the negative branches exhibits a linear relationship, and all energy levels can be described by a unified expression [Eq. (20)]. The mean photon number of the ground state displays a "staircase" pattern with a constant step width of \(4\kappa\), as demonstrated analytically. **The photon coupling \(\kappa\) influences photon transitions between energy levels.** Specifically, when \(\kappa\) approaches \(1/\Delta\), the mean photon number of the ground state exhibits a fixed slope of \(l=1/4\), which distinguishes it from the behavior of the original Rabi-Stark model. **Moreover, the mean photon number undergoes a second-order transition, reminiscent of the behavior observed in the Jaynes-Cummings model within the CO limit.** We also investigate the impact of the additional photon-photon interaction term on the quantum phase transition. **The phase boundary is shifted by \(2\kappa\), leading to a new critical point at \(U=2\omega+2\kappa\).** This term transforms the superradiant phase into a "completed" phase throughout the coupling regime, in contrast to the unstable superradiant phase of the original Rabi-Stark model. Our findings shed light on the study of the unbounded-from-below superradiant phase transition caused by the spectral collapse phenomenon.

Figure 7: (Color online) The renormalized mean photon number of the ground state in the completed Rabi-Stark model is shown as a function of the Stark coupling \(U\) in units of \(\omega\), with \(g=0.1\omega\), \(\kappa=1/\Delta\) and (a) \(\Delta=200\omega\), (b) \(\Delta=1000\omega\). (c) The mean photon number of the ground state in the completed Rabi-Stark model versus the Stark coupling \(U\) for different \(\Delta\) in units of \(\omega\), with \(g=0.1\omega\), \(\kappa=1/\Delta\). The behavior of the mean photon number becomes sharper near the critical point as the ratio \(\Delta/\omega\) increases.
###### Acknowledgements.
This work was supported by NSFC under Grant No. 12074027.
## Appendix A Simplification of the Rabi-Stark Model
We apply a unitary transformation using the operator \(\exp\!\left[\lambda\sigma_{z}(a^{\dagger}-a)\right]\) to the Hamiltonian [Eq. (4)], where \(\lambda\) is a parameter constrained by a condition specified below. Following this transformation, the effective Hamiltonian can be divided into four distinct parts,
\[\hat{H}_{\rm E}=\hat{H}_{1}+\hat{H}_{2}+\hat{H}_{3}+\hat{H}_{4}, \tag{16}\]
where
\[\hat{H}_{1} =\omega a^{\dagger}a-\lambda\omega\sigma_{z}\left(a^{\dagger}+a \right)+\omega\lambda^{2},\] \[\hat{H}_{2} =\frac{\Delta}{2}\{\sigma_{x}\cosh[2\lambda(a^{\dagger}-a)]+i \sigma_{y}\sinh[2\lambda(a^{\dagger}-a)]\},\] \[\hat{H}_{3} =-g[\sigma_{z}\left(a^{\dagger}+a\right)-2\lambda],\] \[\hat{H}_{4} =\frac{U}{2}\cosh[2\lambda(a^{\dagger}-a)][\sigma_{x}(\lambda^{2 }+a^{\dagger}a)+i\sigma_{y}\lambda(a^{\dagger}+a)]\] \[+\frac{U}{2}\sinh[2\lambda(a^{\dagger}-a)][\lambda\sigma_{x}(a^{ \dagger}+a)+i\sigma_{y}(\lambda^{2}+a^{\dagger}a)]. \tag{17}\]
Here the terms \(\cosh[2\lambda(a^{\dagger}-a)]\) and \(\sinh[2\lambda(a^{\dagger}-a)]\) can be expanded by the expansion formula of \(\cosh(x)\) and \(\sinh(x)\). These two terms can be written as
\[\cosh[2\lambda(a^{\dagger}-a)] =G_{0}(N)+G_{1}(N)a^{\dagger 2}+a^{2}G_{1}(N)+\cdots,\] \[\sinh[2\lambda(a^{\dagger}-a)] =F_{1}(N)a^{\dagger}-aF_{1}(N)\] \[+F_{2}(N)a^{\dagger 2}-a^{2}F_{2}(N)+\cdots,\]
where \(G_{i}(N)\,(i=0,1,2,\ldots)\) and \(F_{j}(N)\,(j=1,2,\ldots)\) depend on the parameter \(\lambda\) and the photon number \(n\). Therefore, other terms in Eq. (16) can be simplified as
\[\sinh [2\lambda(a^{\dagger}-a)](a^{\dagger}+a)=\] \[F_{1}(N)a^{\dagger 2}+F_{1}(N)a^{\dagger}a-aF_{1}(N)a^{\dagger}-aF_{1 }(N)a+\cdots,\] \[\cosh [2\lambda(a^{\dagger}-a)](a^{\dagger}+a)=G_{0}(N)a^{\dagger}+G_{0 }(N)a+\cdots,\] \[\sinh [2\lambda(a^{\dagger}-a)]a^{\dagger}a=[F_{1}(N)a^{\dagger}-aF_{1 }(N)]a^{\dagger}a+\cdots,\] \[\cosh [2\lambda(a^{\dagger}-a)]a^{\dagger}a=G_{0}(N)a^{\dagger}a+G_{1 }(N)a^{\dagger 3}a+\cdots. \tag{18}\]
The high-order terms of \(a\) and \(a^{\dagger}\) are neglected since they primarily impact the far off-diagonal elements of the Hamiltonian. These terms make minimal contributions to the total energy, justifying their neglect within perturbation theory. As a result, the effective Hamiltonian can be simplified to
\[\hat{H}_{\rm E} =\hat{H}_{1}+\hat{H}_{2}+\hat{H}_{3}+\hat{H}_{4}\] \[=\omega a^{\dagger}a-(\lambda\omega+g)\sigma_{z}\left(a^{\dagger }+a\right)+\omega\lambda^{2}+2\lambda g\] \[+\left(\frac{\Delta}{2}+\frac{U}{2}\lambda^{2}\right)\left\{ \sigma_{x}G_{0}(n)+i\sigma_{y}\left[F_{1}(n)a^{\dagger}-aF_{1}(n)\right]\right\}\] \[+\frac{U}{2}\sigma_{x}\left[\lambda F_{1}(n)a^{\dagger}a-\lambda aF _{1}(n)a^{\dagger}+G_{0}(n)a^{\dagger}a\right]\] \[+\frac{iU}{2}\sigma_{y}\left\{\left[F_{1}(n)a^{\dagger}-aF_{1}(n )\right]a^{\dagger}a+\lambda G_{0}(n)(a^{\dagger}+a)\right\}, \tag{19}\]
where \(G_{0}(n)\) and \(F_{1}(n)\) can be calculated directly, with \(G_{0}(n)=\bra{n}G_{0}(N)\ket{n}=\bra{n}\cosh[2\lambda(a^{\dagger}-a)]\ket{n}=L_{ n}(4\lambda^{2})\exp\!\left(-2\lambda^{2}\right)\), and \(F_{1}(n)=\bra{n+1}\sinh[2\lambda(a^{\dagger}-a)]\ket{n}=2\lambda L_{n}^{1}(4 \lambda^{2})\exp\!\left(-2\lambda^{2}\right)/(n+1)\). In the expression above \(L_{n}(x)\) is the Laguerre polynomial, and \(L_{n}^{1}(x)\) is the associated Laguerre polynomial with superscript 1.
After performing a transformation of the Hamiltonian into the representation defined by \(\sigma_{x}\) (i.e. \(\sigma_{x}\ket{\pm x}=\pm\ket{\pm x}\)), where \(\sigma_{x}=\tau_{z},\quad\sigma_{y}=-i\left(\tau_{+}-\tau_{-}\right),\quad \sigma_{z}=-\left(\tau_{+}+\tau_{-}\right)\), the effective Hamiltonian can be simplified as the sum of diagonal and non-diagonal terms, denoted by \(H_{\rm d}\) and \(H_{\rm nd}\), respectively.
\[H_{\rm d}=\omega a^{\dagger}a+\omega\lambda^{2}+2\lambda g+( \frac{\Delta}{2}+\frac{U}{2}\lambda^{2})\tau_{z}G_{0}\left(n\right)\] \[+\frac{U}{2}\lambda\tau_{z}[F_{1}(n)a^{\dagger}a-aF_{1}(n+1)a^{ \dagger}]+\frac{U}{2}\tau_{z}G_{0}(n)a^{\dagger}a,\] \[H_{\rm nd}=\left(\tau_{+}a+\tau_{-}a^{\dagger}\right)(\lambda \omega+g-R_{+}(\lambda))\] \[\qquad\qquad\qquad+(\tau_{+}a^{\dagger}+\tau_{-}a)(\lambda\omega +g+R_{-}(\lambda)), \tag{20}\]
with
\[R_{+}\left(\lambda\right) =\frac{1}{2}F_{1}(n)\left(\Delta+U\lambda^{2}+Un\right)+\frac{U}{2}G _{0}(n)\lambda T_{z},\] \[R_{-}\left(\lambda\right) =\frac{1}{2}F_{1}(n)\left(\Delta+U\lambda^{2}+Un\right)-\frac{U}{2} G_{0}(n)\lambda T_{z}. \tag{10}\]
In the nondiagonal term, \(T_{z}\) takes values of \(\pm 1\) corresponding to the different energy levels of the spin operator associated with \(\tau_{z}\). The parameter \(\lambda\) is chosen to satisfy
\[\lambda\omega+g-\frac{U}{2}G_{0}\left(n\right)\lambda T_{z}+\frac{F_{1}\left(n \right)}{2}\left(\Delta+U\lambda^{2}+Un\right)=0. \tag{11}\]
The resulting effective Hamiltonian takes the form of a Jaynes-Cummings-like model, which is diagonal except for its final term,
\[H_{\text{E}} =\left(\omega+\frac{U}{2}\tau_{z}G_{0}(n)\right)a^{\dagger}a+ \left(\frac{\Delta}{2}+\frac{U}{2}\lambda^{2}\right)\tau_{z}G_{0}(n)\] \[+(2g+\omega\lambda)\lambda+\frac{U}{2}\lambda\tau_{z}\left[F_{1} (n)a^{\dagger}a-aF_{1}(n+1)a^{\dagger}\right]\] \[+\left(\tau_{+}a+\tau_{-}a^{\dagger}\right)\left[\lambda\omega+g -R_{+}(\lambda)\right]. \tag{12}\]
In the experimental setup where the Rabi coupling \(g\) is relatively small (\(g<0.5\omega\)) in the ultrastrong coupling regime, the parameter \(\lambda\) is also small [7]. The Laguerre polynomial and the associated Laguerre polynomial can be expanded up to the zeroth-order term, \(L_{n}(4\lambda^{2})\simeq 1\) and \(L_{n}^{1}(4\lambda^{2})\simeq n+1\), respectively, leading to an approximate analytical solution
\[\lambda\simeq\] \[-\frac{g}{\omega+(\Delta\pm\frac{U}{2}+Un)\exp\biggl{\{}\left[-2 \left(\frac{g}{\omega+\Delta\pm U/2+Un}\right)^{2}\right]\biggr{\}}}. \tag{13}\]
From this Hamiltonian expression in Eq. (12) mentioned above, the energy spectrum can be obtained within the subspaces \(\{\left|+x,n\right\rangle,\left|-x,n+1\right\rangle\}\), \(n=0,1,2\cdots\),
\[H_{\text{E}}=\left(\begin{array}{cc}H_{11}&H_{12}\\ H_{21}&H_{22}\end{array}\right), \tag{14}\]
where
\[H_{11} =n\omega+2g\lambda+\lambda^{2}\omega+e^{-2\lambda^{2}}L_{n}\left( 4\lambda^{2}\right)\left(\frac{\Delta+nU+\lambda^{2}U}{2}\right)+\lambda^{2} Ue^{-2\lambda^{2}}\left[\frac{L_{n+1}^{1}\left(4\lambda^{2}\right)}{n+2}-\frac{L_{n}^{1 }\left(4\lambda^{2}\right)}{n+1}\right],\] \[H_{12} =\sqrt{n+1}\left[g+\lambda\omega-\frac{1}{2}\lambda Ue^{-2 \lambda^{2}}L_{n}\left(4\lambda^{2}\right)-\frac{\lambda e^{-2\lambda^{2}}L_{ n}^{1}\left(4\lambda^{2}\right)\left(\Delta+nU+\lambda^{2}U\right)}{n+1}\right],\] \[H_{21} =\sqrt{n+1}\left[g+\lambda\omega+\frac{1}{2}\lambda Ue^{-2 \lambda^{2}}L_{n+1}\left(4\lambda^{2}\right)-\frac{\lambda e^{-2\lambda^{2}}L _{n+1}^{1}\left(4\lambda^{2}\right)\left(\Delta+(n+1)U+\lambda^{2}U\right)}{ n+2}\right],\] \[H_{22} =(n+1)\omega+2g\lambda+\lambda^{2}\omega-e^{-2\lambda^{2}}\left\{ L_{n+1}(4\lambda^{2})\frac{\Delta+(n+1+\lambda^{2})U}{2}+U\lambda^{2}\left[ \frac{L_{n+2}^{1}\left(4\lambda^{2}\right)}{n+3}-\frac{L_{n+1}^{1}\left(4 \lambda^{2}\right)}{n+2}\right]\right\}.\] |
2308.07226 | Converting a triplet Cooper pair supercurrent into a spin-signal | Superconductivity with spin-polarized Cooper pairs is known to emerge by
combining conventional spinless superconductors with materials that have
spin-dependent interactions, such as magnetism and spin-orbit coupling. This
enables a dissipationless and conserved flow of spin. However, actually
utilizing the spin-polarization of such supercurrents have proven challenging.
Here, we predict an experimental signature of current-carrying triplet Cooper
pairs in the form of an induced spin-signal. We show that a supercurrent
carried only by triplet Cooper pairs induces a non-local magnetization that is
controlled by the polarization direction of the triplet Cooper pairs. This
provides a measurement protocol to directly use the spin-polarization of the
triplet Cooper pairs in supercurrents to transfer spin information in a
dissipationless manner. | Sigrid Aunsmo, Jacob Linder | 2023-08-14T16:01:30Z | http://arxiv.org/abs/2308.07226v1 | # Converting a triplet Cooper pair supercurrent into a spin-signal
###### Abstract
Superconductivity with spin-polarized Cooper pairs is known to emerge by combining conventional spinless superconductors with materials that have spin-dependent interactions, such as magnetism and spin-orbit coupling. This enables a dissipationless and conserved flow of spin. However, actually utilizing the spin-polarization of such supercurrents has proven challenging. Here, we predict an experimental signature of current-carrying triplet Cooper pairs in the form of an induced spin-signal. We show that a supercurrent carried only by triplet Cooper pairs induces a non-local magnetization that is controlled by the polarization direction of the triplet Cooper pairs. This provides a measurement protocol to directly use the spin-polarization of the triplet Cooper pairs in supercurrents to transfer spin information in a dissipationless manner.
## I Introduction.
Substantial efforts have been made to find evidence of dissipationless currents carried by spin-polarized Cooper pairs in superconductors [1; 2], also referred to as triplet Cooper pairs. This includes measurements of how Gilbert damping is renormalized in Josephson junctions and superconducting bilayers subject to ferromagnetic resonance [3; 4; 5; 6; 7; 8], as well as long-ranged supercurrent flow through magnetic materials [9; 10; 11]. Spin-pumping in ferromagnet/superconductor (F/S) hybrid structures [12; 13], where triplet pairing will affect the spin accumulation in the superconductor, has recently been subject of increased theoretical interest [14; 15; 16; 17; 18; 19; 20; 21].
However, a long-ranged supercurrent through magnetic materials is in itself not necessary useful. Supercurrents through normal metals carried by spinless, singlet Cooper pairs are also long-ranged when flowing through a normal metal. Thus, to unlock the potential of spin-polarized supercurrents with regard to potential cryogenic devices, it is necessary to find a way to utilize their spin-polarization directly. We here predict that a supercurrent carried by spin-polarized Cooper pairs induces a non-local magnetization. Both the polarization direction and magnitude of this magnetization is directly controlled by the spin degree of freedom of the triplet Cooper pairs. This shows how spin supercurrents can be used for low-dissipation information transfer by inducing spin-signals.
The supercurrent flow is converted into a non-local magnetization, occuring in a region where the supercurrent does not flow, by allowing it to interact with a Rashba spin-orbit coupled interface. This can be probed in the setup shown in Fig. 1. We consider the scenario experimentally realized in _e.g._ Refs. [10; 11]: a spin-triplet charge supercurrent generated by a magnetic multilayered structure. In the experiments, the spin-polarized nature of the Cooper pairs carrying the current was not directly measured, but rather inferred from its slow decay as a function of the length of the ferromagnetic bridge in a Josephson junction. In contrast, we provide a way to directly convert the spin of the triplet supercurrent into a spin-signal. The induced magnetization \(\mathbf{M}\) in the normal metal (N) changes direction depending on the spin-polarization of triplet pairs carrying the supercurrent. Without current, the non-local magnetization vanishes. The induced magnetization \(\mathbf{M}\) vanishes for certain spin-polarization directions of the pairs relative the Rashba interface normal. The predicted effect provides a way to directly use the spin-polarization of the triplet Cooper pairs in supercurrents to transfer spin information in a dissipationless manner.
## II Theory
The quasiclassical theory of superconductivity [22; 23; 24] is documented to compare well with experimental results, even quantitatively, for measurements done in mesoscopic superconducting hybrid structures. Our starting point is the Usadel equation which can be used in the diffusive limit of transport where the length scale hierarchy \(\lambda_{F}\ll l_{\text{mfp}}\ll\xi\) applies, with \(\lambda_{F}\) being the Fermi wavelength, \(l_{\text{mfp}}\) the electronic mean free path, and \(\xi\) the superconducting coherence length. It is effectively an equation of motion for the Green function matrix \(\hat{g}\) in Keldysh space and takes the form
\[D\nabla(\hat{g}\nabla\hat{g})+i[E\hat{\rho}_{3}+\hat{\mathbf{h}}+\hat{\Delta}, \hat{g}]=0. \tag{1}\]
Here, \(D\) is the diffusion coefficient, \(E\) is the quasiparticle energy, \(\hat{\mathbf{h}}=\text{diag}(\mathbf{h}\cdot\mathbf{\tau},\mathbf{h}\cdot\mathbf{\tau}^{*})\) where \(\mathbf{h}\) describes the spin-splitting field, \(\mathbf{\tau}\) is a vector with Pauli matrices as its components, \(\hat{\rho}_{3}=\text{diag}(1,1,-1,-1)\), and \(\hat{\Delta}\) describes the superconducting order parameter. The quasiclassical Green function \(\hat{g}\) is an 8\(\times\)8 matrix in Keldysh space and thus has a retarded, advanced, and Keldysh component [25]. Since we will consider a system in equilibrium, \(\hat{g}\) is entirely determined by the retarded Green function \(\hat{g}^{R}\), which is a \(4\times 4\) matrix in Nambu-spin space. This retarded Green function depends on both the normal Green function \(\underline{g}\), which is a \(2\times 2\) matrix in spin space determining spectral properties such as the density of states, and the anomalous Green function \(\underline{f}\), which contains information about the superconducting correlations.

Figure 1: (Color online) Sketch of the proposed experimental protocol for converting a supercurrent carried by spin-polarized triplet Cooper pairs into a non-local spin signal. The supercurrent flows in a region underneath a thin heavy metal layer. Due to interfacial Rashba spin-orbit coupling, the triplet pairs induce a magnetization in a normal metal where no supercurrent is flowing. The induced magnetization depends sensitively on the polarization direction of the triplet pairs.
To solve this equation, it needs to be complemented by boundary conditions. In our system, we will have both magnetic and spin-orbit coupled interfaces. For the magnetic (also referred to as spin-active) interfaces we have
\[\hat{g}_{L}\partial_{z}\hat{g}_{L} =G_{0}[\hat{g}_{L},\hat{g}_{R}]+G_{1}[\hat{g}_{L},\hat{m}\hat{g}_ {R}\hat{m}]\] \[+G_{MR}[\hat{g}_{L},\{\hat{g}_{R},\hat{m}\}]-iG_{\varphi}[\hat{g}_ {L},\hat{m}]. \tag{2}\]
Here, \(\hat{m}=\text{diag}(\mathbf{m}\cdot\mathbf{\tau},\mathbf{m}\cdot\mathbf{\tau}^{*})\) where \(\mathbf{m}\) is a unit vector that describes the interface magnetization, \(G_{0}\) describes the ratio between the barrier resistance and the normal-state resistance of material \(L\), \(G_{MR}\) describes the magnetoresistance effect of the interface, while \(G_{1}\) originates from the spin-dependent transmission probabilities of a spin-active interface. Finally, \(G_{\varphi}\) describes the effect of quasiparticles picking up spin-dependent phase shifts as they scatter at the interface. The boundary conditions for \(\hat{g}_{R}\) are obtained by interchanging \(L\) and \(R\) and multiplying the entire right-hand side with \((-1)\). For an in-depth discussion and derivation of these boundary conditions, see [1; 26; 27]. We will also need the boundary conditions for a spin-orbit coupled interface [28; 29]. Defining \(\mathbf{\tau}_{\parallel}=(\rho_{x},0,\rho_{z})\), these read:
\[D\bar{g}_{R}\partial_{y}\bar{g}_{R} =T_{0}^{2}[\bar{g}_{L},\bar{g}_{R}]-\tfrac{3}{2}T_{1}^{2}p_{F}^{2}[\bar{g}_{R},\mathbf{\tau}_{\parallel}\bar{g}_{L}\mathbf{\tau}_{\parallel}]\] \[-mDT_{0}T_{1}[\bar{g}_{R},\{\mathbf{\tau}_{\parallel,x},\bar{g}_{L}\partial_{z}\bar{g}_{L}\}]\] \[+Dd\alpha^{2}[\rho_{x},\bar{g}_{R}\rho_{x}\bar{g}_{R}]+Dd\alpha^{2}[\rho_{z},\bar{g}_{R}\rho_{z}\bar{g}_{R}] \tag{3}\]
where \(T_{0}\) and \(T_{1}\) are phenomenological interface parameters describing respectively spin-independent tunneling and spin-flip tunneling induced by the interfacial Rashba spin-orbit coupling, \(m\) is the electron mass, \(p_{F}\) is the Fermi momentum, \(\alpha\) quantifies the spin-orbit coupling strength, and \(d\) is the thickness of the spin-orbit coupled interface.
## III Observables
Our primary goal is to determine how triplet Cooper pair supercurrents can induce a magnetization in a normal metal by scattering on a spin-orbit coupled interface. To do so, we need the expressions for magnetization and current in quasiclassical theory. In this section, the quasiclassical expressions for the magnetization as well as spin and charge currents are presented. To simplify the analytical study later on we also present the observables expressed in the singlet-triplet decomposition in the weak proximity regime.
### Currents
The quasiclassical expression for current can be found by using the continuity equation \(\partial_{t}\rho+\nabla\cdot\mathbf{j}=S\), where \(\mathbf{j}\) is the current of a quantity, \(S\) is any source-term present, and \(\rho\) is the density of the quantity such as charge or spin. In a normal metal, the source term vanishes for the charge current.
For charge, the density can be written as \(\rho=e\left<\psi^{\dagger}\hat{\rho}_{3}\psi\right>\), while the spin density can be written as \(\langle\psi^{\dagger}\tfrac{1}{2}\hat{\rho}_{3}\mathbf{\tau}\psi\rangle\), where \(e<0\) is the electron charge and we set \(\hbar=1\). By using the Heisenberg equation of motion for the creation and annihilation operators \(\psi^{\dagger},\psi\) and rewriting the result in terms of the quasiclassical Green function, we get for the charge current density [30; 31]
\[\mathbf{J}=\frac{eN_{0}D}{4}\int\text{d}E\text{Tr}\left[\hat{\rho}_{3}(\bar{g} \bar{\nabla}\bar{g})^{K}\right]. \tag{4}\]
The spin current density is a tensor, with a direction of flow in real space and a polarization direction in spin space, obtained by replacing \(\hat{\rho}_{3}\) with \(\tfrac{1}{2}\hat{\rho}_{3}\mathbf{\tau}\).
As mentioned, we will write the expressions in terms of the singlet-triplet decomposition terms \(f_{s}\) and \(\mathbf{d}\): \(\underline{f}=(f_{s}+\mathbf{d}\cdot\underline{\sigma})i\sigma_{y}\). By linearizing in \(\hat{f}\), assuming \(\hat{g}=\hat{\rho}_{3}+\hat{f}\), using the relation \(\hat{g}^{A}=-\hat{\rho}_{3}(\hat{g}^{R})^{\dagger}\hat{\rho}_{3}\) and that in equilibrium we have \(\hat{g}^{K}=(\hat{g}^{R}-\hat{g}^{A})\tanh(\frac{\beta E}{2})\) we get the following expressions for the current density \(\mathbf{J}\) and spin current density \(\mathbf{J}_{s_{i}}\)
\[\begin{split}\mathbf{J}&=J_{0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Re}\big([f_{s}\nabla\tilde{f}_{s}-d_{z}\nabla\tilde{d}_{z}\\ &\quad-d_{x}\nabla\tilde{d}_{x}-d_{y}\nabla\tilde{d}_{y}]-[\cdots]\big),\end{split} \tag{5}\] \[\mathbf{J}_{s_{x}}=J_{s0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Im}\big([d_{y}\nabla\tilde{d}_{z}-d_{z}\nabla\tilde{d}_{y}]+[\cdots]\big), \tag{6}\] \[\mathbf{J}_{s_{y}}=J_{s0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Im}\big([d_{z}\nabla\tilde{d}_{x}-d_{x}\nabla\tilde{d}_{z}]+[\cdots]\big), \tag{7}\] \[\mathbf{J}_{s_{z}}=J_{s0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Im}\big([d_{x}\nabla\tilde{d}_{y}-d_{y}\nabla\tilde{d}_{x}]+[\cdots]\big), \tag{8}\]
where \(J_{0}=2eN_{0}D\Delta_{0}/\xi\) and \(J_{s0}=N_{0}D\Delta_{0}/\xi\). We have written the integral in terms of the dimensionless variable \(E/\Delta_{0}\), where \(\Delta_{0}=\Delta(T=0)\) is the zero-temperature energy gap, and the dimensionless spatial coordinate \(z/\xi\). This means that \(J/J_{0}\) is now dimensionless and can be used in the numerical study.
In the following sections, we will also use the charge current decomposed according to which component carries it. Therefore we introduce the notation
\[\mathbf{J}_{f_{s}}=J_{0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Re}\big([f_{s}\nabla\tilde{f}_{s}]-[\cdots]\big), \tag{9}\] \[\mathbf{J}_{d_{i}}=-J_{0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Re}\big([d_{i}\nabla\tilde{d}_{i}]-[\cdots]\big), \tag{10}\]
where \(\mathbf{J}_{f_{s}}\) is the charge current carried by the singlet component and \(\mathbf{J}_{d_{i}}\) is carried by the \(d_{i}\) component.
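As a sketch of how Eqs. (9)-(10) can be evaluated numerically once the amplitudes have been tabulated, the helper below assumes uniform energy and position grids (in units of \(\Delta_{0}\) and \(\xi\)) and reads the \(-[\cdots]\) shorthand as subtraction of the tilde-conjugated bracket; the function name and array layout are hypothetical.

```python
import numpy as np

def charge_current_components(E, z, f_s, f_s_t, d, d_t, beta):
    """
    J_{f_s}/J_0 and J_{d_i}/J_0 of Eqs. (9)-(10) on a grid.
    E          : positive energies in units of Delta_0, uniform grid, shape (nE,)
    z          : positions in units of xi, shape (nz,)
    f_s, f_s_t : singlet amplitude and its tilde conjugate, shape (nE, nz)
    d, d_t     : triplet d-vector and its tilde conjugate, shape (3, nE, nz)
    beta       : inverse temperature in units of 1/Delta_0
    """
    w = np.tanh(beta * E / 2)[:, None]           # thermal weighting factor
    dE = E[1] - E[0]                             # uniform grid spacing assumed
    grad = lambda x: np.gradient(x, z, axis=-1)  # derivative with respect to z/xi

    J_fs = dE * np.sum(w * np.real(f_s * grad(f_s_t) - f_s_t * grad(f_s)), axis=0)
    J_d = np.array([-dE * np.sum(w * np.real(d[i] * grad(d_t[i]) - d_t[i] * grad(d[i])), axis=0)
                    for i in range(3)])
    return J_fs, J_d
```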
### Magnetization
The quasiclassical expression for magnetization reads [31]
\[\mathbf{M}=\frac{g\mu_{B}N_{0}}{8}\int\mathrm{d}E\mathrm{Tr}(\hat{\tau}\hat{g}^{K}). \tag{11}\]
where \(g\) is the Lande \(g\)-factor, \(N_{0}\) is the normal-state density of states at the Fermi level, and \(\mu_{B}\) is the Bohr-magneton. It should be mentioned that this expression does not take into account the contribution from the entire Fermi sea, and is thus not suitable to compute the magnetization of a ferromagnetic metal. It is, on the other hand, suitable for computing any spin-magnetization arising in otherwise non-magnetic materials such as normal metals or conventional superconductors. This holds both for spin accumulations in non-equilibrium systems and proximity-induced equilibrium spin magnetizations.
Once more we want to express the magnetization in terms of the singlet-triplet decomposed components. This can be done similarly to the current, by using the expression for \(\hat{g}^{A}\) and the equilibrium expression for \(\hat{g}^{K}\). There are no first-order contributions in the anomalous Green function so we have to take into account the normalization condition, \((\hat{g}^{R})^{2}=1\). By this method, it can be shown that the magnetization, to the second order in the anomalous Green function, reads
\[M_{x}=M_{0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Re}(\tilde{d}_{x}f_{s}-d_{x}\tilde{f}_{s}), \tag{12}\] \[M_{y}=M_{0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Re}(\tilde{d}_{y}f_{s}-d_{y}\tilde{f}_{s}), \tag{13}\] \[M_{z}=M_{0}\int_{0}^{\infty}\frac{\mathrm{d}E}{\Delta_{0}}\tanh(\tfrac{\beta E}{2})\,\mathrm{Re}(\tilde{d}_{z}f_{s}-d_{z}\tilde{f}_{s}), \tag{14}\]
where \(M_{0}=g\mu_{B}N_{0}\Delta_{0}\).
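The induced magnetization of Eqs. (12)-(14) can be evaluated in the same spirit as the currents above; again the grids and names are hypothetical and a uniform positive-energy grid is assumed.

```python
import numpy as np

def induced_magnetization(E, f_s, f_s_t, d, d_t, beta):
    """
    M_i/M_0 of Eqs. (12)-(14) at a single position.
    E          : positive energies in units of Delta_0, uniform grid, shape (nE,)
    f_s, f_s_t : singlet amplitude and its tilde conjugate, shape (nE,)
    d, d_t     : triplet d-vector and its tilde conjugate, shape (3, nE)
    beta       : inverse temperature in units of 1/Delta_0
    """
    w = np.tanh(beta * E / 2)
    dE = E[1] - E[0]
    return np.array([dE * np.sum(w * np.real(d_t[i] * f_s - d[i] * f_s_t))
                     for i in range(3)])
```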
## IV Riccati parametrization
One particularly convenient way of parametrizing the Green function is the Riccati parametrization [32; 33; 34; 35]. The Riccati parametrization is advantageous for numerical computation because the parameters are bounded between 0 and 1. For the purpose of studying systems numerically, we here briefly outline the derivation of the Riccati parametrized Usadel equation, and give a detailed derivation of the Riccati parametrized boundary conditions in the Appendix, including the effect of a spin-orbit coupled interface, since the latter is not present in existing literature.
The retarded Green function is defined via parameters \(N\) and \(\gamma\) as follows.
\[\hat{g}=\begin{pmatrix}N(1+\gamma\tilde{\gamma})&2N\gamma\\ -2\tilde{N}\tilde{\gamma}&-\tilde{N}(1+\tilde{\gamma}\gamma)\end{pmatrix} \tag{15}\]
\(N\) and \(\gamma\) will only be used for \(2\times 2\) matrices, so we do not use any special notation to indicate their matrix nature. By the normalization condition \(\hat{g}^{2}=1\) it is seen that \(N=(1-\gamma\tilde{\gamma})^{-1}\) and \(\tilde{N}=(1-\tilde{\gamma}\gamma)^{-1}\).
A couple of useful identities can be found
\[N\gamma=\gamma\tilde{N},\ \ \tilde{N}\tilde{\gamma}=\tilde{\gamma}N. \tag{16}\]
Notice also that
\[\gamma\tilde{\gamma}=1-N^{-1}. \tag{17}\]
When writing the Usadel equation and the boundary conditions, in particular including the role of spin-orbit coupling, we will have to deal with derivatives. To simplify the notation we therefore introduce \(\gamma^{\prime}=\partial_{z}\gamma\). The following way of writing derivatives will also be useful
\[\partial_{z}N =N(\gamma^{\prime}\tilde{\gamma}+\gamma\tilde{\gamma}^{\prime})N \tag{18}\] \[\partial_{z}\tilde{N} =\tilde{N}(\tilde{\gamma}^{\prime}\gamma+\tilde{\gamma}\gamma^{ \prime})\tilde{N}\] (19) \[\partial_{z}(N\gamma) =N(\gamma^{\prime}+\gamma\tilde{\gamma}^{\prime}\gamma)\tilde{N}\] (20) \[\partial_{z}(\tilde{N}\tilde{\gamma}) =\tilde{N}(\tilde{\gamma}^{\prime}+\tilde{\gamma}\gamma^{\prime }\tilde{\gamma})N \tag{21}\]
all of which can be found by using the identities above.
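The parametrization and the identities above are easy to verify numerically; the following check uses random matrices standing in for \(\gamma\) and \(\tilde{\gamma}\) (they need not be related by tilde conjugation for these algebraic identities to hold):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.3 * (rng.random((2, 2)) + 1j * rng.random((2, 2)))
gamma_t = 0.3 * (rng.random((2, 2)) + 1j * rng.random((2, 2)))

N = np.linalg.inv(np.eye(2) - gamma @ gamma_t)
N_t = np.linalg.inv(np.eye(2) - gamma_t @ gamma)

# Retarded Green function of Eq. (15)
g = np.block([[N @ (np.eye(2) + gamma @ gamma_t), 2 * N @ gamma],
              [-2 * N_t @ gamma_t, -N_t @ (np.eye(2) + gamma_t @ gamma)]])

print(np.allclose(g @ g, np.eye(4)))                                # normalization g^2 = 1
print(np.allclose(N @ gamma, gamma @ N_t))                          # identity of Eq. (16)
print(np.allclose(gamma @ gamma_t, np.eye(2) - np.linalg.inv(N)))   # identity of Eq. (17)
```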
The general approach for identifying the Riccati parameterized Usadel equation and the Kuprianov-Lukichev boundary conditions is thoroughly described in [35]. The method consists of writing the terms of the Usadel equation in \(4\times 4\) matrix notation and then taking the upper-right \(2\times 2\) block minus the upper-left \(2\times 2\) block multiplied by \(\gamma\). More specifically, one takes \(\frac{1}{2}N^{-1}([\dots]_{12}-[\dots]_{11}\gamma)\) of the matrix equation \([\dots]\), where the subscript indicates a block matrix. Doing so, one eliminates terms such that \(\partial_{z}^{2}\gamma\) and \(\partial_{z}\gamma\) can be separated out.
The final result for the Riccati parameterized Usadel equation for a ferromagnet reads
\[\partial_{z}^{2}\gamma=-2iE\gamma-i\mathbf{h}\cdot(\tau\gamma-\gamma\tau^{*})-2 \gamma^{\prime}\tilde{N}\tilde{\gamma}\gamma^{\prime}, \tag{23}\]
where for a normal metal, the only adjustment needed is to set \(\mathbf{h}=0\). It can be shown that the Riccati parameterized bulk BCS singlet superconductor solution is
\[\gamma_{\mathrm{BCS}}=\begin{pmatrix}0&be^{i\phi}\\ -be^{i\phi}&0\end{pmatrix}, \tag{24}\]
where \(\phi\) is the phase and
\[b=\begin{cases}\dfrac{\Delta}{E+i\sqrt{\Delta^{2}-E^{2}}},&\text{for }|E|<\Delta,\\[2mm] \dfrac{\Delta\,\mathrm{sgn}(E)}{|E|+\sqrt{E^{2}-\Delta^{2}}},&\text{for }|E|>\Delta.\end{cases} \tag{25}\]
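A small numerical sketch of this bulk solution, using a complex square root with an assumed small broadening so that both branches of Eq. (25) are captured by a single expression, and checking that the familiar BCS density of states is reproduced:

```python
import numpy as np

def gamma_bcs(E, Delta, phi=0.0, broadening=1e-4):
    """Bulk BCS Riccati matrix of Eqs. (24)-(25); the broadening is an assumed regularization."""
    Ec = E + 1j * broadening
    b = Delta / (Ec + 1j * np.sqrt(Delta**2 - Ec**2))
    return b * np.exp(1j * phi) * np.array([[0.0, 1.0], [-1.0, 0.0]], dtype=complex)

def dos_bcs(E, Delta):
    """Normalized density of states, N(E)/N_0 = Re Tr[N(1 + gamma gamma_t)]/2."""
    gam = gamma_bcs(E, Delta)
    gam_t = np.conj(gamma_bcs(-E, Delta))  # tilde conjugation
    N = np.linalg.inv(np.eye(2) - gam @ gam_t)
    return np.real(np.trace(N @ (np.eye(2) + gam @ gam_t))) / 2

for E in [0.0, 0.5, 0.9, 1.5, 2.0]:
    print(f"E = {E:3.1f} Delta : N(E)/N_0 = {dos_bcs(E, 1.0):.3f}")
```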
### Riccati parametrization of boundary conditions
When deriving the Riccati parametrization of the specific boundary conditions used in this work, it is useful to note that many of the terms in both the spin-orbit coupling and spin-active boundary conditions have the same form. Here a practical method for Riccati parametrizing this type of term is presented in order to simplify the calculations.

The form which many of the terms take is
\[[\hat{g}_{L},\hat{U}], \tag{26}\]
where \(\hat{U}\) is a matrix whose exact form depends on the specific boundary conditions. In general, we write \(\hat{U}\) as
\[\hat{U}=\begin{pmatrix}\hat{U}_{11}&\hat{U}_{12}\\ \hat{U}_{21}&\hat{U}_{22}\end{pmatrix}. \tag{27}\]
By Riccati parametrization, it can be found from the left-hand side of the boundary equation \(\hat{g}\partial_{z}\hat{g}\) that
\[\frac{1}{2}N_{L}^{-1}([\hat{g}_{L}\partial_{z}\hat{g}_{L}]_{12}-[\hat{g}_{L} \partial_{z}\hat{g}_{L}]_{11}\gamma_{L})=\partial_{z}\gamma_{L}, \tag{28}\]
as seen in [35]. The subscript indicates which block of the matrix is meant, as in Equation (27).
To obtain the complete boundary conditions, we also have to perform the same operation on the right-hand side as on the left side. This means we need to take \(\frac{1}{2}N_{L}^{-1}([\dots]_{12}-[\dots]_{11}\gamma_{L})\) of every term on the right-hand side of the boundary conditions. Thus we start by finding a procedure for all terms that come in the form of Eq. (26).
\[\frac{1}{2}N_{L}^{-1}([\hat{g}_{L},\hat{U}]^{1,2}-[\hat{g}_{L}, \hat{U}]^{1,1}\gamma_{L})\] \[=\frac{1}{2}N_{L}^{-1}(\underline{g}_{L}\hat{U}_{12}+\underline{ f}_{L}\hat{U}_{22}-\hat{U}_{11}\underline{f}_{L}+\hat{U}_{12}\underline{\tilde{g}}_{L}\] \[-(\underline{g}_{L}\hat{U}_{11}+\underline{f}_{L}\hat{U}_{21}- \hat{U}_{11}\underline{g}_{L}+\hat{U}_{12}\underline{\tilde{f}}_{L})\gamma_{ L}).\]
As a next step we collect the terms with the same \(U_{ij}\) matrices and insert \(\underline{f}=2N\gamma\), \(\underline{g}=2N-1\). The \(\hat{U}_{11}\) terms can be written as
\[\frac{1}{2}N_{L}^{-1}(-\hat{U}_{11}\underline{f}_{L}-\underline{g}_{L}\hat{U }_{11}\gamma_{L}+\hat{U}_{11}\underline{g}_{L}\gamma_{L})\] \[=\frac{1}{2}N_{L}^{-1}(-\hat{U}_{11}2N_{L}\gamma_{L}-(2N_{L}-1) \hat{U}_{11}\gamma_{L}+\hat{U}_{11}(2N_{L}-1)\gamma_{L})\] \[=-\hat{U}_{11}\gamma_{L}. \tag{29}\]
In the same manner, the \(\hat{U}_{12}\) terms become
\[\frac{1}{2}N_{L}^{-1}(\underline{g}_{L}\hat{U}_{12}+\hat{U}_{12}\underline{\tilde{g}}_{L}-\hat{U}_{12}\underline{\tilde{f}}_{L}\gamma_{L})\] \[=\frac{1}{2}N_{L}^{-1}((2N_{L}-1)\hat{U}_{12}+\hat{U}_{12}(2\tilde{N}_{L}-1)-\hat{U}_{12}2\tilde{N}_{L}\tilde{\gamma}_{L}\gamma_{L})\] \[=\hat{U}_{12}. \tag{30}\]
The \(\hat{U}_{21}\) term should also be written in terms of \(\gamma\) as
\[\frac{1}{2}N_{L}^{-1}(-\underline{f}_{L}\hat{U}_{21}\gamma_{L}) \tag{31}\] \[=-\gamma_{L}\hat{U}_{21}\gamma_{L}.\]
Finally, the \(\hat{U}_{22}\) term can be written as
\[\frac{1}{2}N_{L}^{-1}(\underline{f}_{L}\hat{U}_{22})=\gamma_{L}\hat{U}_{22}. \tag{32}\]
Putting everything together we get
\[\frac{1}{2}N_{L}^{-1}\left([\hat{g}_{L},\hat{U}]^{1,2}-[\hat{g}_{L },\hat{U}]^{1,1}\gamma_{L}\right)=-\hat{U}_{11}\gamma_{L}+\hat{U}_{12}\] \[-\gamma_{L}\hat{U}_{21}\gamma_{L}+\gamma_{L}\hat{U}_{22}.\]
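This projection identity is easy to check directly with random matrices; the sketch below builds \(\hat{g}_{L}\) from random \(\gamma_{L},\tilde{\gamma}_{L}\) via Eq. (15), takes an arbitrary \(\hat{U}\), and compares both sides:

```python
import numpy as np

rng = np.random.default_rng(1)
rnd = lambda: rng.random((2, 2)) + 1j * rng.random((2, 2))

gamma, gamma_t = 0.3 * rnd(), 0.3 * rnd()
N = np.linalg.inv(np.eye(2) - gamma @ gamma_t)
N_t = np.linalg.inv(np.eye(2) - gamma_t @ gamma)
gL = np.block([[N @ (np.eye(2) + gamma @ gamma_t), 2 * N @ gamma],
               [-2 * N_t @ gamma_t, -N_t @ (np.eye(2) + gamma_t @ gamma)]])

U = np.block([[rnd(), rnd()], [rnd(), rnd()]])   # arbitrary 4x4 matrix U
C = gL @ U - U @ gL                              # the commutator [g_L, U]

lhs = 0.5 * np.linalg.inv(N) @ (C[:2, 2:] - C[:2, :2] @ gamma)
U11, U12, U21, U22 = U[:2, :2], U[:2, 2:], U[2:, :2], U[2:, 2:]
rhs = -U11 @ gamma + U12 - gamma @ U21 @ gamma + gamma @ U22
print(np.allclose(lhs, rhs))
```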
We note that this also can be used for the Kuprianov-Lukichev boundary conditions [36], where one would have \(\hat{U}=G_{0}\hat{g}_{R}\). From this, and using the identities in Eqs. (16) and (17), the Kuprianov-Lukichev boundary conditions can be found to be
\[\partial_{z}\gamma_{L}=G_{0}(1-\gamma_{L}\tilde{\gamma}_{R})N_{R }(\gamma_{R}-\gamma_{L}) \tag{33}\] \[\partial_{z}\gamma_{R}=G_{0}(1-\gamma_{R}\tilde{\gamma}_{L})N_{L }(\gamma_{R}-\gamma_{L}). \tag{34}\]
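In code, Eqs. (33)-(34) translate directly into the interface derivatives of \(\gamma\); a minimal sketch follows (the corresponding conditions for \(\tilde{\gamma}\) follow by tilde conjugation, and the function name is hypothetical):

```python
import numpy as np

def kl_boundary(gamma_L, gamma_t_L, gamma_R, gamma_t_R, G0):
    """Interface derivatives d(gamma_L)/dz and d(gamma_R)/dz from Eqs. (33)-(34)."""
    I = np.eye(2)
    N_L = np.linalg.inv(I - gamma_L @ gamma_t_L)
    N_R = np.linalg.inv(I - gamma_R @ gamma_t_R)
    dgamma_L = G0 * (I - gamma_L @ gamma_t_R) @ N_R @ (gamma_R - gamma_L)
    dgamma_R = G0 * (I - gamma_R @ gamma_t_L) @ N_L @ (gamma_R - gamma_L)
    return dgamma_L, dgamma_R
```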
In the Appendix, both the spin-orbit coupling and spin-active boundary conditions will be written in the Riccati parameterized form using the method described here.
## V Setup
The system we are studying is illustrated in Fig. 2. The region in which the supercurrents will flow is drawn to the left and will therefore be referred to as (L). Similarly, the region where induced magnetization will be studied is drawn to the right and is called (R). In this study, we let the region (R) be a normal metal, and (L) will either be a normal metal or a ferromagnet depending on the situation. The \(z\) position where (R) is connected to (L) is called \(z_{0}\). In most situations, we use \(z_{0}=l/2\), and it will be specified when other values are used. The (L) and (R) regions are chosen to have the same length, but this is not required for the effects predicted here. The grey region between (L) and (R) is the spin-orbit coupled material, for which the spin-orbit coupling boundary conditions will be used. At \(z=0\) and \(z=l\), conventional BCS superconductors, S1 and S2, are attached to the (L) material. These superconductors are the sources for Cooper pairs in the rest of the system. Between (L) and the superconductor, spin-active interfaces are introduced and are marked as grey regions in the figure. The interface magnetizations of the spin-active interfaces are described by unit vectors \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\).

Figure 2: (Color online) The system in which supercurrents and induced magnetization are investigated. The material to the left, (L), is the material in which the supercurrent will flow. This material will be either a ferromagnet or a normal metal. To get the current flowing, two conventional superconductors are connected to (L), and a phase difference between them is applied. In between the superconductors and (L), spin active interfaces are included for the purpose of creating triplet Cooper pairs and thereby also triplet supercurrents. A normal metal, (R), is connected to (L) through a spin-orbit coupled interface. This material borders vacuum at \(y=l\).
To create a pure singlet charge current, the interface magnetizations are switched off (\(\mathbf{m}_{1}=\mathbf{m}_{2}=0\)) which is equivalent to using the regular Kuprianov-Lukichev boundary conditions. Furthermore, (L) is a normal metal in the singlet current case. As the superconductors S1 and S2 only contain singlet Cooper pairs, no triplets will be induced in (L) in this scenario. We want a supercurrent to flow through the (L) region. This is achieved by applying a phase difference, \(\phi\), between S1 and S2.
To create the triplet charge current and the spin current, the interface magnetizations are switched on. The triplet charge current is created in the same way as in the singlet case: by applying a phase difference.
For the purpose of discovering effects caused solely by triplets, a ferromagnetic exchange field is included in the material (L). Because of the exchange field, the singlet becomes short-ranged and dies out rapidly in the (L) region. In an experimental setup, it would be of importance to separate the intrinsic magnetization coming from an exchange field and the magnetization induced by supercurrents. Therefore the exchange field is modeled to be spatially varying in (L) such that it is zero in the middle region, but large at the sides as shown in Fig. 4. In practice, this can be realized by attaching thin ferromagnetic regions with a strong exchange field right next to the superconductors and then having a long normal metal region separating the ferromagnets. In this way, the singlets and short-range triplets are filtered out by the thin, strongly polarized ferromagnetic regions, whereas the long-range triplets produced in the ferromagnetic region remain and can propagate through the normal metal. More specifically, it is the triplet component that is spin-neutral in the exchange field orientation \(\mathbf{d}||\mathbf{h}\) that is short-ranged, and the others are long-ranged.
As we will discuss in the following analytical study, the difference between the \(d_{z}\) and \(d_{y}\) components is quite insignificant with respect to the spin-orbit coupled interface, and it is instead the \(d_{x}\) component that is the most relevant. Therefore, we focus most of the discussion on the case where the interface magnetizations lie in the \(xy\)-plane, and the exchange field points in the \(z\)-direction, \(\mathbf{h}=(0,0,h(z))\). We do, however, include rotation of both \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) with an angle \(\alpha\) around the \(z\)-axis, and the angle between \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) which we call \(\theta\). These angles are illustrated in Fig. 3. We also note here that the directions chosen are advantageous for an experimental setup, as rotating the interface magnetizations in-plane is a simpler task than driving them out of the \(xy\)-plane.
We also remark that rotating both interface magnetizations, \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\), by an angle \(\alpha\) is equivalent to rotating material (L) around the \(z\)-axis. Thus, rotating the interface magnetizations or attaching the (R) region to the (L) region at different angles corresponds to the same physical system.
In the numerical study, we have used the interface parameters \(\frac{T_{0}^{2}}{D}=0.2\), \(\frac{2}{3}\frac{T_{1}^{2}p_{F}^{2}}{D}=mT_{0}T_{1}=d\alpha=0.1\), \(G_{0}=1/\xi_{s}\). To create pure charge currents, we set \(P=0\) and \(G_{\phi}=3G_{0}\). We also checked that the results are qualitatively similar if one instead uses \(P\neq 0,G_{\phi}=0\). If both \(P\) and \(G_{\phi}\) are finite, spin supercurrents in addition to charge supercurrents flow in the system. Furthermore, a combination of a length of material (L), \(l\), that is short enough to preserve some of the triplets, and a strength of \(\mathbf{h}\) that is large enough to remove the singlet had to be found. Setting the length of material (L) to eight times the bulk superconducting coherence length, \(l=8\xi_{s}=8\sqrt{D/\Delta_{0}}\), and \(h(z)\) as in Figure 4 was found to work well.
Finally, we emphasize that our main aim here is to determine the qualitative behavior of the spin-signal induced by the charge and spin currents, such as when it exists and how it changes depending on the polarization of the triplet Cooper pairs, and not predict its precise magnitude, which will depend on the material choices.
### Numerical method
In general, the materials on both sides of an interface are affected by each other. However, here it is assumed for simplicity that the inverse proximity effect that material (L) induces in S1 and S2 is negligible, and only the effect from the superconductor on the material (L) is considered. The superconductors are thus assumed to be the bulk superconductors. Such an approximation is valid when one of the materials is much more disordered than the other [37]. The bulk superconductor Green function can by this assumption be used directly in the boundary conditions used to solve system (L). Similarly, the effect from (R) on (L) is neglected, and the solution for system (L) at \(z=z_{0}\) is used directly into the spin-orbit coupling boundary conditions for (R). Furthermore, the materials are individually modeled as one dimensional, meaning that material (L) is extended in the \(z\)-direction and (R) in the \(y\)-direction.
Figure 3: (Color online) The figure shows the definition of the angles \(\alpha\) and \(\theta\). \(\alpha\) is defined as the angle between the \(x\)-axis and the interface magnetization of the first interface, \(\mathbf{m}_{1}\). \(\theta\) is the angle between the two interface magnetizations.
The system which has to be solved is now the one-dimensional Usadel equation in two regions. This is a second-order differential equation for two variables \(\gamma\) and \(\tilde{\gamma}\), and to solve it numerically we rewrite it as four first-order differential equations writing
\[\begin{pmatrix}\gamma^{\prime}\\ \gamma^{\prime\prime}\\ \tilde{\gamma}^{\prime}\\ \tilde{\gamma}^{\prime\prime}\end{pmatrix}=f\begin{pmatrix}\gamma\\ \gamma^{\prime}\\ \tilde{\gamma}\\ \tilde{\gamma}^{\prime}\end{pmatrix}, \tag{35}\]
where \(f(\ldots)\) is a function that returns the derivative of the input. The function will thus return \(\gamma^{\prime}\) as the derivative for \(\gamma\) and use the Riccati parameterized Usadel equation to find the derivative of \(\gamma^{\prime}\) and similarly for \(\tilde{\gamma}\). Thus we have a system of 16 complex connected differential equations, four elements in each of the matrices \(\gamma\), \(\gamma^{\prime}\), \(\tilde{\gamma}\), and \(\tilde{\gamma}^{\prime}\). The boundary conditions give restrictions to \(\gamma^{\prime}\) and \(\tilde{\gamma}^{\prime}\) on each side of the material.
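As an illustration of Eq. (35), the following minimal Python sketch shows one way to pack the four \(2\times 2\) complex matrices \(\gamma\), \(\gamma^{\prime}\), \(\tilde{\gamma}\), \(\tilde{\gamma}^{\prime}\) into a single real state vector and to build the first-order right-hand side from a routine supplying the second derivatives. The callback `usadel_second_derivative` is a hypothetical placeholder for the Riccati-parametrized Usadel equation; this is not the implementation actually used here.

```python
import numpy as np

def pack_state(gamma, dgamma, gamma_t, dgamma_t):
    # stack the four 2x2 complex matrices into one real vector of length 32
    # (real parts first, imaginary parts last)
    z = np.concatenate([m.ravel() for m in (gamma, dgamma, gamma_t, dgamma_t)])
    return np.concatenate([z.real, z.imag])

def unpack_state(y):
    # inverse of pack_state
    z = y[:16] + 1j * y[16:]
    return tuple(z[4 * i:4 * (i + 1)].reshape(2, 2) for i in range(4))

def first_order_rhs(y, usadel_second_derivative):
    # first-order form of Eq. (35): the derivative of (gamma, gamma') is (gamma', gamma''),
    # with gamma'' and gamma~'' supplied by the Riccati-parametrized Usadel equation
    gamma, dgamma, gamma_t, dgamma_t = unpack_state(y)
    ddgamma, ddgamma_t = usadel_second_derivative(gamma, dgamma, gamma_t, dgamma_t)
    return pack_state(dgamma, ddgamma, dgamma_t, ddgamma_t)
```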
To solve the system we have used the boundary value problem solver from SciPy [38]. To stabilize the solver the real and imaginary part are split such that the 16 complex equations become 32 real ones. To increase numerical stability, inelastic scattering is also included by adding a small imaginary component to the energy, which here is set to \(\delta/\Delta_{0}=0.01\) as used in [39]. This imaginary component is referred to as the Dynes parameter and is often used to model experiments [40]. In essence, it has the effect of broadening the spectral features, such as the peaks of the Green functions that occur at \(E=0\) and \(E=\Delta\). As mentioned, the interface magnetizations are rotated in the \(xy\)-plane in the triplet cases. A small trick was made for solving the system with these rotations. To save computation time, the Usadel equation only has to be solved once in material (L) for one given \(\theta\) and one given set of interface parameters. The angle \(\alpha\) can simply be taken into account by rotating the triplet components \(d_{x,L}\) and \(d_{y,L}\), such that
\[\begin{split} d_{x,L}(\alpha)&=d_{x,L}(0)\cos(\alpha )+d_{y,L}(0)\sin(\alpha),\\ d_{y,L}(\alpha)&=-d_{x,L}(0)\sin(\alpha)+d_{y,L}(0) \cos(\alpha).\end{split} \tag{36}\]
This means that instead of solving the system in (L) for every \(\alpha\), we solve it once and then rotate the solution to proceed studying the material (R). For the material (R), however, the Usadel equation needs to be solved separately for every value of \(\alpha\). We have verified the integrity of our numerical method by _e.g._ checking that current conservation is satisfied and that we reproduce known results from the literature, such as the induced magnetization from a singlet charge current in [29].
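The reuse trick of Eq. (36) amounts to a simple in-plane rotation of the stored \(\alpha=0\) profiles. A minimal Python sketch (the function name is ours and purely illustrative):

```python
import numpy as np

def rotate_inplane_triplets(d_x_0, d_y_0, alpha):
    """Rotate the alpha = 0 triplet profiles d_x(z), d_y(z) of material (L)
    by an angle alpha about the z-axis, as in Eq. (36)."""
    d_x = d_x_0 * np.cos(alpha) + d_y_0 * np.sin(alpha)
    d_y = -d_x_0 * np.sin(alpha) + d_y_0 * np.cos(alpha)
    return d_x, d_y
```

The rotated profiles are then fed into the spin-orbit coupling boundary conditions for (R), which still has to be solved separately for every value of \(\alpha\).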
## VI A brief analytical study
To gain more physical insight before proceeding to the numerical results, we here analyze the weak proximity effect regime where the equations can be linearized in the anomalous Green function. We will see that there is a clear relation between singlet charge current, \(J_{f_{s}}\), and the induced magnetization in the \(x\)-direction, \(m_{x}\). A similar relation also exists between a triplet charge current carried by the \(d_{x}\) triplet, \(J_{d_{x}}\), and \(m_{x}\).
In the weak proximity regime, we assume that the quasiclassical Green function is close to the normal metal solution, \(\hat{g}_{N}=\hat{\rho}_{3}\), but with a small superconducting part, \(\hat{f}\), induced by the proximity superconductors. We thereby assume that we can use the weak proximity solution \(\hat{g}\approx\hat{\rho}_{3}+\hat{f}\). Using the singlet-triplet decomposition, the Rashba SOC boundary conditions can be written as follows, keeping only the terms of the first order in \(\hat{f}\)
\[\begin{split}\partial_{y}f_{s,R}=&-mT_{0}T_{1}\partial_{z}d_{x,L}\\ &-2\left(\frac{T_{0}^{2}}{D}-2\frac{T_{1}^{2}P_{F}^{2}}{D}\right)\left(f_{s,L}-f_{s,R}\right)\end{split} \tag{37}\] \[\begin{split}\partial_{y}d_{z,R}=&-\left(8d\alpha-2\frac{T_{0}^{2}}{D}+4\frac{2}{3}\frac{T_{1}^{2}P_{F}^{2}}{D}\right)d_{z,R}\\ &-(2d_{z,L})\left(\frac{T_{0}^{2}}{D}+2\frac{2}{3}\frac{T_{1}^{2}P_{F}^{2}}{D}\right)\end{split} \tag{38}\] \[\begin{split}\partial_{y}d_{x,R}=&-4\partial_{z}f_{s,L}mT_{0}T_{1}-\left(4\frac{2}{3}\frac{T_{1}^{2}P_{F}^{2}}{D}+4d\alpha-2\frac{T_{0}^{2}}{D}\right)d_{x,R}\\ &-(2d_{x,L})\frac{T_{0}^{2}}{D}\end{split} \tag{39}\] \[\partial_{y}d_{y,R}=-\left(4\frac{2}{3}\frac{T_{1}^{2}P_{F}^{2}}{D}+4d\alpha-2\frac{T_{0}^{2}}{D}\right)d_{y,R}-2\frac{T_{0}^{2}}{D}d_{y,L} \tag{40}\]
From this, we see that there is a link between \(f_{s,R}\) and \(\partial_{z}d_{x,L}\) and the other way around between \(d_{x,R}\) and \(\partial_{z}f_{s,L}\). On the other hand, to first order, \(d_{y,R}\) can only be induced by \(d_{y,L}\) and \(d_{z,R}\) only by \(d_{z,L}\).
Furthermore, we can use the solution to the linearized Usadel equation in a normal metal:
\[\begin{split} f_{s}&=A_{\text{s}}\mathrm{e}^{-ky}+B_ {s}\mathrm{e}^{ky},\\ d_{x}&=A_{x}\mathrm{e}^{-ky}+B_{x}\mathrm{e}^{ky}, \end{split} \tag{41}\]
where \(k=\sqrt{-2iE/D}\) and \(A_{s}\), \(A_{x}\), \(B_{s}\), and \(B_{x}\) are constants that have to be determined using the boundary conditions. If we assume that \(l\to\infty\), the \(B\) factors have to be zero in order to avoid a diverging function.
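For orientation, the decay constant in Eq. (41) can be evaluated directly. The short Python sketch below adds the Dynes broadening in the same way as in the numerical method and picks the root with positive real part so that \(\mathrm{e}^{-ky}\) decays into (R); the choice \(D=\Delta_{0}=1\) and the example energy \(E=0.14\Delta_{0}\) are assumptions made only for this illustration.

```python
import numpy as np

def decay_constant(E, D, dynes=0.0):
    # k = sqrt(-2i (E + i*dynes) / D) for the linearized Usadel equation;
    # choose the root with Re(k) > 0 so that exp(-k*y) decays with increasing y
    k = np.sqrt(np.complex128(-2j * (E + 1j * dynes) / D))
    return k if k.real > 0 else -k

D = Delta0 = 1.0
k = decay_constant(0.14 * Delta0, D, dynes=0.01 * Delta0)
print(f"decay length 1/Re(k) ~ {1.0 / k.real:.2f} in units of xi_s = sqrt(D/Delta0)")
```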
Figure 4: (Color online) The spatial profile of the exchange field function \(h(z)\) which is applied to the left material (L) in the triplet current cases. This ensures that only the long-ranged triplet component survives at the contact with the normal metal through the Rashba spin-orbit coupled interface. In an experiment, one would use a strong ferromagnet/normal metal bilayer to achieve an exchange field spatial profile serving the same purpose. In this work, the spatial profile of \(h\) is continuous rather than abrupt for numerical convenience, as one can then evaluate the Usadel equation in a single layer.
### Singlet current
We start by discussing the singlet charge current case. In this scenario, there are no triplet components present in (L). Thus, there is no induced \(d_{y}\) or \(d_{z}\) on the right side of the interface. Furthermore, there is no \(d_{x}\) and thus no \(\partial_{z}d_{x}\) on the left side. Assuming for simplicity \(l\rightarrow\infty\), from Eq. (37) and Eq. (39) we then see that
\[f_{s,R} \propto f_{s,L}, \tag{42}\] \[d_{x,R} \propto\partial_{z}f_{s,L}. \tag{43}\]
since in this case \(\partial_{y}f_{s,R}\propto f_{s,R}\) and \(\partial_{y}d_{x,R}\propto d_{x,R}\). Note that the tilde-conjugated components have the same proportionality between the left and right sides. Thus, it follows that
\[f_{s,L}\partial_{z}\tilde{f}_{s,L}-\tilde{f}_{s,L}\partial_{z}f_{s,L}\propto f_{s,R}\tilde{d}_{x,R}-\tilde{f}_{s,R}d_{x,R}. \tag{44}\]
From Eq. (9) and Eq. (12) we see that the real part of the left side in the expression above is exactly what occurs in the expression for a singlet current in material (L), \(J_{f_{s}}\). We also see that the real part of the right side in the expression is exactly what occurs in the induced magnetization expression \(m_{x,R}\).
From the analytical expression, we conclude that there is a clear connection between induced magnetization and the singlet charge current. We note, however, that \(f_{s,L}\partial_{z}\tilde{f}_{s,L}-\tilde{f}_{s,L}\partial_{z}f_{s,L}\) and \(f_{s,R}\tilde{d}_{x,R}-\tilde{f}_{s,R}d_{x,R}\) might not have the same phase, such that the real parts of these expressions might not be directly proportional.
### Triplet charge current
Next, considering a triplet current, we see that the currents \(J_{d_{y}}\) and \(J_{d_{z}}\), which contain varying \(d_{y}\) and \(d_{z}\) components in (L), do not induce a singlet in (R), at least to the first order in \(f\). We can thus conclude that \(d_{y}\) and \(d_{z}\) induce no magnetization in (R). For the last type of triplet charge current, \(J_{d_{x}}\), the same argumentation as for the singlet charge current applies. The difference is that in this case we have
\[d_{x,R} \propto d_{x,L}, \tag{45}\] \[f_{s,R} \propto\partial_{z}d_{x,L}. \tag{46}\]
This gives
\[d_{x,L}\partial_{z}\tilde{d}_{x,L}-\tilde{d}_{x,L}\partial_{z}d_{x,L}\propto f_{s,R}\tilde{d}_{x,R}-\tilde{f}_{s,R}d_{x,R}. \tag{47}\]
Seemingly, \(J_{d_{x}}\) and \(J_{f_{s}}\) then induce the same magnetization.
A triplet charge current does, however, not have to be carried by a pure \(d_{x}\), \(d_{y}\), or \(d_{z}\). In general, we can have an arbitrary \(\mathbf{d}\)-vector carrying a pure charge current. Keeping to the situation where the triplet vector \(\mathbf{d}\) points in the \(xy\)-plane, we would get a current carried partially by \(d_{x}\) and partially by \(d_{y}\). In this case, the part carried by \(d_{x}\) induces a singlet component in (R), whereas the \(d_{y}\) part induces a \(d_{y}\) in (R) as well. By this argument, a magnetization in the \(y\)-direction would also be induced. Thus we see that with a triplet charge current, magnetization can also be induced in more than one direction. This is contrary to the singlet charge current.
### Spin current
We also briefly comment on how the induced magnetization changes when there is not only a charge supercurrent, but also a spin supercurrent flowing in (L). From section III we see that there is no \(d_{x}\) component involved in the \(x\)-polarized current \(J_{s_{x}}\). From this spin current, no singlet can be induced in (R), and thus also no magnetization. Both for the \(y\)-polarized spin current, \(J_{s_{y}}\), and the \(z\)-polarized spin current \(J_{s_{z}}\), a \(d_{x}\) component is involved. Since the \(d_{y}\) and \(d_{z}\) components are treated similarly by the spin-orbit coupled interface, we settle for studying only \(J_{s_{z}}\). Presumably, \(J_{s_{y}}\) would induce a rotated, but similar magnetization.
The \(z\)-polarized spin current, \(J_{s_{z}}\) is seen in Eq. (8) to be dependent on the imaginary part of the following expression
\[d_{x,L}\partial_{z}\tilde{d}_{y,L}-d_{y,L}\partial_{z}\tilde{d}_{x,L}+\tilde{ d}_{x,L}\partial_{z}d_{y,L}-\tilde{d}_{y,L}\partial_{z}d_{x,L}. \tag{48}\]
Notice that from the linearized spin-orbit coupled boundary conditions, it is found that
\[d_{y,L}\partial_{z}\tilde{d}_{x,L}\propto d_{y,R}\tilde{f}_{s,R}. \tag{49}\]
Notice also that the real part of the right side occurs in the \(m_{y}\) expression. The relation here is much less direct than in the charge current and \(m_{x}\) case. In the expression for \(m_{y}\), the term and the tilde-conjugate of the term are subtracted from each other, but in the \(J_{s_{z}}\) expression they are added together. Furthermore, \(m_{y}\) depends on the real part of the expression whereas \(J_{s_{z}}\) depends on the imaginary part. It therefore follows that no clear connection exists between a pure spin current and an induced magnetization via the spin-orbit coupled interface.
### Linearized spin-active boundary conditions
Before proceeding to the numerical study, we also look at the linearized spin-active boundary conditions in order to understand how the currents are created. These boundary conditions will only be used in interfaces where one side is a singlet superconductor. We consider below for concreteness the right interface and thus remove all triplets on the right side of the interface of the equations below. The linearized equations in the singlet-triplet decomposition notation read
\[\partial_{z}f_{s,L}= -2\mathbf{m}^{2}(f_{s,L}+f_{s,R})G_{1}+2G_{0}f_{s,R}-2G_{0}f_{s,L}\] \[-2G_{\phi}(d_{x,L}m_{x}+d_{y,L}m_{y}+d_{z,L}m_{z}). \tag{50}\] \[\partial_{z}d_{x,L}= (-\mathbf{m}^{2}G_{1}-2G_{0})d_{x,L}+(4id_{y,L}m_{z}-4id_{z}m_{y})G_{\rm MR}\] \[-(2G_{\phi})f_{s,L}m_{x},\] (51) \[\partial_{z}d_{y,L}= (-2\mathbf{m}^{2}G_{1}-2G_{0})d_{y,L}+(4id_{z}m_{x}-4im_{z}d_{x})G_{\rm MR}\] \[-(2G_{\phi})f_{s,L}m_{y},\] (52) \[\partial_{z}d_{z,L}= (-2\mathbf{m}^{2}G_{1}-2G_{0})d_{z,L}+(4id_{x}m_{y}-4id_{y}m_{x})G_{\rm MR}\] \[-(2G_{\phi})f_{s,L}m_{z}. \tag{53}\]
For studying the triplet scenarios, material (L) will be a ferromagnet. Except for when we want to determine the effect
of a current \(J_{d_{z}}\), the exchange field will be oriented in the \(z\)-direction which induces a \(d_{z}\) component in (L). As mentioned, this \(d_{z}\) triplet will then be short-ranged, and thus we focus this discussion on the \(d_{x}\) and \(d_{y}\) components induced by the spin-active boundary.
The terms that create the \(d_{x}\) and \(d_{y}\) triplets are the \(G_{\phi}\)- and \(G_{\text{MR}}\)-terms. If we only have the \(G_{\phi}\) term, we see that the induced \(\mathbf{d}\) is parallel to the interface magnetization. If we, however, turn off \(G_{\phi}\) but turn on \(G_{\text{MR}}\) (by letting \(P\neq 0\)), we see that \(\mathbf{d}\) is orthogonal to the interface magnetization, since \(d_{z}\) will already be present because of the exchange field in (L).
If we thus let the interface magnetization be parallel to each other and only include one of the terms such that we have either \(G_{\phi}=0\) or \(P=0\) the behavior of the current closely resembles that of a conventional Josephson junction, except that it is only the triplet and not the singlet that is long-ranged. Applying a phase difference between the BCS superconductors S1 and S2, it is reasonable to expect that in this case we can create a pure triplet charge current in (L). As we, in that case, are able to create only one long-ranged triplet component we know there can not be any spin current. When both \(G_{\phi}\) and \(G_{\text{MR}}\) are present, or \(\theta\neq 0\), we see that both \(d_{x}\) and \(d_{y}\) will be created, and we can in general also find a spin supercurrent.
## VII Numerical results
### Singlet charge current
The proximity effect and magnetization induced by the singlet supercurrent is explored by removing the interface magnetizations and the exchange field in (L). We numerically determine the magnetization in the full proximity effect regime, using the non-linear Riccati-parametrized equations, and compare it to the results expected from the analytical treatment. Further, the role of the symmetry of the anomalous Green function under the \(\hat{\dots}\) operation is discussed. Finally, we discuss the spatial dependence of the magnetization induced in the normal metal (R) and also check the temperature dependency.
The current and magnetization as a function of \(\phi\) are shown in Figs. 5 and 6. The magnetization shown in the figure is evaluated at \(y=0\). As expected from the analytical study, no magnetization was induced in the \(y\) or \(z\) direction. Furthermore, it is seen from the figure that the induced \(m_{x}\) in (R) is proportional to the singlet current in (L). We remark that this is consistent with Ref. [29], which found, using an effective model for the Green function in (L), that \(m_{x}\propto J\). The effective model used in that paper has a vanishing derivative of \(f_{s}\) when \(J=0\), as it does not consider that the absolute value of \(f_{s}\) can be decaying. Inside a normal metal, however, we expect the Cooper pair wavefunction to decay away from the superconductor interfaces. Even when the current is zero, the singlet component can therefore still have a finite derivative. Therefore, it is natural to ask whether the disappearance of the magnetization at \(J=0\) is caused by the spatial gradient of the \(f_{s,L}\) and \(d_{x,L}\) components vanishing at \(\phi=0,\pi\), or by the symmetry properties under the \(\hat{\dots}\) operation of the triplet and singlet components. We have performed this analysis in the Appendix, and the conclusion is that the \(\hat{\dots}\)-operation symmetry, which is influenced by whether or not a supercurrent flows, causes the magnetization to vanish at \(\phi=0,\pi\). Note that the absence of induced magnetization does not necessarily imply the absence of a triplet being induced. The supercurrent-induced magnetization decays monotonically in the (R) normal metal, as shown in Fig. 7.
Finally, we consider the temperature dependence of the supercurrent-induced magnetization. The quasiclassical magnetization from Eq. (12) has a factor \(\tanh(\beta E/2)\) in the integrand. Varying the temperature changes how the contribution
\[\text{Re}(f_{s}(E)\tilde{d}_{x}(E)-\tilde{f}_{s}(E)d_{x}(E)) \tag{54}\]
is weighted with respect to energy. A relevant question to ask is therefore how the temperature affects the magnetization. The temperature-dependence of the energy gap must be taken into account. We chose a standard interpolation formula which is valid for \(T\in(0,T_{c})\)[41]:
Figure 5: (Color online) The supercurrent induced in material (L) as a function of the phase difference between S1 and S2, \(\phi\).
Figure 6: (Color online) The induced magnetization in (R), right next to the SOC interface at \(y=0\), in the presence of a supercurrent flow in region (L) due to a phase difference \(\phi\).
\[\Delta(T)=\Delta(0)\tanh\left(1.74\sqrt{\frac{T_{c}}{T}-1}\right), \tag{55}\]
where \(T_{c}\) is the critical temperature. We also use the BCS relation between the zero temperature gap and the critical temperature, \(\frac{\Delta_{0}}{T_{c}}=1.76\).
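For reference, the gap interpolation of Eq. (55) together with the BCS relation \(\Delta_{0}/T_{c}=1.76\) is straightforward to evaluate; the following minimal Python helper is only an illustration of that formula and is not part of the solver described above.

```python
import numpy as np

def bcs_gap(T, Tc, delta0=None):
    # Eq. (55): Delta(T) = Delta(0) * tanh(1.74 * sqrt(Tc/T - 1)) for 0 < T < Tc,
    # and zero above Tc; by default Delta(0) = 1.76 * Tc (units with k_B = 1)
    if delta0 is None:
        delta0 = 1.76 * Tc
    T = np.atleast_1d(np.asarray(T, dtype=float))
    gap = np.zeros_like(T)
    below = (T > 0) & (T < Tc)
    gap[below] = delta0 * np.tanh(1.74 * np.sqrt(Tc / T[below] - 1.0))
    return gap

print(bcs_gap([0.1, 0.5, 0.9, 1.0], Tc=1.0))  # the gap closes as T -> Tc
```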
Fig. 8 shows the magnetization evaluated at \(y=l\) as a function of temperature. Although the integrand in general oscillates as a function of energy \(E\), similarly to the spectral supercurrent, the total magnetization shows a monotonic decay with temperature.
### Triplet charge current
In the following, the case with a pure triplet charge current is explored and compared to the singlet charge current case. To reduce the number of parameters to vary, this discussion only considers the zero temperature case. We nevertheless expect a monotonic decay of the induced magnetization as the temperature approaches \(T_{c}\).
To create a pure triplet charge current, as discussed in section VI, the interface magnetizations have to be parallel. Moreover, we also have to set either the polarization or the spin-mixing angles to zero. Otherwise, a spin supercurrent will also flow through the junction [42]. Both of these situations are explored here.
We first consider the \(P=0\) case, with a finite spin-mixing angle \(G_{\phi}\neq 0\). This means only considering spin-independent transmission amplitudes from the proximate superconductor but with spin-dependent reflection terms.
We consider a situation where the \(f_{s}\) component is negligible compared to the triplets in (L). This is achieved via the exchange field profile discussed in section V. Fig. 9 shows that both \(f_{s}\) and \(d_{z}\) die out over a short range into the material (L) when \(\mathbf{m}_{1}=\mathbf{m}_{2}=(m,0,0)\) and \(\mathbf{h}(z)=(0,0,h(z))\). The singlet and the \(d_{z}\) triplet oscillate rapidly and decay quickly [37; 31]. In the middle region, \(d_{z}\) and \(f_{s}\) are many orders of magnitude smaller than \(d_{x}\), and we conclude that the results from this section are pure triplet effects.
The triplet charge current can be divided into three components, \(J_{d_{x}},J_{d_{y}}\) and \(J_{d_{z}}\), which from Sec. VI are expected to give different results. We explore all these currents here. A \(J_{d_{x}}\) current is created by using \(\mathbf{m}_{1}=\mathbf{m}_{2}=(m,0,0)\) and \(\mathbf{h}=(0,0,h(z))\). In the same manner, a \(J_{d_{y}}\) current is created by using \(\mathbf{m}_{1}=\mathbf{m}_{2}=(0,m,0)\) and \(\mathbf{h}=(0,0,h(z))\), and a \(J_{d_{z}}\) current is created with \(\mathbf{m}_{1}=\mathbf{m}_{2}=(0,0,m)\) and \(\mathbf{h}=(h(z),0,0)\). The absolute value of the induced magnetization in (R) is plotted for the three different triplet cases as a function of \(\phi\) in Fig. 10. Consistent with the analysis in Sec. VI only the \(d_{x}\) carried current induces a magnetization in (R), which confirms the analytical results. The induced magnetization can thereby be utilized to distinguish \(J_{d_{x}}\) from the other triplet currents. In other words, the induced spin-signal is strongly dependent on the polarization of the triplet Cooper pair supercurrent.
Figure 8: (Color online) The magnetization evaluated at \(y=l\) as a function of temperature. We used \(\phi=\pi/4\).
Figure 7: (Color online) The supercurrent-induced magnetization in (R) as a function of phase difference \(\phi\) and position \(y\) in the zero temperature case.
Figure 9: (Color online) The absolute value of the triplet components evaluated for \(E/\Delta_{0}=0.14\). The figure shows that \(f_{s}\) and \(d_{z}\) are many orders of magnitude smaller than the long-ranged \(d_{x}\) component.
A triplet current does however not have to be carried by a pure \(d_{x},d_{y}\) or \(d_{z}\), but could just as well be carried by any combination of \(\mathbf{d}\)-triplet components. Therefore, rotation of the interface magnetization in \(xy\)-plane is investigated. First, we consider the behavior of the charge current in material (L), shown in Fig. 11 as a function of the phase difference \(\phi\). The current shows a standard current-phase relation and is independent on the value of \(\alpha\). As seen in Sec. VI the \(G_{\phi}\)-term creates a \(\mathbf{d}\)-vector proportional to the interface magnetization. At \(\alpha=0\) the interface magnetizations are \(\mathbf{m}_{1}=\mathbf{m}_{2}=(m,0,0)\), so it is natural that the charge current then will be carried only by the \(d_{x}\) triplet. As \(\alpha\) changes, the current will gradually be carried more and more by the \(d_{y}\) component, until it at \(\alpha=\pi/2\) is carried only by the \(d_{y}\) components. Thus, as \(\alpha\) is varied, the total current stays unchanged, however, \(J_{d_{x}}\) does change, as shown in Fig. 12.
The induced magnetization in material (R) in the triplet current case is plotted in Figs. 13 and 14. As discussed in Sec. VI, there exists a clear relation between \(J_{d_{x}}\) and \(m_{x,R}\). Fig. 13 confirms this relation as the induced \(m_{x}\) has the same form as \(J_{d_{x}}\) from Fig. 12.
The magnetization plots also show differences from the singlet current case. First of all, the induced \(m_{x}\) varies with \(\alpha\). In the singlet case there is no interface magnetization, however as varying \(\alpha\) in the triplet case is the same as rotating the material (R) around material (L), it still makes sense to compare the \(\alpha\) dependency of the singlet and the triplet case. What this means is that in the singlet case one would measure the same induced magnetization [in axes relative to (R)], no matter in what direction (R) was connected to (L). However, in the triplet case, the angle, in which (R) is connected to (L), makes a difference.
Besides the \(\alpha\) dependency in \(m_{x}\), an additional new signature arises in the triplet case, namely a magnetization component in the \(y\)-direction seen in Fig. 14. In the singlet current case, only \(m_{x}\) was induced, and thus the induced \(m_{y}\) contributes to making the triplet and singlet currents distinguishable. From the figure it is seen that \(m_{y}\) has a different \(\alpha\) dependency than the \(m_{x}\). This can be explained by realizing that we expect \(d_{x,L}\) and thus \(\partial_{z}d_{x,L}\) to have a \(\cos(\alpha)\) dependency. This would according to the linearized spin-orbit coupled boundary conditions give \(d_{x,R}\) and \(f_{s,R}\) both a \(\cos(\alpha)\) dependency. Thus, the induced magnetization \(m_{x}\) in (R), which is a product of \(d_{x,R}\) and \(f_{s,R}\), naturally acquires a \(\cos^{2}(\alpha)\) dependency as in the figure. Furthermore, \(d_{y,L}\) and thus also \(d_{y,R}\) we should expect to have a \(\sin(\alpha)\) dependency. The magnetization in \(y\)-direction, a product of \(f_{s,R}\) and \(d_{y,R}\), should therefore, consistent with the figure, have a \(\sin(\alpha)\cos(\alpha)\) dependency.
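The angular fingerprints discussed above can be stated compactly. The following schematic Python snippet (amplitudes normalized to unity, purely illustrative) encodes the expected \(\cos^{2}(\alpha)\) and \(\sin(\alpha)\cos(\alpha)\) dependencies that one could fit to the numerical \(m_{x}(\alpha)\) and \(m_{y}(\alpha)\).

```python
import numpy as np

alpha = np.linspace(0.0, np.pi, 181)
m_x_model = np.cos(alpha) ** 2              # d_{x,R} ~ cos(alpha) and f_{s,R} ~ cos(alpha)
m_y_model = np.sin(alpha) * np.cos(alpha)   # d_{y,R} ~ sin(alpha) and f_{s,R} ~ cos(alpha)

# m_x is maximal at alpha = 0 and pi and vanishes at pi/2; |m_y| peaks near pi/4 and 3pi/4
print(alpha[np.argmax(np.abs(m_y_model))] / np.pi)  # ~ 0.25
```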
If we compare the singlet current case to the triplet current at \(\alpha=0\) the induced magnetization in (R) looks the same. However, by either growing the normal metal on the (L) region at different crystallographic orientations, or alternatively growing
Figure 11: (Color online) Charge current as a function of \(\phi\) in material (L). The green line in (a) shows the total current and the orange shows the singlet current. It is clear that the singlet current is negligible compared to the triplet current. The magnitude of the triplet charge current is invariant when changing \(\alpha\).
Figure 12: (Color online) The supercurrent carried by \(d_{x}\) triplet Cooper pairs, which induce a magnetization in (R) through the spin-orbit coupled interface, as both the phase difference \(\phi\) and interface angles in the \(xy\)-plane, \(\alpha\) is varied.
Figure 10: (Color online) The absolute value of the magnetization in material (R) induced by \(d_{x}\), \(d_{y}\), and \(d_{z}\) carried charge current. It is seen that it is only the \(d_{x}\) carried charge current that induces a magnetization. As seen, for certain triplet pair polarizations, a supercurrent (\(J_{d_{y}}\) and \(J_{d_{z}}\)) does not induce magnetization in any direction.
two normal metals to (L), both through a spin-orbit coupling interface, but at different angles, the induced magnetization would change strongly in the triplet case.
Similarly to the singlet case, we have checked that the disappearance of the magnetization at \(\phi=0\) and at \(\phi=\pi\) can be explained by the symmetry properties under the \(\hat{\dots}\) operation of the components. Thus, we do not expect that a spatial gradient in the triplet correlations is sufficient to induce a magnetization: only when a supercurrent is flowing, thus providing the triplet correlations with the correct \(\hat{\dots}\)-symmetry, is a magnetization induced.
To check the robustness of the results another parameter set was also investigated. The interface magnetizations are kept parallel, however, the polarization is turned on, adding the magnetoresistive and the \(G_{1}\) term. Instead the spin-mixing is turned off, \(G_{\phi}=0\). From section VI we see that the \(G_{\text{MR}}\) term creates a triplet \(\mathbf{d}\) orthogonal to the interface magnetization and to \(\mathbf{h}\). In this case, a \(\pi/2\) shift in \(\alpha\) arises compared to the \(P=0\) situation because of the induced triplet being orthogonal to the interface magnetization instead of parallel. Other than that, the form of the magnetization is similar, and the conclusions from the \(P=0\) case hold.
## VIII Conclusion
In summary, we predict an experimental signature of current-carrying triplet Cooper pairs in the form of an induced spin-signal. We show that a supercurrent carried only by triplet Cooper pairs induces a non-local magnetization that is controlled by the polarization direction of the triplet Cooper pairs. The dependence of the non-local magnetization on the polarization direction of the triplet pairs can be experimentally tested _in situ_. Specifically, the component of the \(\mathbf{d}\)-vector carrying the supercurrent is determined by the magnetizations in Fig. 15. This provides a measurement protocol to directly use the spin-polarization of the triplet Cooper pairs in supercurrents to transfer spin information in a dissipationless manner.
###### Acknowledgements.
This work was supported by the Research Council of Norway through Grant No. 323766 and its Centres of Excellence funding scheme Grant No. 262633 "QuSpin." Support from Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway, project NN9577K, is acknowledged.
## Appendix A Riccati parametrization of the SOC boundary conditions
We now derive the Riccati parametrization of the spin-orbit coupling boundary conditions given in Eq. (3). These boundary conditions take into account charge-spin conversion at the interface [29]. The starting point is to perform the same operation on the right-hand side as we did to the left-hand side in Eq. (28). The first term is simply the Kuprianov-Lukichev boundary term [36]. The rest of the terms we go through one by one. For concreteness, the derivation below is done for a boundary condition which has \(g_{R}\partial_{y}g_{R}\) on its left-hand side.
The first term is \(-\frac{2}{3}T_{1}^{2}p_{F}^{2}[\hat{g}_{R},\hat{\tau}_{\parallel}\hat{g}_{L}\hat{\tau}_{\parallel}]\), for which we define the matrix
\[\hat{U}^{(1)}=\hat{\tau}_{\parallel}\hat{g}_{L}\hat{\tau}_{\parallel}=\begin{pmatrix}\tau_{x}g_{L}\tau_{x}&-\tau_{x}f_{L}\tau_{x}^{*}\\ \tau_{x}^{*}\tilde{f}_{L}\tau_{x}&-\tau_{x}^{*}\tilde{g}_{L}\tau_{x}^{*}\end{pmatrix}+\begin{pmatrix}\tau_{z}g_{L}\tau_{z}&-\tau_{z}f_{L}\tau_{z}^{*}\\ \tau_{z}^{*}\tilde{f}_{L}\tau_{z}&-\tau_{z}^{*}\tilde{g}_{L}\tau_{z}^{*}\end{pmatrix}. \tag{10}\]
Figure 14: (Color online) Induced magnetization \(m_{y}\) in (R) in the triplet supercurrent case where \(G_{\phi}\neq 0,P=0\).
Figure 13: (Color online) Induced magnetization \(m_{x}\) in (R) in the triplet supercurrent case where \(G_{\phi}\neq 0,P=0\).
Thus the contribution from this term will according to Eq. 33 be
\[\begin{split}&\frac{1}{2}N_{R}^{-1}\big{(}[\hat{g}_{R},\hat{\tau}_{\parallel}\hat{g}_{L}\hat{\tau}_{\parallel}]^{1,2}-[\hat{g}_{R},\hat{\tau}_{\parallel}\hat{g}_{L}\hat{\tau}_{\parallel}]^{1,1}\gamma_{R}\big{)}\\ =&\big{[}-(\tau_{x}g_{L}\tau_{x}+\tau_{z}g_{L}\tau_{z})\gamma_{R}-\tau_{x}f_{L}\tau_{x}^{*}-\tau_{z}f_{L}\tau_{z}^{*}-\gamma_{R}(\tau_{x}^{*}\tilde{f}_{L}\tau_{x}+\tau_{z}^{*}\tilde{f}_{L}\tau_{z})\gamma_{R}\\ &+\gamma_{R}(-\tau_{x}^{*}\tilde{g}_{L}\tau_{x}^{*}-\tau_{z}^{*}\tilde{g}_{L}\tau_{z}^{*})\big{]},\end{split} \tag{10}\]
and inserting \(f=2N\gamma\) and \(g=(2N-1)\), we get
\[\begin{split}&-\tau_{x}N_{L}\tau_{x}\gamma_{R}+\gamma_{R}-\tau_{x}N _{L}\gamma_{L}\tau_{x}^{*}-\gamma_{R}\tau_{x}^{*}\tilde{N}_{L}\tilde{\gamma}_ {L}\tau_{x}\gamma_{R}-\gamma_{R}\tau_{x}^{*}\tilde{N}_{L}\tau_{x}^{*}\\ &-\tau_{z}N_{L}\tau_{z}\gamma_{R}+\gamma_{R}-\tau_{z}N_{L}\gamma_ {L}\tau_{z}^{*}-\gamma_{R}\tau_{z}^{*}\tilde{N}_{L}\tilde{\gamma}_{L}\tau_{z} \gamma_{R}-\gamma_{R}\tau_{z}^{*}\tilde{N}_{L}\tau_{z}^{*}.\end{split} \tag{11}\]
From the second term we define
\[\begin{split}\hat{U}^{(2)}=&\{\hat{\tau}_{\parallel,x},\hat{g}_{L}\partial_{y}\hat{g}_{L}\}\\ =&\begin{pmatrix}\tau_{x}[g_{L}g_{L}^{\prime}-f_{L} \tilde{f}_{L}^{\prime}]&\tau_{x}[g_{L}f_{L}^{\prime}-f_{L}\tilde{g}_{L}^{ \prime}]\\ -\tau_{x}^{*}[\tilde{f}_{L}\tilde{g}_{L}^{\prime}+\tilde{g}_{L}\tilde{f}_{L} ^{\prime}]&-\tau_{x}[-f_{L}f_{L}^{\prime}+\tilde{g}_{L}\tilde{g}_{L}^{ \prime}]\end{pmatrix}\\ &+&\begin{pmatrix}[g_{L}g_{L}^{\prime}-f_{L}\tilde{f}_{L}^{\prime}] \tau_{x}&-[-\tilde{g}_{L}f_{L}^{\prime}-f_{L}\tilde{g}_{L}^{\prime}]\tau_{x}^ {*}\\ \tilde{f}_{L}g_{L}^{\prime}+\tilde{g}_{L}\tilde{f}_{L}^{\prime}]\tau_{x}&-[- \tilde{f}_{L}f_{L}^{\prime}+\tilde{g}_{L}\tilde{g}_{L}^{\prime}]\tau_{x}^{*} \end{pmatrix},\end{split} \tag{12}\]
Figure 15: (Color online) Two possible experimental realizations of the proposed system. In (a), thin magnetic insulators with a magnetization in the \(xy\)-plane couple BCS superconductors to a strong ferromagnet which is polarized in the \(z\)-direction. The strong ferromagnet filters out all superconducting correlations except equal-spin Cooper pairs along the magnetization direction. Adjusting the magnetization of the magnetic interfaces allows one to vary the spin-polarization of the triplet Cooper pairs carrying a supercurrent in the system. A simpler setup is shown in (b) where one uses intrinsic triplet superconductors. By growing these materials along different crystallographic axes relative the normal metal in the middle, the polarization of the triplet pairs, quantified by the \(\mathbf{d}\)-vector is varied. The drawback of setup (b) is that intrinsic \(p\)-wave triplet superconductivity is rare and only well-established in uranium-based compounds.
which gives us the contribution to the right-hand side
\[\begin{split}&\big{[}-(\tau_{x}[g_{L}g_{L}^{\prime}-f_{L}\tilde{f}_{ L}^{\prime}]+[g_{L}g_{L}^{\prime}-f_{L}\tilde{f}_{L}^{\prime}]\tau_{x})\gamma_{R} \\ &+\tau_{x}[g_{L}f_{L}^{\prime}-f_{L}\tilde{g}_{L}^{\prime}]-[g_{L} f_{L}^{\prime}-f_{L}\tilde{g}_{L}^{\prime}]\tau_{x}^{*}\\ &-\gamma_{R}(-\tau_{x}^{*}[\tilde{f}_{L}g_{L}^{\prime}+\tilde{g}_ {L}\tilde{f}_{L}^{\prime}]+[\tilde{f}_{L}g_{L}^{\prime}+\tilde{g}_{L}\tilde{f}_ {L}^{\prime}]\tau_{x})\gamma_{R}\\ &\gamma_{R}(-\tau_{x}[-\tilde{f}_{L}f_{L}^{\prime}+\tilde{g}_{L} \tilde{g}_{L}^{\prime}]-[-\tilde{f}_{L}f_{L}^{\prime}+\tilde{g}_{L}\tilde{g}_{ L}^{\prime}]\tau_{x}^{*})\big{]}.\end{split} \tag{10}\]
Using Eqs. (21) and (22) we see that we can write the following
\[\begin{split}&[g_{L}g_{L}^{\prime}-f_{L}\tilde{f}_{L}^{\prime}] =N_{L}[\gamma_{L}^{\prime}\tilde{\gamma}_{L}-\gamma_{L}\tilde{\gamma}_{L}^{ \prime}]N_{L},\\ &[g_{L}f_{L}^{\prime}-f_{L}\tilde{g}_{L}^{\prime}]=N_{L}[\gamma_ {L}^{\prime}-\gamma_{L}\tilde{\gamma}_{L}^{\prime}\gamma_{L}]\tilde{N}_{L}. \end{split} \tag{11}\]
Thus the contribution to the right-hand side of the Riccati parametrized boundary conditions can be written as
\[\begin{split}&\tau_{x}2N_{L}[\gamma_{L}^{\prime}-\gamma_{L} \tilde{\gamma}_{L}^{\prime}\gamma_{L}]\tilde{N}_{L}-2N_{L}[\gamma_{L}^{\prime }-\gamma_{L}\tilde{\gamma}_{L}^{\prime}\gamma_{L}]\tilde{N}_{L}\tau_{x}^{*}\\ &-\gamma_{R}\tau_{x}^{*}2\tilde{N}_{L}[\tilde{\gamma}_{L}^{\prime }\gamma_{L}-\tilde{\gamma}_{L}\gamma_{L}^{\prime}]\tilde{N}_{L}-\gamma_{R}2 \tilde{N}_{L}[\tilde{\gamma}_{L}^{\prime}\gamma_{L}-\tilde{\gamma}_{L}\gamma_ {L}^{\prime}]\tilde{N}_{L}\tau_{x}^{*}\\ &-\tau_{x}2N_{L}[\gamma_{L}^{\prime}\tilde{\gamma}_{L}-\gamma_{L} \tilde{\gamma}_{L}]N_{L}\gamma_{R}-2N_{L}[\gamma_{L}^{\prime}\tilde{\gamma}_{L }-\gamma_{L}\tilde{\gamma}_{L}]N_{L}\tau_{x}\gamma_{R}\\ &+\gamma_{R}\tau_{x}^{*}2\tilde{N}_{L}[\tilde{\gamma}_{L}^{\prime }-\tilde{\gamma}_{L}\gamma_{L}^{\prime}\tilde{\gamma}_{L}]N_{L}\gamma_{R}- \gamma_{R}2\tilde{N}_{L}[\tilde{\gamma}_{L}^{\prime}-\tilde{\gamma}_{L}\gamma _{L}^{\prime}\tilde{\gamma}_{L}]N_{L}\tau_{x}\gamma_{R}.\end{split} \tag{12}\]
The fourth term and fifth term in Eq. (3) can not be written as \([g_{R},\hat{U}]\), so we have to treat them differently. The two terms do, however, have the same form as each other. Therefore we only have to do the calculation once by performing the parametrization procedure on
\[[\hat{\rho}_{i},\hat{g}_{R}\hat{\rho}_{i}\hat{g}_{R}]. \tag{13}\]
We write out
\[g_{R}\hat{\rho}_{i}g_{R}=\begin{pmatrix}g_{R}\tau_{i}g_{R}+f_{R}\tau_{i}^{*} \tilde{f}_{R}&g_{R}\tau_{i}f_{R}+f_{R}\tau_{i}^{*}\tilde{g}_{R}\\ -\tilde{f}_{R}\tau_{i}g_{R}-\tilde{g}_{R}\tau_{i}^{*}\tilde{f}_{R}&-\tilde{f}_{R }\tau_{i}f_{R}-\tilde{g}_{R}\tau_{i}^{*}\tilde{g}_{R}\end{pmatrix}. \tag{14}\]
The upper left component of the whole expression then reads
\[[\hat{\rho}_{i},\hat{g}_{R}\hat{\rho}_{i}\hat{g}_{R}]^{(1,1)}=\tau_{i}g_{R} \tau_{i}g_{R}+\tau_{i}f_{R}\tau_{i}^{*}\tilde{f}_{R}-g_{R}\tau_{i}g_{R}\tau_{i} -f_{R}\tau_{i}^{*}\tilde{f}_{R}\tau_{i}, \tag{15}\]
and the upper right part reads
\[[\hat{\rho}_{i},\hat{g}_{R}\hat{\rho}_{i}\hat{g}_{R}]^{(1,2)}=\tau_{i}g_{R} \tau_{i}f_{R}+\tau_{i}f_{R}\tau_{i}^{*}\tilde{g}_{R}+g_{R}\tau_{i}f_{R}\tau_{i} ^{*}+f_{R}\tau_{i}^{*}\tilde{g}_{R}\tau_{i}^{*}. \tag{16}\]
We write \(g\) and \(f\) in terms of the Riccati parametrized expressions and get that the contribution from these terms to the right-hand side, \(\frac{1}{2}N_{R}^{-1}\big((1,2)-(1,1)\gamma_{R}\big)\), is
\[\begin{split}&\frac{1}{2}N_{R}^{-1}\big{[}\tau_{i}(2N_{R}-1)\tau_{ i}2N_{R}\gamma_{R}+\tau_{i}2N_{R}\gamma_{R}\tau_{i}^{*}(2\tilde{N}_{R}-1)+(2N_{R}-1) \tau_{i}2N_{R}\gamma_{R}\tau_{i}^{*}\\ &+2N_{R}\gamma_{R}\tau_{i}^{*}(2\tilde{N}_{R}-1)\tau_{i}^{*}-[ \tau_{i}(2N_{R}-1)\tau_{i}(2N_{R}-1)+\tau_{i}2N_{R}\gamma_{R}\tau_{i}^{*}2 \tilde{N}_{R}\tilde{\gamma}_{R}\\ &-(2N_{R}-1)\tau_{i}(2N_{R}-1)\tau_{i}-2N_{R}\gamma_{R}\tau_{i}^{*}2 N_{R}\tilde{\gamma}_{R}]\gamma_{R}\big{]}\\ &=-2\gamma_{R}+2\tau_{i}N_{R}\gamma_{R}\tau_{i}^{*}+2\gamma_{R} \tau_{i}^{*}\tilde{N}_{R}\tau_{i}^{*}+2\tau_{i}N_{R}\tau_{i}\gamma_{R}+2\gamma_{R} \tau_{i}^{*}\tilde{N}_{R}\tilde{\gamma}_{R}\tau_{i}\gamma_{R}.\end{split} \tag{17}\]
Putting all of the terms together we get
\[\begin{split}\partial_{y}\gamma_{R}&=2\frac{\tau_{0}^{2}}{ D}(1-\gamma_{R}\tilde{\gamma}_{L})N_{L}(\gamma_{R}-\gamma_{L})\\ &\qquad-2\frac{3}{2}\frac{T_{i}^{2}p_{F}^{2}}{D}\big{(}-\tau_{x}N_ {L}\tau_{x}\gamma_{R}+\gamma_{R}-\tau_{x}N_{L}\gamma_{L}\tau_{x}^{*}-\gamma_{R} \tau_{x}^{*}\tilde{N}_{L}\tilde{\gamma}_{L}\tau_{x}\gamma_{R}-\gamma_{R}\tau_{x} ^{*}\tilde{N}_{L}\tau_{x}^{*}\\ &\qquad-\tau_{z}N_{L}\tau_{z}\gamma_{R}+\gamma_{R}-\tau_{z}N_{L} \gamma_{L}\tau_{z}^{*}-\gamma_{R}\tau_{z}^{*}\tilde{N}_{L}\tilde{\gamma}_{L} \tau_{z}\gamma_{R}-\gamma_{R}\tau_{x}^{*}\tilde{N}_{L}\tau_{z}^{*}\\ &\qquad-mT_{1}T_{0}(+\tau_{x}2N_{L}[\gamma_{L}^{\prime}-\gamma_{L} \gamma_{L}^{\prime}\gamma_{L}^{\prime}]\tilde{N}_{L}-2N_{L}[\gamma_{L}^{ \prime}-\gamma_{L}\gamma_{L}^{\prime}\gamma_{L}]\tilde{N}_{L}\tau_{x}^{*}\\ &\qquad-\gamma_{R}\tau_{x}^{*}2\tilde{N}_{L}[\tilde{\gamma}_{L}^{ \prime}\gamma_{L}-\tilde{\gamma}_{L}\gamma_{L}^{\prime}]\tilde{N}_{L}-\gamma_{R} 2\tilde{N}_{L}[\tilde{\gamma}_{L}^{\prime}\gamma_{L}-\tilde{\gamma}_{L}\gamma_ {L}^{\prime}]\tilde{N}_{L}\tau_{x}^{*}\\ &\qquad-\tau_{x}2N_{L}[\gamma_{L}^{\prime}\tilde{\gamma}_{L}-\gamma_{L} \gamma_{L}]N_{L}\gamma_{R}-2N
## Appendix B Riccati parametrization of the spin active boundary conditions
We now find the Riccati parameterized boundary conditions for the spin-active interfaces given in Equation (2). All the terms in this boundary condition are of the same form as in Eq. (26). Just like for the spin-orbit coupling boundary conditions, the first term is simply the Kuprianov-Lukichev term; the rest we go through one term at a time, starting with the \(G_{1}\) term. Here we define
\[U_{1}=\hat{m}\hat{g}_{R}\hat{m}=\begin{pmatrix}\mathbf{m}\cdot\mathbf{\tau}g_{R}\mathbf{m}\cdot\mathbf{\tau}&\mathbf{m}\cdot\mathbf{\tau}f_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}\\ -\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{f}_{R}\mathbf{m}\cdot\mathbf{\tau}&-\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{g}_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}\end{pmatrix}, \tag{28}\]
which gives the contribution to the right-hand side
\[-\mathbf{m}\cdot\mathbf{\tau}g_{R}\mathbf{m}\cdot\mathbf{\tau}\gamma_{L}+\mathbf{m}\cdot\mathbf{\tau}f_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}+\gamma_{L}\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{f}_{R}\mathbf{m}\cdot\mathbf{\tau}\gamma_{L}-\gamma_{L}\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{g}_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}. \tag{29}\]
For the second term we define the \(U\)-matrix
\[U_{\text{MR}}=\{\tilde{g}_{R},\hat{m}\}=\begin{pmatrix}g_{R}\mathbf{m}\cdot\mathbf{ \tau}+\mathbf{m}\cdot\mathbf{\tau}g_{R}&f_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}+\mathbf{m}\cdot\mathbf{ \tau}f_{R}\\ -\tilde{f}_{R}\mathbf{m}\cdot\mathbf{\tau}-\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{f}_{R}&- \tilde{g}_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}-\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{g}_{R} \end{pmatrix}, \tag{30}\]
which gives the contribution to the right-hand side
\[-(g_{R}\mathbf{m}\cdot\mathbf{\tau}+\mathbf{m}\cdot\mathbf{\tau}g_{R})\gamma_{L} \tag{31}\] \[+f_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}+\mathbf{m}\cdot\mathbf{\tau}f_{R}\] \[-\gamma_{L}(-\tilde{f}_{R}\mathbf{m}\cdot\mathbf{\tau}-\mathbf{m}\cdot\mathbf{ \tau}^{*}\tilde{f}_{R})\gamma_{L}\] \[+\gamma_{L}(-\tilde{g}_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}-\mathbf{m}\cdot\bm {\tau}^{*}\tilde{g}_{R}).\]
For the third term the \(U\)-matrix is simply \(U_{\phi}=\hat{m}\), which gives the contribution
\[-\mathbf{m}\cdot\mathbf{\tau}\gamma_{L}+\gamma_{L}\mathbf{m}\cdot\mathbf{\tau}^{*}. \tag{32}\]
The total Riccati parameterized spin-active boundary conditions thus read
\[\begin{split}\partial_{z}\gamma_{L}=& G_{0}(1-\gamma_{L}\tilde{\gamma}_{R})N_{R}(\gamma_{R}-\gamma_{L})\\ &+G_{1}(\mathbf{m}\cdot\mathbf{\tau}N_{R}\gamma_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}-\gamma_{L}\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{N}_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}+m^{2}\gamma_{L}\\ &-\mathbf{m}\cdot\mathbf{\tau}N_{R}\mathbf{m}\cdot\mathbf{\tau}\gamma_{L}+\gamma_{L}\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{N}_{R}\tilde{\gamma}_{R}\mathbf{m}\cdot\mathbf{\tau}\gamma_{L})\\ &+G_{\text{MR}}(N_{R}\gamma_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}+\mathbf{m}\cdot\mathbf{\tau}N_{R}\gamma_{R}-\gamma_{L}[\tilde{N}_{R}\mathbf{m}\cdot\mathbf{\tau}^{*}+\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{N}_{R}-\mathbf{m}\cdot\mathbf{\tau}^{*}]\\ &-[N_{R}\mathbf{m}\cdot\mathbf{\tau}-\mathbf{m}\cdot\mathbf{\tau}+\mathbf{m}\cdot\mathbf{\tau}N_{R}]\gamma_{L}+\gamma_{L}[\tilde{N}_{R}\tilde{\gamma}_{R}\mathbf{m}\cdot\mathbf{\tau}+\mathbf{m}\cdot\mathbf{\tau}^{*}\tilde{N}_{R}\tilde{\gamma}_{R}]\gamma_{L})\\ &-iG_{\phi}\big{(}-\mathbf{m}\cdot\mathbf{\tau}\gamma_{L}+\gamma_{L}\mathbf{m}\cdot\mathbf{\tau}^{*}\big{)}.\end{split} \tag{33}\]
Appendix C Analysis of the symmetry of the anomalous Green function under the \(\hat{\dots}\)-operation
We here analyze whether the disappearance of the magnetization at \(J=0\) is caused by the spatial gradient of the \(f_{s,L}\) and \(d_{x,L}\) components vanishing at \(\phi=0,\pi\), or by the symmetry properties under the \(\hat{\dots}\) operation of the triplet and singlet components. To investigate this, consider Figs. 16 and 17. The first situation explored is when material (R) is connected to the middle of material (L), \(z_{0}=l/2\). It is seen from the figure that in this case the singlet component, \(f_{s,L}\), is zero at \(\phi=\pi\) while the derivative, \(\partial_{z}f_{s,L}\), is zero at \(\phi=0\). The figure shows that this also causes \(d_{x,R}\) to vanish at \(\phi=0\) and \(f_{s,R}\) at \(\phi=\pi\). Thus, a finite derivative of \(f_{s}\) is not sufficient to induce a magnetization in (R).
To check if it is the symmetry of the anomalous Green function under the \(\hat{\dots}\)-operation that dictates when a finite magnetization occurs, we considered a different situation where \(z_{0}=l/4\) so that the material (R) is no longer connected to the middle of (L). The result is shown in Figs. 18 and 19. It is here seen that neither \(f_{s,L}\) nor \(\partial_{z}f_{s,L}\) vanishes at any \(\phi\). Therefore, \(d_{x,R}\) and \(f_{s,R}\) are also finite at every \(\phi\). It was checked that the magnetization looks exactly like Fig. 6, also for \(z_{0}=l/4\). Thus we can conclude that it is the \(\hat{\dots}\)-operation symmetry, which is influenced by whether or not a supercurrent flows, that causes the magnetization to vanish at \(\phi=0,\pi\).
2303.05743 | Compact Binary Formation in Open Star Clusters I: High Formation Efficiency of Gaia BHs and Their Multiplicities | Gaia BHs, black hole (BH) binaries discovered from database of an astrometric telescope Gaia, pose a question to the standard binary evolution model. We have assessed if Gaia BHs can be formed through dynamical capture in open clusters rather than through isolated binary evolution. We have performed gravitational $N$-body simulations of $100$ open clusters with $10^5 M_\odot$ in total for each metallicity $Z=0.02$, $0.01$, and $0.005$. We have discovered one Gaia BH-like binary escaping from an open cluster, and found that the formation efficiency of Gaia BHs in open clusters ($\sim 10^{-5} M_\odot^{-1}$) is larger than in isolated binaries ($\sim 10^{-8} M_\odot^{-1}$) by 3 orders of magnitude. The Gaia BH-like binary is the inner binary of a triple star system. Gaia BHs can have tertiary stars frequently, if they are formed in open clusters. Combining additional $N$-body simulations with 8000 open clusters with $8 \times 10^6 M_\odot$, we have estimated the number of Gaia BHs in the Milky Way disk to $10^4 - 10^5$ (depending on the definitions of Gaia BHs), large enough for the number of Gaia BHs discovered so far. Our results indicate that the discoveries of Gaia BHs do not request the reconstruction of the standard binary evolution model, and that Gaia BHs are a probe for the dynamics of open clusters already evaporated. | Ataru Tanikawa, Savannah Cary, Minori Shikauchi, Long Wang, Michiko S. Fujii | 2023-03-10T07:04:17Z | http://arxiv.org/abs/2303.05743v5 |

# Compact Binary Formation in Open Star Clusters I: High Formation Efficiency of Gaia BHs and Their Multiplicities
###### Abstract
Gaia BHs, black hole (BH) binaries discovered from database of an astrometric telescope _Gaia_, pose a question to the standard binary evolution model. We have assessed if Gaia BHs can be formed through dynamical capture in open clusters rather than through isolated binary evolution. We have performed gravitational \(N\)-body simulations of 100 open clusters with \(10^{5}M_{\odot}\) in total for each metallicity \(Z=0.02\), 0.01, and 0.005. We have discovered one Gaia BH-like binary escaping from an open cluster, and found that the formation efficiency of Gaia BHs in open clusters (\(\sim 10^{-5}M_{\odot}^{-1}\)) is larger than in isolated binaries (\(\sim 10^{-8}M_{\odot}^{-1}\)) by 3 orders of magnitude. The Gaia BH-like binary is the inner binary of a triple star system. Gaia BHs can have tertiary stars frequently, if they are formed in open clusters. We have estimated the number of Gaia BHs in the Milky Way disk to \(1.6\times 10^{4}\), large enough for the number of Gaia BHs discovered so far. Our results indicate that the discoveries of Gaia BHs do not request the reconstruction of the standard binary evolution model, and that Gaia BHs are a probe for the dynamics of open clusters already evaporated.
keywords: stars: black holes - galaxies: star clusters: general - Astrometry and celestial mechanics - (stars:) binaries (including multiple): close
## 1 Introduction
Black holes (BHs) are the final state of massive stars. Despite their darkness, they have been discovered in binary systems, such as X-ray binaries (Casares et al., 2017), spectroscopic binaries (Shenar et al., 2022), astrometric binaries (El-Badry et al., 2023, 2023; Tanikawa et al., 2022), and gravitational wave sources (The LIGO Scientific Collaboration et al., 2021). The discoveries of astrometric BH binaries, also known as Gaia BH1 and BH2 (El-Badry et al., 2023, 2023, respectively), in Gaia DR3 (Gaia Collaboration et al., 2022) challenge the standard binary evolution model. Gaia BH1 and BH2 (hereafter, Gaia BHs) have \(\sim 1M_{\odot}\) visible stars as companions, orbital periods of \(\sim 10^{2}\)-\(10^{3}\) days, and moderately high eccentricities (\(\sim 0.5\)). Isolated binaries can form such configurations only if common envelope ejection is \(\sim 10\) times more efficient than previously expected (El-Badry et al., 2023).
Another possible formation channel of Gaia BHs is that a BH dynamically captures a \(\sim 1M_{\odot}\) visible star in open clusters. This is similar to double BH formation in open clusters (Ziosi et al., 2014; Kumamoto et al., 2019, 2020; Di Carlo et al., 2019; Banerjee, 2021; Rastello et al., 2021). As for double BHs, such dynamical capture can happen in globular clusters (Portegies Zwart & McMillan, 2000; Tanikawa, 2013; Rodriguez et al., 2016; Askar et al., 2017; Kremer et al., 2022), first star clusters (Wang et al., 2022), and galactic centers (Tagawa et al., 2020). However, Gaia BHs are Milky Way disk components, and have near solar metallicities. Thus, they are likely to be formed in open clusters if their formation channel is dynamical capture.
In this _letter_, we report an extremely high formation efficiency of Gaia BHs in open clusters. Moreover, our simulations show that they frequently host tertiary stars. The structure of this _letter_ is as follows. In section 2, we describe our numerical method to follow Gaia BH formation in open clusters and isolated binaries. In section 3, we show our simulation results. In section 4, we conclude our results, and discuss the implication of our results for Gaia BHs discovered recently.
## 2 Method
We follow the dynamical evolution of open clusters by means of a gravitational \(N\)-body code PETAR(Wang et al., 2020). The PETAR
code is based on the particle-tree and particle-particle algorithm (Oshino et al., 2011; Iwasawa et al., 2017) with the aid of FDPS (Iwasawa et al., 2016, 2020). Binary orbital evolution and close encounters are treated by the slow-down algorithmic regularization method (SDAR: Wang et al., 2020). The external potential is calculated by GALPY(Bovy, 2015).
The PETAR code is coupled with the BSE code (Hurley et al., 2002; Banerjee et al., 2020) to solve single and binary star evolutions. Although the PETAR code contains the BSEEMP code (Tanikawa et al., 2020; Wang et al., 2022) for metal-poor stars, we do not use it here, since we treat only metal-rich stars. We overview the model of single and binary star evolution. Single stars evolve on a track developed by Hurley et al. (2000) with stellar winds formulated by Belczynski et al. (2010). A binary can experience common envelope evolution modeled as the \(\alpha\) formalism (Webbink, 1984), where we adopt \(\alpha=3\) and \(\lambda\) of Claeys et al. (2014). Massive stars either cause supernovae or collapse to BHs at the end of their lives, and leave behind NSs and BHs with masses modeled by the rapid model (Fryer et al., 2012) with modification of pair instability supernovae modeled by Belczynski et al. (2016).
We generate 100 open clusters for each metallicity of \(Z=0.02\), 0.01, and 0.005. The total mass of each open cluster is \(10^{3}M_{\odot}\), and then the total mass of open clusters is \(10^{5}M_{\odot}\) for each metallicity. We adopt the fractal distribution of stars with a fractal dimension of 3.0 (Kupper et al., 2011). We set the half-mass radius to 0.8 pc. Each cluster orbits around Milky Way at 8 kpc with 220 km s\({}^{-1}\). The initial binary fraction is 100 %. The primary stars of binaries have Kroupa's initial mass function (Kroupa, 2001) with the minimum and maximum masses of 0.08 and \(150M_{\odot}\), respectively. The mass ratio, orbital period, and eccentricity distribution of binaries follow initial conditions from Sana et al. (2012). The initial conditions are generated using MCLUSTER(Kupper et al., 2011). We follow their evolutions over 500 Myr. Although the initial central number density seems to be high (\(\sim 10^{3}\) pc\({}^{-3}\)), the central number density of visible stars (defined as stars except BHs, neutron stars, and white dwarfs) drop to \(\sim 3\) pc\({}^{-3}\) at 500 Myr.
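As an illustration of how such initial conditions can be drawn, the sketch below samples primary masses from a two-segment Kroupa (2001) power law between \(0.08\) and \(150M_{\odot}\) by inverse-transform sampling. This is a generic Python sketch, not the MCLUSTER code actually used to generate the clusters.

```python
import numpy as np

def sample_kroupa_imf(n, m_min=0.08, m_break=0.5, m_max=150.0,
                      alpha_low=1.3, alpha_high=2.3, seed=None):
    """Draw n primary masses (Msun) from dN/dm ~ m^-1.3 (m < 0.5 Msun)
    and m^-2.3 (m > 0.5 Msun) by inverse-transform sampling."""
    rng = np.random.default_rng(seed)

    def power_law_integral(a, lo, hi):
        # integral of m^-a dm between lo and hi (a != 1)
        return (hi ** (1.0 - a) - lo ** (1.0 - a)) / (1.0 - a)

    def power_law_invert(a, lo, hi, x):
        # inverse CDF of a power law m^-a truncated to [lo, hi]
        return (lo ** (1.0 - a) + x * (hi ** (1.0 - a) - lo ** (1.0 - a))) ** (1.0 / (1.0 - a))

    # continuity of dN/dm at m_break fixes the relative weight of the two segments
    w_low = power_law_integral(alpha_low, m_min, m_break)
    w_high = m_break ** (alpha_high - alpha_low) * power_law_integral(alpha_high, m_break, m_max)
    p_low = w_low / (w_low + w_high)

    low = rng.random(n) < p_low
    x = rng.random(n)
    return np.where(low,
                    power_law_invert(alpha_low, m_min, m_break, x),
                    power_law_invert(alpha_high, m_break, m_max, x))

m = sample_kroupa_imf(100_000, seed=1)
print(f"mean mass ~ {m.mean():.2f} Msun; fraction of >20 Msun stars ~ {(m > 20).mean():.1e}")
```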
For comparison, we also calculated isolated binary evolution. We prepare two types of initial conditions for the calculations of isolated binary evolution. Like the binaries in our open clusters, the 1st type of initial conditions for isolated binary evolution is also adopted from Sana et al. (2012). We find that the 1st type of initial conditions never forms Gaia BHs, because the minimum mass ratio is 0.1: in our single star evolution model only \(\gtrsim 20M_{\odot}\) stars can leave behind BHs, so their companions are always \(\gtrsim 2M_{\odot}\), and the resulting BHs cannot have \(\lesssim 1M_{\odot}\) stars as companions. Thus, we prepare the 2nd type of initial conditions in which the minimum mass ratio is reduced to 0.0005, such that even \(150M_{\odot}\) stars can have \(0.08M_{\odot}\) stars as companions. We would like to remark that _the 2nd type of initial conditions is much more likely to form Gaia BHs than the initial conditions of open clusters unless something prohibits Gaia BHs from forming_. We follow their evolution over 30 Myr. By that time, all BH progenitors have evolved to BHs.
Throughout this _letter_, we compare properties of BH binaries for open clusters at 500 Myr, and for isolated binaries at 30 Myr. We underestimate the formation efficiency of BH binaries with secondary masses of \(\gtrsim 2.5M_{\odot}\), because \(\gtrsim 2.5M_{\odot}\) stars evolve to remnant objects within 500 Myr. However, this does not affect our purpose. Gaia BHs discovered so far have secondary stars of \(\sim 1M_{\odot}\).
Our simulations generate 59 BH binaries. Such BH binaries have various secondary masses, periods, and eccentricities. In this _letter_, we define Gaia BHs as BH binaries with secondary masses of \(\leq 1.1M_{\odot}\), periods of 100-2000 days, and eccentricities of 0.3-0.9 unless stated otherwise.
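For clarity, this selection can be written as a simple cut; the Python sketch below (with hypothetical argument names) is only meant to make the definition explicit, and the example values used in the call are purely illustrative.

```python
import numpy as np

def is_gaia_bh_like(m2, period, ecc):
    """Selection used here: secondary mass <= 1.1 Msun, period in 100-2000 days,
    eccentricity in 0.3-0.9 (inputs may be scalars or arrays)."""
    m2, period, ecc = map(np.asarray, (m2, period, ecc))
    return (m2 <= 1.1) & (period >= 100.0) & (period <= 2000.0) & (ecc >= 0.3) & (ecc <= 0.9)

# example: a 0.82 Msun companion with an 830 day period and eccentricity 0.5 passes the cut
print(is_gaia_bh_like(0.82, 8.3e2, 0.5))  # True
```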
## 3 Results
We find that one open cluster with \(Z=0.005\) contains a BH binary similar to Gaia BHs, hereafter called "Gaia BH-like binary". We summarize the properties of the Gaia BH-like binary in Table 1. Its BH mass is \(21.4M_{\odot}\). Its companion (hereafter, secondary) star is a main sequence (MS) star and has a mass of \(0.82M_{\odot}\). The binary period is \(8.3\times 10^{2}\) days, and its eccentricity oscillates from 0.3 to 0.8 due to the von Zeipel-Lidov-Kozai (ZLK) mechanism (von Zeipel, 1910; Lidov, 1962; Kozai, 1962). We find that the Gaia BH-like binary is the inner binary of a triple star system, and it is because of a tertiary star that we see ZLK oscillation of the Gaia BH-like binary. The tertiary star is a MS star and has \(1.59M_{\odot}\) at 500 Myr. The outer period and eccentricity are \(1.2\times 10^{6}\) days and 0.689, respectively.
Figure 1 shows the position and velocity of the Gaia BH-like binary at 500 Myr. We can see that the Gaia BH-like binary has escaped from the open cluster located at the coordinate origin. The Gaia BH-like binary will be long-lived after 500 Myr as long as it is not perturbed by the tertiary star (described later). This Gaia BH-like binary escapes from the open cluster through two-body relaxation or close binary interaction1. Looking at their position and velocity in Figure 1, the Gaia BH-like binary belongs to a tidal tail of the open cluster. This means that it will be identified as a Galactic disk component long after the open cluster has evaporated.
Footnote 1: Such a binary with massive objects can be ejected from an open cluster (Fujii & Portegies Zwart, 2011)
We show the time evolution of the Gaia BH-like binary and its progenitor in Figure 2, and illustrate the formation process of the Gaia BH-like binary in Figure 3. At the initial time, its BH progenitor has \(22.6M_{\odot}\) and a companion with \(9.32M_{\odot}\). The binary period and eccentricity are \(2.8\times 10^{3}\) days and 0.167, respectively. At 9.49 Myr, its BH progenitor evolves to a \(16.7M_{\odot}\) BH. At 21.5 Myr, this binary is perturbed by an open cluster star, excites its eccentricity to nearly 1, and merges. This merger leaves behind a \(21.4M_{\odot}\) BH. Until 188 Myr, this BH does not form any hard binary. Although soft binaries are occasionally formed, their binary periods are at least \(\sim 10^{7}\) days. Such soft binaries are quickly disrupted by perturbations of other stars. At 188 Myr, the BH captures an open cluster star, and form a relatively hard binary with a period of \(5.3\times 10^{5}\) days. After this, the BH companion is replaced with another star several times (although omitted in Figure 3). At 244 Myr the BH companion becomes a \(1.59M_{\odot}\) MS star to be finally the tertiary star. At 260 Myr, the BH companion is exchanged with a \(0.82M_{\odot}\) MS star to be finally the secondary star. The superseded companion is not fully ejected from the Gaia BH-like binary, and stays as the tertiary star. Soon after this
| Parameters | Values | Remarks |
| --- | --- | --- |
| Metallicity (\(Z\)) | 0.005 | |
| BH mass | \(21.4M_{\odot}\) | |
| Secondary mass | \(0.82M_{\odot}\) | MS star |
| Period | \(8.3\times 10^{2}\) days | |
| Eccentricity | 0.3-0.8 | Oscillating |
| Tertiary mass | \(1.59M_{\odot}\) | MS star |
| Outer period | \(1.2\times 10^{6}\) days | |
| Outer eccentricity | 0.689 | |
| Mutual inclination | 34-59 deg | Oscillating |

Table 1: Summary of the Gaia BH-like binary at 500 Myr.
Soon after this interaction, the Gaia BH-like binary, together with the tertiary star, escapes from the open cluster.
We can see in Figure 2 that the Gaia BH-like binary eccentricity and the mutual inclination between the orbital planes of the inner and outer binaries (hereafter, mutual inclination) oscillate after the Gaia BH-like binary escapes, due to the ZLK mechanism. The ZLK oscillation is modulated, and the modulation amplitude appears inconstant. This is because the cadence of the snapshots (4 Myr) is longer than the timescale of the ZLK oscillation. In order to capture the ZLK oscillation with a sufficiently high snapshot cadence, we recalculate the orbital evolution of the triple system. For this purpose, we extract the masses, positions, and velocities of the triple-system components at 400 Myr, and follow their evolution over 2 Gyr by means of the SDAR code (Wang et al., 2020), where the SDAR code is incorporated into the PETAR code to treat close binary and multiple star systems as described above. We approximate the triple system as completely isolated from other stars, since it escapes from the open cluster at 400 Myr. We do not take into account the tertiary star evolution to the red giant and white dwarf phases.
Figure 4 shows the eccentricity and mutual inclination of the Gaia BH-like binary. The cadence of the snapshots is 0.1 Myr, which is sufficiently high to show the ZLK oscillation.
Figure 1: Positions (top) and velocities (bottom) of the Gaia BH-like binary (star points) and other stars (dots) at 500 Myr. All the stars, including BHs, neutron stars, and white dwarfs, are plotted. The central number density of visible stars is \(\sim 3.4\) pc\({}^{-3}\).
Figure 3: Illustration of the Gaia BH-like binary formation. Objects indicated by “BH”, “2” and “3” are the BH, secondary and tertiary stars of the Gaia BH-like binary, respectively. Object “A” is the initial companion of the BH progenitor. Objects “B”, “C”, and “D” are perturbers of the BH and its companions. Although the Gaia BH-like binary experiences many interactions during 188-244 Myr, we omit them; thus, the object ejected at 244 Myr is not actually object “C”.
Figure 2: Time evolution of the Gaia BH-like binary and its progenitor. Secondary and tertiary stars are defined as the temporary companions of the BH and its progenitor at each time. We indicate the distance from the center, component masses, binary periods, binary eccentricities, and mutual inclination from top to bottom. These quantities are plotted every 4 Myr. The shaded regions show binary periods of \(100-2000\) days, which tend to be detected by _Gaia_.
The oscillation on a timescale of several Myr and the modulation on a timescale of several hundred Myr are due to the quadrupole-level and octupole-level interactions, respectively (Antognini, 2015). The modulation amplitude is constant, although it appears not to be constant in Figure 2 due to the low cadence of the snapshots (4 Myr). We can see that the Gaia BH-like binary eccentricity rises to at most 0.8, and its pericenter distance is reduced down to at least 0.97 au. Since this pericenter distance is much larger than the secondary radius, the Gaia BH-like binary will not merge. Note that the secondary star, a \(0.82M_{\odot}\) MS star, will not evolve to a giant star within the Hubble time. The tertiary star evolves to a red giant star and then a white dwarf over \(\sim 2\) Gyr. The red giant star will not fill the Roche lobe of the outer binary, because the pericenter distance of the outer binary is \(2.0\times 10^{2}\) au, much larger than the asymptotic giant branch radius of \(\sim 1\) au. The tertiary star gradually decreases its mass and its influence on the Gaia BH-like binary. The tertiary star evolution is therefore not expected to trigger a merger or disruption of the Gaia BH-like binary.
In Figure 5, we compare the formation efficiency of BH binaries in open clusters to that of the 1st and 2nd types of isolated binary evolution. We focus only on binary periods of \(100-2000\) days, which tend to be detected by _Gaia_2. The formation efficiency of all BH binaries in open clusters is much smaller than for both types of isolated binaries. However, when we limit BH binaries to Gaia BHs, i.e. those with light companions (\(\leq 1.1M_{\odot}\)) and moderately high eccentricities (\(0.3-0.9\)), the ranking is reversed: the efficiency is \(\sim 10^{-5}M_{\odot}^{-1}\) in open clusters, while it is \(\sim 10^{-8}M_{\odot}^{-1}\) even for the 2nd type of isolated binaries. Thus, the formation efficiency in open clusters is larger by 3 orders of magnitude.
Footnote 2: The binary periods of Gaia BH1 and BH2 are \(\sim 180\) day (El-Badry et al., 2023b) and \(\sim 1300\) day (El-Badry et al., 2023a; Tanikawa et al., 2022).
We again emphasize that the formation efficiency of Gaia BHs in open clusters is much higher than in isolated binaries, even with respect to the initial conditions. The numbers of initial binaries with \(>20M_{\odot}\) and \(<1.1M_{\odot}\) stars are 0, 0, and \(5.4\times 10^{-5}\) per \(1M_{\odot}\) in open clusters and the 1st and 2nd types of isolated binaries, respectively. If we limit their orbital periods to \(100-2000\) days, the number is \(9.2\times 10^{-6}\) per \(1M_{\odot}\) in the 2nd type of isolated binaries. Note that \(>20M_{\odot}\) stars evolve to BHs long before 500 Myr, and \(<1.1M_{\odot}\) stars are still MS stars at 500 Myr if they evolve as single stars; such binaries can potentially be progenitors of Gaia BHs. On the other hand, the formation efficiencies of Gaia BHs are \(\sim 10^{-5}\), 0, and \(\sim 10^{-8}M_{\odot}^{-1}\) in open clusters and the 1st and 2nd types of isolated binaries, respectively. Open clusters are thus more disadvantaged in their initial conditions for forming Gaia BHs than the 2nd type of isolated binaries, yet they form many more Gaia BHs.
Figure 6 presents a different perspective from Figure 5. From the top panel, we can confirm that, for secondary masses of less than \(1.1M_{\odot}\), the formation efficiency in open clusters is larger than in isolated binaries by 3 orders of magnitude for \(Z=0.005\). If BH binaries with a secondary mass of \(\sim 2M_{\odot}\) can be regarded as Gaia BHs, there is another Gaia BH in a \(Z=0.01\) open cluster (see the middle panel in Figure 6). Such a BH binary is also formed in open clusters more easily than in isolated binaries by 3 orders of magnitude. Note that we find no tertiary star for this BH binary.
The mass distribution in Figure 6 may show the sharp peak at \(\sim 1M_{\odot}\) for the open clusters with \(Z=0.005\) due to the small sample size. If we increase the number of open cluster simulations to much more than 100, the sharp peak is likely to be smeared out around \(\sim 1M_{\odot}\). Thus, some of these secondaries may exceed \(1M_{\odot}\), and may not be categorized as Gaia BHs. Nevertheless, this should have a minor effect on the formation efficiency of Gaia BHs in open clusters.
The low formation efficiency in isolated binaries is consistent with the argument of El-Badry et al. (2023b). Here, we review the reason.
Figure 4: Time evolution of the Gaia BH-like binary eccentricity (top) and mutual inclination (bottom) over 2 Gyr from 400 Myr. The quantities are plotted every 0.1 Myr.
Figure 5: Formation efficiency of BH binaries per \(M_{\odot}\) for each bin in open clusters (top), the 1st type of isolated binaries (middle), and the 2nd type of isolated binaries (bottom). The stellar type of the companions is limited to MS stars. The grey, blue, and red histograms indicate all BH binaries, BH binaries with \(\lesssim 1.1M_{\odot}\) companions, and those with \(\lesssim 1.1M_{\odot}\) companions and eccentricities from 0.3 to 0.9 (i.e. Gaia BHs), respectively. The shaded regions show binary periods of \(100-2000\) days, which tend to be detected by _Gaia_.
Let us consider a binary with a BH progenitor (\(\gtrsim 20M_{\odot}\)) and a \(\sim 1M_{\odot}\) star. Its period is \(\sim 10^{3}\) days, and thus its separation is \(\sim 10^{3}R_{\odot}\). When the BH progenitor evolves to a giant star, it fills its Roche lobe, and starts Roche lobe overflow. The Roche lobe overflow is typically unstable, since the donor star (here, the BH progenitor) is much more massive than the accretor (here, the \(1M_{\odot}\) star). Thus, the binary experiences common envelope evolution. The common envelope evolution ends with a merger of the BH progenitor core and the \(\sim 1M_{\odot}\) star, because the gravitational binding energy between the core and the star is too small to fully eject the common envelope. Even if the binary survives the common envelope evolution, its eccentricity is reduced to \(\lesssim 0.2\) (e.g. Kruckow et al., 2021; Trani et al., 2022). The BH natal kick could excite the eccentricity if the kick velocity were sufficiently high; however, the barycenter velocities of the Gaia BHs are too small to allow such high BH natal kick velocities.
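As a quick check of the separation quoted at the start of this argument, Kepler's third law indeed gives \(\sim 10^{3}R_{\odot}\) for a \(\sim 10^{3}\)-day orbit. The short sketch below uses illustrative round-number masses (a \(20M_{\odot}\) progenitor and a \(1M_{\odot}\) companion), not values from our simulations.

```python
# Kepler's third law in solar units: a[au] = (M_tot[Msun] * P[yr]^2)**(1/3)
M_tot = 20.0 + 1.0           # assumed total mass in solar masses
P_days = 1.0e3               # orbital period in days
P_yr = P_days / 365.25

a_au = (M_tot * P_yr**2) ** (1.0 / 3.0)   # semi-major axis in au
a_rsun = a_au * 215.032                    # 1 au is ~215 solar radii

print(f"a ~ {a_au:.1f} au ~ {a_rsun:.0f} Rsun")   # ~ 5.4 au ~ 1.2e3 Rsun
```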
In contrast to isolated binaries, the formation of Gaia BHs in open clusters avoids all the difficulties described above. As illustrated in Figure 3, Gaia BHs can be formed through dynamical capture after the BH progenitors have evolved to BHs in open clusters. If we dare to mention a difficulty of Gaia BH formation in open clusters, it is that BHs may rarely capture \(\sim 1M_{\odot}\) stars: generally, BHs capture more massive stars more easily, and there are many \(\gtrsim 1M_{\odot}\) MS stars and white dwarfs in open clusters.
Taking into account only the Gaia BH-like binary, we estimate the number of Gaia BHs formed in open clusters over the whole Milky Way as follows:
\[N_{\mathrm{GaiaBH,MW}}\sim 1.6\times 10^{4}\left(\frac{\eta}{10^{-5}\,M_{\odot}^{-1}}\right)\left(\frac{M_{\mathrm{MW}}}{6.1\times 10^{10}\,M_{\odot}}\right)\left(\frac{f_{Z=0.005}}{0.26}\right)\left(\frac{f_{\mathrm{cluster}}}{0.1}\right), \tag{1}\]
where \(\eta\) is the formation efficiency obtained from our simulation results, \(M_{\mathrm{MW}}\) is the Milky Way mass, and \(f_{Z=0.005}\) is the mass fraction of \(Z=0.005\) stars in the Milky Way, calculated by integrating the stellar mass between \(Z=0.003\) and \(0.007\) in a Milky Way model of Wagg et al. (2022) and Shikauchi et al. (2023). The fraction \(f_{\mathrm{cluster}}\) is the mass fraction of stars formed in open clusters (Misiriotis et al., 2006; Piskunov et al., 2007). We assume that the lifetime of Gaia BHs is larger than the Hubble time because of the small total mass of the Gaia BH-like binary. Since the size of the Milky Way disk is \(\sim 10\) kpc, the number of Gaia BHs can be \(\sim 160\) within \(\sim 1\) kpc. We expect that the formation efficiency in open clusters will explain the number of Gaia BHs, even if the number of known Gaia BHs (currently, 2) grows in the future, as suggested by the lists of BH binary candidates (Andrews et al., 2022; Shahaf et al., 2023).
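For reference, the prefactor of Equation (1) and the local count quoted above follow from elementary arithmetic; the snippet below simply re-evaluates the formula at the fiducial values and rescales the all-sky number to a \(\sim 1\) kpc neighbourhood by the ratio of disk areas.

```python
eta = 1e-5          # Gaia BH formation efficiency per Msun (open clusters, Z = 0.005)
M_MW = 6.1e10       # Milky Way mass in Msun
f_Z = 0.26          # mass fraction of Z = 0.005 stars
f_cluster = 0.1     # mass fraction of stars formed in open clusters

N_MW = eta * M_MW * f_Z * f_cluster
print(f"N_GaiaBH,MW ~ {N_MW:.2g}")          # ~ 1.6e4

# Scale to a ~1 kpc neighbourhood assuming a uniform ~10 kpc disk.
N_local = N_MW * (1.0 / 10.0) ** 2
print(f"N_GaiaBH(<1 kpc) ~ {N_local:.0f}")  # ~ 160
```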
## 4 Conclusion and Discussion
Our main conclusions in this _letter_ are as follows.
1. The formation efficiency of Gaia BHs is higher in open clusters than in isolated binaries by 3 orders of magnitude.
2. Gaia BHs formed in open clusters can be members of multiple star systems.
We find a Gaia BH-like binary in our simulations despite the small total mass of the simulated open clusters. The Gaia BH-like binary shares many properties with the Gaia BHs. The visible star has \(\sim 1M_{\odot}\), and the binary period and eccentricity are \(10^{2}\)-\(10^{3}\) days and \(\sim 0.5\), respectively. Its barycenter orbit indicates that it is a Galactic disk component. Since the visible star is dynamically captured by the BH after the BH formation, it should not be polluted by supernova ejecta from the BH, which is consistent with the chemical abundance pattern of Gaia BH2 (El-Badry et al., 2023). The visible star has subsolar metallicity, similar to Gaia BH1. The number of such BH binaries is large enough to explain the presence of the Gaia BHs. The Gaia BH-like binary also differs from the Gaia BHs, especially Gaia BH2, in some respects: its visible star has subsolar metallicity and does not evolve to a red giant star within the Hubble time, while Gaia BH2's visible star has solar metallicity and has already evolved to a red giant star. However, if we simulate the dynamical evolution of many more open clusters, we are likely to generate BH binaries similar to Gaia BH2.
We expect that Gaia BHs frequently have tertiary stars if they are formed in open clusters. The frequency of the presence of tertiary stars is highly uncertain. If we estimate the frequency from our small sample, it is 100 %, because the Gaia BH-like binary, which is the only Gaia BH in our simulations, has a tertiary star. Even if we include the BH binary with a secondary mass of \(\sim 2M_{\odot}\), this frequency does not change much because of its short lifetime. Nevertheless, we recognize that this is a rough estimate. In order to determine the frequency accurately, we need to perform many more cluster simulations and acquire a larger number of samples. On the other hand, there are no reports of such tertiary stars in the Gaia BHs. The reason may simply be that no one has searched for such tertiary stars (but see El-Badry et al., 2021).
Figure 6: Formation efficiency of BH binaries per \(M_{\odot}\) for each bin in open clusters, and the 1st and 2nd types of isolated binaries as a function of secondary masses. The top, middle, and bottom panels indicate \(Z=0.005\), 0.01, and 0.02, respectively. All the secondary stars are MS stars, and all the binary periods and eccentricities are 100-2000 days and 0.3-0.9, respectively.
Moreover, the tertiary star of the Gaia BH-like binary becomes a white dwarf after \(\sim 2\) Gyr, and may be too faint to be detected by _Gaia_ at the present day. Thus, we are not always able to discover such tertiary stars even if they exist.
As for higher-order multiplicity, our results show that a visible binary can orbit a BH; in other words, the BH itself is the tertiary component of the multiple star system. We observe several such binaries orbiting BHs in our simulations. However, their outer binary periods are \(>10^{5}\) days, so they cannot be discovered by _Gaia_, since these periods are much larger than _Gaia_'s operation duration of a few \(10^{3}\) days. We may generate such systems detectable by _Gaia_ if we further increase the number of open cluster simulations.
Only Shikauchi et al. (2020) have previously investigated Gaia BHs formed in open clusters, although many studies have investigated Gaia BHs formed in isolated binaries (Mashian & Loeb, 2017; Breivik et al., 2017; Yamaguchi et al., 2018; Kinugawa & Yamaguchi, 2018; Yalinewich et al., 2018; Shao & Li, 2019; Andrews et al., 2019; Wiktorowicz et al., 2020; Chawla et al., 2022; Shikauchi et al., 2022, 2023). They did not find BH binaries with \(\sim 1M_{\odot}\) MS stars, even though they treat open clusters with \(9\times 10^{5}M_{\odot}\) in total, 3 times larger than those in this _letter_. This discrepancy can be explained by the different initial conditions of the open clusters in their study and ours. First, their initial binary fraction is 0 %, while ours is 100 %. Second, their open clusters are initially in dynamical equilibrium, while ours are not; in our open clusters, many dynamical interactions tend to happen. Third, their open clusters have only solar metallicity.
In the near future, we will perform a larger number of open cluster simulations than in this _letter_. These open clusters will have different masses and densities, and will be placed in different tidal fields. This will reveal the formation efficiency of Gaia BHs in open clusters more accurately. We will analyze the distributions of BH and visible star masses, binary periods, eccentricities, and the frequency of the presence of tertiary stars. It is also important to obtain the distributions of the tertiary star masses, and of the outer binary periods and eccentricities. If an outer binary has a short enough period to cause ZLK oscillation of its inner BH binary, we may observe an eccentricity variability of the BH binary (Hayashi & Suto, 2020). We may also see visible binaries orbiting BHs with periods of 100-2000 days; the presence of such binaries may be indicated by differences between the spectroscopic and astrometric mass functions of each binary (Tanikawa et al., 2022). These distributions should be a clue to elucidate the formation process of Gaia BHs.
This _letter_ indicates two important possibilities. First, we do not need to rebuild the theory of common envelope evolution, which has been carefully constructed to explain several types of compact binaries, such as double BHs (e.g. Belczynski et al., 2016). Second, if Gaia BHs are dominantly formed in open clusters, they can be a useful probe of open cluster dynamics.
## Acknowledgments
This research could not be accomplished without the support of Grants-in-Aid for Scientific Research (17H06360, 19K03907) from the Japan Society for the Promotion of Science. M.F. is supported by The University of Tokyo Excellent Young Researcher Program. S.C. is supported by the Fulbright U.S. Student Program, which is sponsored by the U.S. Department of State and the Japan-U.S. Educational Commission. Its contents are solely the responsibility of the author and do not necessarily represent the official views of the Fulbright Program, the Government of the United States, or the Japan-U.S. Educational Commission. M.S. thanks the support of Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists, of the Forefront Physics and Mathematics Program to Drive Transformation (FoPM), a World-leading Innovative Graduate Study (WINGS) Program of the University of Tokyo, and of the JSPS Overseas Challenge Program for Young Researchers. L.W. thanks the support from the one-hundred-talent project of Sun Yat-sen University, the Fundamental Research Funds for the Central Universities, Sun Yat-sen University (22hydd09), and the National Natural Science Foundation of China through grants 12073090 and 12233013.
## Data Availability
Results will be shared on reasonable request to authors. The data is generated by the software PETAR and SDAR, which are available in GitHub at [https://github.com/lwang-astro/PeTar](https://github.com/lwang-astro/PeTar) and [https://github.com/lwang-astro/SDAR](https://github.com/lwang-astro/SDAR), respectively. The initial conditions of star cluster models are generated by the software MCLUSTER (Kupper et al., 2011), which is available in GitHub at [https://github.com/lwang-astro/mcluster](https://github.com/lwang-astro/mcluster).
|
2303.15550 | Randomized rounding algorithms for large scale unsplittable flow
problems | Unsplittable flow problems cover a wide range of telecommunication and
transportation problems and their efficient resolution is key to a number of
applications. In this work, we study algorithms that can scale up to large
graphs and important numbers of commodities. We present and analyze in detail a
heuristic based on the linear relaxation of the problem and randomized
rounding. We provide empirical evidence that this approach is competitive with
state-of-the-art resolution methods either by its scaling performance or by the
quality of its solutions. We provide a variation of the heuristic which has the
same approximation factor as the state-of-the-art approximation algorithm. We
also derive a tighter analysis for the approximation factor of both the
variation and the state-of-the-art algorithm. We introduce a new objective
function for the unsplittable flow problem and discuss its differences with the
classical congestion objective function. Finally, we discuss the gap in
practical performance and theoretical guarantees between all the aforementioned
algorithms. | François Lamothe, Emmanuel Rachelson, Alain Haït, Cedric Baudoin, Jean-Baptiste Dupe | 2023-03-27T19:05:38Z | http://arxiv.org/abs/2303.15550v1 | # Randomized rounding algorithms for large scale unsplittable flow problems
###### Abstract
Unsplittable flow problems cover a wide range of telecommunication and transportation problems and their efficient resolution is key to a number of applications. In this work, we study algorithms that can scale up to large graphs and important numbers of commodities. We present and analyze in detail a heuristic based on the linear relaxation of the problem and randomized rounding. We provide empirical evidence that this approach is competitive with state-of-the-art resolution methods either by its scaling performance or by the quality of its solutions. We provide a variation of the heuristic which has the same approximation factor as the state-of-the-art approximation algorithm. We also derive a tighter analysis for the approximation factor of both the variation and the state-of-the-art algorithm. We introduce a new objective function for the unsplittable flow problem and discuss its differences with the classical congestion objective function. Finally, we discuss the gap in practical performance and theoretical guarantees between all the aforementioned algorithms.
Keywords: Unsplittable flows, randomized rounding, heuristic, approximation algorithm
## Declarations
This work was partially funded by Thales Alenia Space and made in collaboration with several of its members. This work was partially funded by the CNES. Several authors are academically related to ISAE-SUPAERO.
The authors declare that they have no conflict of interest. The datasets and the code used in the experimental section of this work are accessible at [https://github.com/SuReLU/randomized_rounding_paper_code](https://github.com/SuReLU/randomized_rounding_paper_code).
## 1 Introduction
The unsplittable flow problem is an extensively studied variant of the classical maximum flow problem. In this problem, one is given a directed or undirected graph, together with capacities on its arcs. A family of commodities, each composed of an origin, a destination, and a demand, is also provided. Each commodity has to route its demand from its origin to its destination through a unique path. The routing must ensure that capacities on the arcs are not exceeded by the flow of the commodities, or at least minimize the violation of the capacities.
This problem is NP-hard as it contains several NP-hard problems as subcases. When there are only two nodes linked by one arc, the problem is equivalent to the knapsack problem. When all demands and capacities are 1, the problem is equivalent to the edge-disjoint paths problem.
This problem has various applications, especially in telecommunication networks (_e.g._ optical networks, telecommunication satellites) and goods transportation. In these applications, large-scale instances appear with up to 500 nodes, 2000 arcs, and 150 000 commodities. However, only a few methods in the literature can scale to such large instances, such as the approximation algorithm of Raghavan and Tompson (1987) and some meta-heuristics tuned to have small computing times. The algorithm presented by Raghavan and Tompson (1987) uses randomized rounding to compute a solution to the unsplittable flow problem. Even though this \(O\left(\frac{\ln m}{\ln\ln m}\right)\)-approximation algorithm theoretically has the best achievable approximation factor, the solutions it yields are often far from optimal.
That is why, in this work, we focus on an algorithm that can scale to large instances while giving good practical results. This algorithm is a heuristic extension of the randomized rounding algorithm of Raghavan and Tompson (1987). As such, it also uses randomized rounding on the linear relaxation of the unsplittable problem to create an unsplittable solution. This algorithm alternates randomized rounding steps and resolutions of the linear relaxation and will thus be called, in this work, the Sequential Randomized Rounding algorithm (SRR). This heuristic is also an extension of the algorithm proposed
by Coudert and Rivano (2002) for which no complete proof of the approximation factor was given. Compared to the algorithm of Coudert and Rivano (2002), the SRR heuristic offers more flexibility on the number of linear relaxation resolutions and more importantly, takes advantage of the fact that commodities might have different demand levels. We also describe a variation of the SRR heuristic for which we prove the same approximation guarantees as the algorithm of Raghavan and Tompson (1987). This variation will be called the Constrained Sequential Randomized Rounding algorithm (CSRR). Moreover, we tighten the analysis of both approximation algorithms and prove that they achieve a \(O\left(\frac{\gamma\ln m}{\ln(\gamma\ln m)}\right)\)-approximation factor where \(\gamma\) is a parameter that is small when the commodities demands are small compared to the capacities of the arcs. Finally, we experimentally show that the SRR algorithm scales to large instances. Furthermore, its practical results on large datasets outperform other tested methods.
This paper is structured as follows. In Section 2 we describe the notations used together with several Mixed Integer Linear Programs (MILP) for the unsplittable flow problem. Related work is presented in Section 3. Section 4 presents the SRR heuristic and its complexity analysis. Section 5 describes the CSRR algorithm and provides the analysis leading to the \(O\left(\frac{\gamma\ln m}{\ln(\gamma\ln m)}\right)\)-approximation factor. In Section 6, we provide experimental results that compare the empirical quality of the various algorithms presented. In Section 7, we discuss different properties of the SRR heuristic and the CSRR algorithm. Finally, we conclude and give perspectives in Section 8.
## 2 The unsplittable flow problem
Throughout this paper, we will use the following notations:
* \(G=(V,E)\) is a directed or undirected graph, with \(V\) the set of nodes and \(E\) the set of arcs
* \(L=(o_{k},d_{k},D_{k})_{k\in K}\) is a set of commodities defined by their origin, destination and demand.
* \((c_{e})_{e\in E}\) are capacities on the arcs
We also use the Kronecker notation, \(\delta_{x}^{y}\) equals 1 if \(x=y\) and 0 otherwise. The sets of arcs incoming and outgoing of node \(v\) will be noted \(E^{-}(v)\) and \(E^{+}(v)\) respectively. Finally, \(c_{min}=\min_{e\in E}c_{e}\) and \(D_{max}=\max_{k\in K}D_{k}\) are the smallest capacity and the largest demand.
### Objective functions
Four objective functions for the unsplittable flow problems can be found in the literature: maximizing the served demand (Kolman, 2003), minimizing the cost (Barnhart et al., 2000), minimizing the congestion (Martens and Skutella,
2006), minimizing the number of necessary routing rounds (Aumann and Rabani, 1995). In this work, we focus on minimizing the congestion.
The congestion of a graph is the smallest number \(\Delta\) by which it is necessary to multiply all the capacities in order to route all the commodities (Martens and Skutella, 2006). The congestion of an arc is the ratio of the flow on this arc to its capacity. This metric emphasizes low capacity arcs. Besides, minimizing the congestion puts no restrictions on the flow going through the arcs that do not have a maximal congestion. In particular, it induces no incentive to minimize the congestion on those arcs. This becomes problematic when an arc is largely more congested in every solution than any other arc because it lifts all restrictions for the other arcs. To prevent this, we introduce a new criterion to minimize the violation of the capacities of the arcs. We use the term _overflow_ to describe the quantity of flow \(\Delta_{e}\) that exceeds the capacity of an arc \(e\). The overflow of an arc is always non-negative. Our new criterion is to minimize the sum of the overflows \(\sum_{e\in E}\Delta_{e}\). Note that the congestion \(\Delta\) is not the maximum overflow over all the arcs.
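To make the difference between the two criteria concrete, the following toy computation (with made-up capacities and flows, unrelated to any instance used in this work) evaluates both metrics for a fixed routing.

```python
# Toy instance: arc -> capacity, and the total flow routed on each arc.
capacity = {"a": 10.0, "b": 4.0, "c": 8.0}
flow     = {"a": 12.0, "b": 6.0, "c": 5.0}

# Congestion: smallest factor by which all capacities must be multiplied
# to accommodate the flow, i.e. the maximum flow/capacity ratio.
congestion = max(flow[e] / capacity[e] for e in capacity)

# Overflow sum: total amount of flow exceeding the capacities.
overflow_sum = sum(max(flow[e] - capacity[e], 0.0) for e in capacity)

print(congestion)    # 1.5, driven by the small-capacity arc "b"
print(overflow_sum)  # 2.0 + 2.0 = 4.0, also penalizes the overloaded arc "a"
```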
In the following sections, we present two equivalent Mixed Integer Linear Programs (MILP) for the unsplittable flow problem.
### Arc-node formulation
The arc-node formulation is compact as it has a polynomial number of variables and constraints in the number of commodities, nodes, and arcs. It can thus be solved directly in a MILP solver for small instances. This formulation is characterized by the flow conservation constraints which ensure the structural property of flows. The objective function presented is the overflow sum. The meaning of the variables in this formulation is the following:
* \(f_{ek}\) indicates whether commodity \(k\) pushes flow on arc \(e\),
* \(\Delta_{e}\) represents the overflow on arc \(e\).
The unsplittable flow problem is then:
\[\min_{f_{ek},\,\Delta_{e}}\ \sum_{e\in E}\Delta_{e} \tag{1a}\]
such that
\[\sum_{e\in E^{+}(v)}f_{ek}-\sum_{e\in E^{-}(v)}f_{ek}=\delta_{v}^{o_{k}}-\delta_{v}^{d_{k}}\qquad\forall k\in K,\ \forall v\in V, \tag{1b}\]
\[\sum_{k\in K}f_{ek}D_{k}\leq c_{e}+\Delta_{e}\qquad\forall e\in E, \tag{1c}\]
\[f_{ek}\in\{0,1\},\ \Delta_{e}\in\mathbb{R}^{+}\qquad\forall k\in K,\ \forall e\in E. \tag{1d}\]
Equation (1b) corresponds to the flow conservation constraints. It ensures that, for each commodity and every node except the origin and the destination of the commodity, the same amount of flow of the commodity goes in and out
of the node. Equation (1c) corresponds to the capacity constraints. It ensures that the capacity of an arc is respected or that the overflow is recorded in \(\Delta_{e}\). The fact that \(f_{ek}\in\{0,1\}\) ensures that the flow is unsplittable.
One can create an arc-node congestion formulation by replacing \(c_{e}+\Delta_{e}\) by \(c_{e}\Delta\) in Equation (1c) and minimizing over \(\Delta\) instead of \(\sum_{e}\Delta_{e}\). Note that the \(\Delta\) variable is common to all arcs while there was one variable \(\Delta_{e}\) for each arc.
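As an illustration of how compact formulation (1) is in practice, it can be written in a few lines with an off-the-shelf MILP modeller. The sketch below uses the open-source PuLP package with its default CBC solver on a made-up four-arc instance; the experiments in this work rely on an arc-node model solved with Gurobi, so this is only a minimal, hedged example.

```python
import pulp

# Toy directed graph (arcs with capacities) and commodities (origin, destination, demand).
arcs = [("s", "u", 5), ("s", "v", 5), ("u", "t", 5), ("v", "t", 5)]
commodities = [("s", "t", 4), ("s", "t", 3)]
nodes = {n for (u, v, _) in arcs for n in (u, v)}

prob = pulp.LpProblem("unsplittable_flow_arc_node", pulp.LpMinimize)
f = {(i, k): pulp.LpVariable(f"f_{i}_{k}", cat="Binary")
     for i in range(len(arcs)) for k in range(len(commodities))}
delta = {i: pulp.LpVariable(f"delta_{i}", lowBound=0) for i in range(len(arcs))}

prob += pulp.lpSum(delta.values())  # (1a): minimize the overflow sum

for k, (o, d, _) in enumerate(commodities):       # (1b): flow conservation
    for v in nodes:
        out_flow = pulp.lpSum(f[i, k] for i, (u, w, _) in enumerate(arcs) if u == v)
        in_flow = pulp.lpSum(f[i, k] for i, (u, w, _) in enumerate(arcs) if w == v)
        prob += out_flow - in_flow == (1 if v == o else 0) - (1 if v == d else 0)

for i, (_, _, cap) in enumerate(arcs):            # (1c): capacities with overflow variables
    prob += pulp.lpSum(f[i, k] * commodities[k][2]
                       for k in range(len(commodities))) <= cap + delta[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total overflow:", pulp.value(prob.objective))  # 0: the two commodities use disjoint paths
```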
### Path formulation
In the path formulation, the flow conservation constraints are unnecessary. The variables represent paths so these constraints are always verified. However, there is an exponential number of variables (in the number of commodities, nodes, and arcs) so the formulation must be solved through a particular MILP technique named Branch and Price (Barnhart et al., 2000). The objective function presented is the overflow sum. The meaning of the variables in this formulation is the following:
* \(x_{pk}\) indicates whether commodity \(k\) uses path \(p\) to push its flow,
* \(\Delta_{e}\) represents the overflow on arc \(e\).
The unsplittable flow problem is then:
\[\min_{x_{pk},\,\Delta_{e}}\ \sum_{e\in E}\Delta_{e} \tag{2a}\]
such that
\[\sum_{p\in P_{k}}x_{pk}=1\qquad\forall k\in K, \tag{2b}\]
\[\sum_{k\in K}\sum_{p\in P_{k}\,|\,e\in p}x_{pk}D_{k}\leq c_{e}+\Delta_{e}\qquad\forall e\in E, \tag{2c}\]
\[x_{pk}\in\{0,1\},\ \Delta_{e}\in\mathbb{R}^{+}\qquad\forall p\in\bigcup_{k}P_{k},\ \forall k\in K,\ \forall e\in E. \tag{2d}\]
Here, \(P_{k}\) denotes the set of all paths usable by commodity \(k\). Equation (2b) ensures that exactly one path is chosen for each commodity. Equation (2c) corresponds to the capacity constraints. It ensures that the capacity of an arc is respected or that the overflow is recorded in \(\Delta_{e}\). The fact that \(x_{pk}\in\{0,1\}\) ensures that the flow is unsplittable.
## 3 Related work
In this section, we review important solution approaches to the unsplittable flow problem present in the literature. These works are grouped into three subsections: exact methods, approximation algorithms, and meta-heuristics. A
fourth sub-section is dedicated to the linear relaxation of the unsplittable flow problem (the multi-commodity flow problem) whose resolution is paramount to the resolution of the unsplittable flow problem.
### Exact methods
Barnhart et al. (2000) presented a Branch and Price and Cut procedure applied to a path formulation with a cost minimization objective function. Most subsequent works use this baseline as a comparison. A major contribution of their work is their branching strategy. Unlike straightforward branching strategies for this problem, theirs keeps intact the structure of the pricing problem throughout the branching procedure. For a commodity in a non-integer solution, the divergence node is the first node where the flow of the commodity splits. The outgoing arcs of the divergence node are divided into two disjoint subsets \(E_{1}\) and \(E_{2}\). Each set must contain at least one of the arcs used by the commodity. The branching rule is: either forbid the use of \(E_{1}\) or forbid the use of \(E_{2}\). In both cases, the previous non-integer solution is cut from the problem and forbidding sets of arcs keeps the structure of the pricing problem intact. They also included cuts to strengthen the linear relaxation. These cuts are lifted cover inequalities of the capacity constraints.
Park et al. (2003) mixed the path formulation and a knapsack formulation (which is not presented in this work) to derive a new linear formulation of the problem. The linear relaxation of this formulation yields a stronger lower bound, which in turn decreases the time needed to complete the branching procedure. They compared different branching rules and report that the one of Barnhart et al. (2000) produces much better results. They present computational results only for this rule.
Belaidouni and Ben-Ameur (2007) presented a cutting plane method based on super-additive functions to get strong cuts for their Branch and Price method. It appears that, on small instances, the addition of their cuts yields integer solutions without using a Branch and Bound procedure. Results are compared with those of Barnhart et al. (2000) and large improvements are reported.
Overall, the best results can be found in the articles of Belaidouni and Ben-Ameur (2007) and Park et al. (2003). Belaidouni and Ben-Ameur (2007) compared their results with those of Barnhart et al. (2000) and solved all their instances (at most 30 nodes, 60 arcs, 100 commodities) in less than 10 seconds. Park et al. (2003) did not compare their results with previous works but solved instances of the same magnitude (at most 30 nodes, 80 arcs, 100 commodities) in less than 15 seconds. Note that results were only given for small instances in these approaches.
Other earlier works have been reported in (Parker and Ryan, 1993; Alvelos and De Carvalho, 2003; Park et al., 1996).
### Approximation algorithms and heuristics
As the unsplittable flow problem is NP-hard, a lot of attention has been given to approximation algorithms and heuristics. In particular, the maximum served demand objective has been extensively studied. We refer to the Handbook of approximation algorithms (Group and Gonzalez, 2020) for a detailed survey on approximation algorithms in the context of unsplittable flows. We recall here some works related to the minimum congestion objective.
Approximation algorithms are given with a factor of approximation \(\lambda\). Let \(\Gamma\) be the objective function of the minimization problem at hand. Solutions generated by a \(\lambda\)-approximation algorithm verify the following inequality:
\[\Gamma(S^{*})\leq\Gamma(S)\leq\lambda\Gamma(S^{*}),\]
where \(\Gamma(S)\) and \(\Gamma(S^{*})\) are respectively the value of the produced solution and the value of the optimal solution. This guarantees that the ratio between the value of the produced solution and the value of the optimal solution is not too high. When considering approximation algorithms, more attention must be paid to the objective function being optimized. The literature uses four different objective functions: maximizing the served demand, minimizing the congestion, minimizing the number of rounds, and minimizing the cost. Approximation hardness results demonstrate that none of these objective functions admits a constant-factor approximation algorithm (Group and Gonzalez, 2020).
In the congestion minimization context, we seek the smallest number by which it is necessary to multiply all the capacities in order to fit all the commodities. The best-known approximation algorithm for congestion is a randomized rounding method introduced by Raghavan and Tompson (1987) which we shall call the RR algorithm in this work. The method proceeds in two steps. First, a solution of the linear relaxation of the problem is computed. Each commodity is allowed to use multiple paths in this solution. The proportion of flow for each commodity on each path is \(((x_{pk})_{p\in P_{k}})_{k\in K}\). Then a path is selected for each commodity. Path \(p\) is selected in \(P_{k}\) with probability \(x_{pk}\). Each commodity is then assigned to the selected path to create an unsplittable solution. This procedure produces, with arbitrarily high probability, an unsplittable solution whose congestion is \(O\left(\frac{\ln|E|}{\ln\ln|E|}\right)\) larger than the one of the fractional solution. Their algorithm is thus a \(O\left(\frac{\ln|E|}{\ln\ln|E|}\right)\)-approximation algorithm that works for directed and undirected graphs. The randomized rounding process can be derandomized using the method of conditional probabilities (Raghavan, 1988). Chuzhoy et al. (2007) showed a tight \(\Omega(\frac{\ln|V|}{\ln\ln|V|})\) bound on directed graphs, assuming \(NP\nsubseteq BPTIME(|V|^{O(\ln\ln|V|)})\). Andrews et al. (2010) showed that minimizing congestion for an unsplittable flow in an undirected graph is hard to approximate within \(\Omega(\ln\ln|V|/\ln\ln\ln|V|)\), assuming \(NP\nsubseteq ZPTIME(|V|^{polylog(|V|)})\). A \((|K|+2)\)-approximation algorithm is presented in (Asano, 2000). The reported results show that in practice this algorithm gives results comparable to classical randomized rounding.
In the context of the maximum demand objective function, a parameter \(\frac{D_{max}}{c_{min}}\) is introduced. This parameter plays an important role for this objective function. Indeed, when this parameter is upper-bounded by a small constant, several works reported stronger approximation results for their algorithms (Chakrabarti et al., 2007; Shepherd and Vetta, 2015; Azar and Regev, 2006). However, we did not find similar results for the congestion objective function.
A few heuristics have been proposed in previous works. They are either greedy or Linear programming (LP) based heuristics. Coudert and Rivano (2002) introduced an algorithm very similar to the SRR algorithm presented in Section 4, without proving it is an approximation algorithm. Asano (2000) as well as Wang and Wang (1999) proposed greedy algorithms and LP-based algorithms. Reported results show that, except in specific cases, the greedy approaches are usually not competitive with LP-based methods. On the other hand, LP-based heuristics yield results that are similar to the randomized rounding algorithm of Raghavan and Tompson (1987).
### Meta-heuristics
As introduced above, it is NP-hard to find an optimal solution or even to give a constant-factor approximation to the unsplittable flow problem. Thus, the literature investigated various randomized search procedures such as genetic algorithms (Cox, 1991; Masri et al., 2019), tabu search (Anderson et al., 1993; Laguna and Glover, 1993; Xu et al., 1997), local search and GRASP (Santos et al., 2013, 2010; Alvelos and Valerio de Carvalho, 2007; Masri et al., 2015, 2019) or ant colony optimization (Li et al., 2010; Masri et al., 2011). One of the major difficulties encountered when solving the unsplittable flow problem with a meta-heuristic is to efficiently create useful paths for the commodities.
Early approaches such as (Cox, 1991; Anderson et al., 1993) encode solutions as permutations of the commodities. The space of permutations is the one searched by the meta-heuristic. The following function is used to create a solution from a permutation and evaluate it. The function goes through the permutation, examining each commodity. The commodity is then allocated to the shortest path where there is still enough capacity to fit the commodity. Once a path is assigned to every commodity, the objective function can be computed.
In (Laguna and Glover, 1993) and (Masri et al., 2015) the \(k\) shortest paths are pre-computed for each commodity using the algorithm of Yen (1971). The search space of their meta-heuristics is restrained to the space of solutions using only those paths.
A different idea used in (Santos et al., 2013, 2010; Alvelos and Valerio de Carvalho, 2007) is to consider paths extracted from the linear relaxation of the problem. The linear relaxation is solved with a column generation algorithm applied to the path formulation. During the column generation, a set of paths \(\hat{P}_{k}\) is generated for each commodity. A meta-heuristic such as a multi-start
local search is then used to explore the solutions where only paths from \(\hat{P}_{k}\) are used. In (Santos et al., 2013b), after the first linear relaxation is solved, perturbed linear models are solved to create new useful columns and extend the solution space of the meta-heuristic.
Ant colony optimization is also a means to navigate the large solution space of the possible paths and is used in (Li et al., 2010; Masri et al., 2011). In an ant colony optimization approach, at each iteration, each commodity creates a path by taking into account several metrics: path length, path load, and pheromones. Each arc of the graph has a pheromone level for each commodity and the higher the pheromone level the higher the probability of the arc to belong to the path generated. Pheromones are updated through two means. First, the best solutions add pheromones to the arc they use. Second, pheromones decay so that their level does not become excessive thus facilitating the exploration of the solution space.
We refer to the work of Li et al. (2010) and Santos et al. (2013a) for the best performing meta-heuristics. Li et al. (2010) compared their results with the solver CPLEX and were able to solve instances with up to 60 nodes, 400 arcs, and 3500 commodities to optimality in less than 900 seconds. Santos et al. (2013a) show that all their instances (26 nodes, 80 arcs, 500 commodities) are solved in less than 180 seconds with values close to the linear relaxation lower bound.
### Linear multi-commodity flow problem
The multi-commodity flow problem is the linear relaxation of the unsplittable flow problem. The value of its optimal solution is a lower bound for the binary problem and this linear relaxation is used in several exact and approximate methods. As a special case of linear programming, this problem is solvable in polynomial time.
A lot of effort has been invested in exact methods for this problem. Even if a commercial solver can solve the node-arc formulation, this method may take a prohibitive time on large instances. An alternative solution is to use a Lagrangian relaxation of the capacity constraints to decompose the problem into easier sub-problems as in (Rétvári et al., 2004). As reported by Dai et al. (2017), Lagrangian relaxation shows worse performance on most instances than the competing method, which applies a column generation algorithm to the path formulation. Lagrangian relaxation seems to be the best choice when the number of commodities is very large because its computing time scales only linearly with this parameter. In most other cases, column generation seems to be the best solution.
Several works contributed to increase the performance of column generation algorithms for this problem. First, a primal-dual column generation is presented in (Gondzio et al., 2013; Gondzio and Gonzalez-Brevis, 2015; Gondzio et al., 2016). An interior-point algorithm is used to solve the master problem and obtain sub-optimal but well-centered solutions. These well-centered solu
tions are used to compute new columns in the sub-problems which stabilizes the column generation process and reduces the number of iterations needed to achieve convergence. Another approach, which could be combined with the previous one is the use of aggregated variables presented by Bauguion et al. (2013, 2015). In this method, variables do not represent paths but aggregated paths such as trees or more complex structures. The authors report that the sub-problems associated with aggregated variables can be solved efficiently. Aggregated variables reduce the size of the master problem during the algorithm but might induce a larger number of iterations, thus aggregation must be carefully done. Another method presented in (Babonneau et al., 2006) consists of a specialized interior-point method to solve the multi-commodity flow problem. This method has been improved by Castro and Cuesta (2012). Other contributions to linear programming methods can be found in (Moradi et al., 2015; Dai et al., 2016, 2016)
For large instances, linear programming methods may take a lot of computing time before finding the optimal solution. Thus the literature focused on combinatorial approximation algorithms and in particular on Fully Polynomial-Time Approximation Schemes (FPTAS). The best results were obtained through the use of exponential length functions. This idea was first introduced by Shahrokhi and Matula (1990). In their algorithm, a length exponential in the passing flow is assigned to each arc. The flow is iteratively augmented on the shortest path connecting any of the source-sink pairs. Their algorithm was improved by Fleischer (2000) who showed that only the computation of an \(\epsilon\)-shortest path was needed. The algorithm of Fleischer (2000) is the fastest FPTAS in practice while not being the one with the smallest complexity. Indeed, Madry (2010) presented an algorithm with a smaller complexity but Emanuelsson (2016) showed in his work that the algorithm of Fleischer (2000) is faster for instances having less than 100 million arcs. For a detailed survey on the multi-commodity flow problem methods previous to 2005, see the article of Wang (2018).
## 4 The Sequential Randomized Rounding heuristic
The Sequential Randomized Rounding algorithm (SRR) is a polynomial greedy heuristic for the unsplittable flow problem. This algorithm is similar to the one proposed by Coudert and Rivano (2002) for the light-path assignment problem. We add some features that are specific to our problem, such as the sorting of the commodities by decreasing demand. The SRR algorithm also gives the possibility to compute the linear relaxation of the problem less often, in order to reduce the overall computing time of the algorithm. Compared to the approximation algorithm of Raghavan (1988), the SRR algorithm has larger running times and no performance guarantees, but returns higher-quality solutions on the tested instances. As a heuristic, the SRR algorithm has a shorter computing time than exact methods and meta-heuristics, especially for large instances. A variation of the SRR heuristic with approximation guarantees is presented in Section
5 together with an analysis of these approximation guarantees. Finally, the SRR algorithm is further discussed in Section 7 to explain its behavior on the tested instances.
### Presentation of the algorithm
The SRR algorithm, presented in Algorithm 1 alternates between two different steps: solve the linear relaxation of the problem and fix some commodities to a unique path. In our case, the linear relaxation is a multi-commodity flow problem. Even though for the sake of clarity we use the notations of the path formulation in the following, in the experimentations, an arc-node formulation paired with the commercial solver (Gurobi Optimization, 2020) is used to solve the linear relaxation. More efficient specialized solvers or approximation algorithms for the multi-commodity flow problem can be found in the literature (see Section 3.4). Solving the linear relaxation provides a distribution of flow among the paths for each commodity: \(((x_{pk})_{p\in P_{k}})_{k\in K}\). After solving the linear relaxation, a path is selected for some commodities. These commodities will be forced to use only these paths for the rest of the algorithm. The path selected for each commodity is chosen through the same randomized rounding procedure introduced by Raghavan and Tompson (1987): for commodity \(k\), path \(p\) is selected with probability \(x_{pk}\). The probability that commodity \(k\) uses arc \(e\) is \(f_{ek}=\sum_{p\in P_{k}|e\in p}x_{pk}\). When solving the next linear relaxations, the fixed commodities will also be forced to only use their single allowed path.
The major difference with the RR algorithm of Raghavan and Tompson (1987) is that, in the SRR heuristic, the linear relaxation is actualized several times during the randomized rounding process. More precisely, after deciding to fix some commodities to a single path, the linear relaxation is solved again with the added constraints that the fixed commodities must use their affected path. To decide when the linear relaxation is actualized, the following procedure is used. In the solution of the linear relaxation, some commodities use multiple paths. After \(\theta\) of these commodities are fixed to a single path, the linear relaxation is actualized. Choosing the threshold \(\theta\) trades off between computation time and solution quality. If the linear relaxation is actualized often (low threshold), fixing decisions take into account most of the previous fixing decisions but the computation time is high. If the linear relaxation is never actualized, branching decisions do not take into account previous branching decisions but the computation time is low. The threshold value is fixed to \(\frac{|V|}{4}\) in our experiments. A sensitivity analysis of this parameter is given in Section 6.
Another difference with the RR algorithm is that solutions are created using the overflow sum objective function presented in Section 2.1 instead of the classical congestion objective function. As will be explained in Section 7.1, when using the overflow sum objective function, the SRR heuristic returns solutions with a lower overflow but also a lower congestion.
Compared to the algorithm proposed in (Coudert and Rivano, 2002), the SRR heuristic offers more flexibility on the number of actualizations of the linear relaxation thanks to the parameter \(\theta\). Moreover, the main difference is that, because the unsplittable flow problem in (Coudert and Rivano, 2002) arises from a Light-Path Assignment problem, all commodities have a demand of one, and thus the commodities have their paths chosen in no particular order. In the SRR heuristic, paths are assigned to the commodities in decreasing order of their demands, _i.e._ the commodities with larger demands have their paths chosen first. This order is classically used in bin packing heuristics such as the Next Fit Decreasing heuristic (Csirik et al., 1986). The rationale behind this ordering is to allocate commodities with a large demand first, while a large amount of capacity is left in the arcs. Commodities with smaller demands are then used to fill the remaining gaps. In Section 6, it is shown that this rounding order of the variables has a large impact on the quality of the solution returned by the heuristic.
```
0: Input: \(G=(V,E,c)\) a capacitated graph, \(L=(o_{k},d_{k},D_{k})_{k\in K}\) a list of commodities
1: Sort the commodities by decreasing demand
2: \(K_{fixed}=\varnothing\)   \(\triangleright\) \(K_{fixed}\) is the set of indices of commodities fixed to a single path
3: for each commodity \(k^{*}\) in decreasing demand order do
4:     if an actualization is needed then
5:         \(((x_{pk})_{p\in P_{k}})_{k\in K}=Solve\_Linear\_Relaxation\left(G,L,K_{fixed},(p_{k})_{k\in K_{fixed}}\right)\)
6:     Draw a path \(p^{*}\) from \(P_{k^{*}}\) with probability \(x_{p^{*}k^{*}}\)
7:     Add index \(k^{*}\) to \(K_{fixed}\)
8:     \(p_{k^{*}}=p^{*}\)
9: return \((p_{k})_{k\in K}\)
```
**Algorithm 1** The SRR heuristic
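For concreteness, a minimal Python rendering of Algorithm 1 could look as follows. The linear-relaxation solver is left as a placeholder callback `solve_linear_relaxation` (assumed to return, for every not-yet-fixed commodity, its candidate paths and fractional weights \(x_{pk}\)); the actualization rule and the demand ordering follow the description above, so this is a sketch of the control flow rather than a full implementation.

```python
import random

def srr(commodities, theta, solve_linear_relaxation):
    """Sequential Randomized Rounding (sketch of Algorithm 1).

    commodities: list of (origin, destination, demand) tuples.
    theta: number of multi-path commodities fixed between two re-solves.
    solve_linear_relaxation(fixed_paths): placeholder returning, for each
        unfixed commodity index k, a pair (paths, weights) with weights x_pk
        summing to one.
    """
    # Process commodities by decreasing demand (line 1 of Algorithm 1).
    order = sorted(range(len(commodities)), key=lambda k: -commodities[k][2])
    fixed_paths = {}                  # K_fixed and the chosen paths p_k
    fractional = None
    fixed_since_update = theta        # force a solve at the first iteration

    for k in order:
        if fixed_since_update >= theta:          # actualization needed (lines 4-5)
            fractional = solve_linear_relaxation(fixed_paths)
            fixed_since_update = 0
        paths, weights = fractional[k]
        # Randomized rounding: pick path p with probability x_pk (line 6).
        chosen = random.choices(paths, weights=weights, k=1)[0]
        fixed_paths[k] = chosen                   # lines 7-8
        if len(paths) > 1:   # only commodities split over several paths count toward theta
            fixed_since_update += 1
    return fixed_paths
```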
### Complexity analysis
There are \(|K|\) iterations and during each of them the algorithm might do the following:
* Solve the linear relaxation: \(O(LR(|V|,|E|,|K|))\) operations where \(LR(|V|,\)\(|E|,|K|)\) is the complexity of the linear relaxation resolution (line 5).
* Select \(p_{*}\): as the flow of each commodity can be decomposed in at most \(|E|\) paths, at most \(|E|\) of the variables \(x_{pk}\) have a non-zero value (Ford Jr, 1956). Choosing one of these variables requires \(O(|E|)\) operations (line 6).
Additionally, sorting the commodities requires \(O(|K|\log(|K|))\) operations (line 1). The total time complexity is \(O(|K|(\log|K|+|E|+LR(|V|,|E|,|K|)))\)
### Grouping commodities by origin
Grouping commodities by origin is the process of considering a set of commodities with the same origin as one single commodity. This new commodity has a demand equal to the sum of the original demands. To ensure that the flow goes to the right destinations, a super-destination node is created and connected to each original destination with an arc of capacity equal to the original demand of the commodity. When solving the _linear relaxation_ with grouped commodities, the same solution is computed provided that all commodities of a group originate from the same node. Grouping commodities can greatly reduce the computing time of a linear solution computed with an LP solver when the number of different origins is much smaller than the number of commodities. In our test cases, inspired by practical telecommunication instances, commodities are emitted from a small number of different origins, so it is efficient to group the commodities by origin. However, solutions produced when commodities are grouped yield slightly less information: it is necessary to recover exactly which paths each commodity uses. This can be done quickly, in \(O(|V|(|E|+|K|))\) operations, with a flow decomposition algorithm (Ford Jr, 1956).
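The flow decomposition step mentioned above can be sketched with the standard textbook routine: repeatedly follow positive-flow arcs from the origin to the destination and subtract the bottleneck amount. The code below is a generic illustration under the assumption that the per-commodity arc flow is conserved and acyclic, not an excerpt from our implementation.

```python
def decompose_flow(arc_flow, origin, destination, eps=1e-9):
    """Decompose the per-arc flow of one commodity into weighted paths.

    arc_flow: dict {(u, v): amount of flow of the commodity on arc (u, v)}.
    Returns a list of (path, amount) pairs whose arc-wise sum equals arc_flow.
    """
    flow = {e: f for e, f in arc_flow.items() if f > eps}
    paths = []
    while any(u == origin for (u, _) in flow):
        # Greedily follow positive-flow arcs from the origin to the destination.
        path, node = [origin], origin
        while node != destination:
            nxt = next((v for (u, v) in flow if u == node), None)
            if nxt is None:
                raise ValueError("flow is not conserved or contains a cycle")
            path.append(nxt)
            node = nxt
        # Subtract the bottleneck flow along the extracted path.
        arcs_on_path = list(zip(path[:-1], path[1:]))
        bottleneck = min(flow[e] for e in arcs_on_path)
        for e in arcs_on_path:
            flow[e] -= bottleneck
            if flow[e] <= eps:
                del flow[e]
        paths.append((path, bottleneck))
    return paths

# Example: 3 units of flow split over two s-t paths.
print(decompose_flow({("s", "u"): 2, ("u", "t"): 2, ("s", "v"): 1, ("v", "t"): 1}, "s", "t"))
```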
## 5 A variation of the heuristic with approximation guarantees
In this section, we present the Constrained Sequential Randomized Rounding algorithm (CSRR). It is a variation of the SRR heuristic for which we prove approximation guarantees similar to the one of the RR algorithm of Raghavan and Tompson (1987). Approximation results are obtained considering the classical congestion objective function. Indeed, for the overflow sum objective, the value of the optimal solution may be zero. This happens when all commodities fit in the capacities. In this case, any approximation algorithm must find the optimal solution. Thus the overflow sum objective function is not suited for approximation proofs.
The RR algorithm is able to yield a solution satisfying a \(O\left(\frac{\ln(|E|\epsilon^{-1})}{\ln(\ln(|E|\epsilon^{-1}))}\right)\)-approximation factor with probability \(1-\epsilon\). We extend and tighten the analysis of randomized rounding algorithms by giving a new \(O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\gamma\ln(|E|\epsilon^{-1}))}\right)\) factor for the CSRR algorithm and for the RR algorithm of Raghavan and Tompson (1987). In this new factor, \(\gamma\) is the granularity parameter of the instance which is equal to \(\frac{D_{max}}{c_{min}\Delta^{*}}\) where \(\Delta^{*}\) is the optimal congestion of the linear relaxation. This parameter is small when the commodities are smaller compared to \(c_{min}\Delta^{*}\). The parameter \(\gamma\) can be related to the parameter \(\frac{D_{max}}{c_{min}}\) introduced by Chakrabarti et al. (2007); Shepherd and Vetta (2015); Azar and Regev (2006) to tighten their approximation analysis in the case of the maximum demand objective. The number \(\gamma\) is a decisive parameter of unsplittable flow instances. Indeed, it remains constant when the capacities or the demands are scaled uniformly.
To prove an approximation factor for a randomized rounding where the linear relaxation is actualized (as in the line 5 of Algorithm 1), it appeared necessary to add a constraint to the linear relaxation. Thus the CSRR algorithm is the same as the SRR algorithm with the following additional constraint in the linear relaxation:
\[\sum_{k\in K\setminus K_{fixed}}f_{ek}D_{k}\leq c_{e}\Delta^{*}-\sum_{k\in K_{ fixed}}\hat{f}_{ek}D_{k},\ \forall e\in E, \tag{3}\]
where \(\Delta^{*}\) is the optimal congestion of the first linear relaxation; \(K_{fixed}\) is the set of commodities that were fixed to their respective single paths before the current linear relaxation resolution; \(f_{ek}\) are the variables used to optimize the flow of the unfixed commodities. The commodities in \(K_{fixed}\) sent their flow along several paths in the linear relaxation before they were fixed to one path; \(\hat{f}_{ek}\) is the corresponding fractional flow on each arc for these commodities. Note that for the first computation of the linear relaxation, this constraint has no impact on the final solution and can be removed. Thus, this constraint disappears in the algorithm of Raghavan and Tompson (1987), since there is only one resolution of the linear relaxation.
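For illustration only, the snippet below shows how constraint (3) could be appended to an existing linear relaxation built with the PuLP modelling library (any LP modelling layer would do). The containers `f`, `f_hat`, `D`, `capacity`, `free` and `fixed` mirror the notation of constraint (3) and are assumed to be supplied by the caller.

```python
import pulp

def add_csrr_constraint(prob, f, f_hat, D, capacity, delta_star, free, fixed, arcs):
    """Append constraint (3) to the relaxation `prob`.

    f[e][k]     : PuLP variable, fraction of commodity k routed on arc e
    f_hat[e][k] : fractional flow (a number) of an already-fixed commodity k on arc e
    delta_star  : optimal congestion of the first linear relaxation
    """
    for e in arcs:
        fixed_load = sum(f_hat[e][k] * D[k] for k in fixed)  # constant term
        prob += pulp.lpSum(f[e][k] * D[k] for k in free) <= capacity[e] * delta_star - fixed_load
    return prob
```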
For the sake of clarity, we derive the present proof in the case where the CSRR algorithm makes only a single actualization of the linear relaxation's solution which occurs just after the first rounding step. Extension to the case of several actualizations performed at any time can be done by induction.
In the following, let the discrete random variables \(F_{ek}\) indicate the flow of commodity \(k\) on arc \(e\) in the solution returned by the CSRR algorithm. The variables \(F_{ek}\) take the value \(D_{k}\) with probability \(f_{ek}=\sum_{p\in P_{k}|e\in p}x_{pk}\) and \(0\) otherwise. Thus, their expectation \(\mathbb{E}[F_{ek}]=f_{ek}D_{k}=D_{k}\sum_{p\in P_{k}|e\in p}x_{pk}\) is also the flow of commodity \(k\) on arc \(e\) in the solution of the linear relaxation. Let \(k_{1}\) be the index of the commodity of largest demand (thus the first one to be fixed in Algorithm 1) and let \(F_{e}=\sum_{k\in K\setminus\{k_{1}\}}F_{ek}\). Once conditioned by \(F_{ek_{1}}\), the random variables \(F_{ek}\) (\(k\neq k_{1}\)) are independent of each other because the linear relaxation is not actualized between their rounding step. However, the random variables \(F_{ek}\) (\(k\neq k_{1}\)) are not independent of \(F_{ek_{1}}\); in particular, we have \(\mathbb{E}[F_{ek}|F_{ek_{1}}]\neq\mathbb{E}[F_{ek}]\). Indeed the realization of \(F_{ek_{1}}\) in the unique rounding step conditions the resolution of the subsequent linear relaxation. Thus, it conditions the values \(f_{ek}\) which parametrize the distribution of the random variables \(F_{ek}\). To recall this dependency, we write \(f_{ek}(F_{ek_{1}})\) the fractional flow of commodity \(k\) on arc \(e\). Note that constraint (3) added in the CSRR algorithm can be re-written in terms of random variables:
\[\mathbb{E}[F_{e}|F_{ek_{1}}]\leq c_{e}\Delta^{*}-\mathbb{E}[F_{ek_{1}}].\]
Recall that in the congestion formulation, the objective function aims at minimizing the smallest multiplicative factor on all arc capacities needed to fit the commodities. We denote by \(C^{*}\) the optimal congestion of the considered unsplittable flow instance. As introduced above, \(F_{ek_{1}}+F_{e}\) is the flow on arc \(e\) in the solution returned by the CSRR algorithm. Thus, proving a probabilistic
\((1+\alpha)\)-approximation bound for this algorithm boils down to proving that, for all arcs, \(F_{ek_{1}}+F_{e}\) remains below \((1+\alpha)c_{e}C^{*}\) with high probability. Formally, for a small \(\epsilon\):
\[\mathbb{P}\left(\forall e\in E,F_{ek_{1}}+F_{e}\leq(1+\alpha)c_{e}C^{*}\right) \geq 1-\epsilon.\]
Equivalently, this amounts to proving that, with probability at most \(\epsilon\), there exists an arc \(e\) where the congestion exceeds \((1+\alpha)C^{*}\):
\[\mathbb{P}\left(\exists e\in E,F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}C^{*}\right) \leq\epsilon.\]
To that end, we prove in Theorem 1.1 that for every arc \(e\):
\[\mathbb{P}\left(F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}\Delta^{*}\right)\leq\frac{ \epsilon}{|E|}.\]
Indeed, in this case, as \(\Delta^{*}\) is a lower bound on \(C^{*}\), we have:
\[\mathbb{P}\left(\exists e\in E,F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{ e}C^{*}\right) =\mathbb{P}\left(\bigvee_{e\in E}F_{ek_{1}}+F_{e}\geq(1+\alpha)c_ {e}C^{*}\right)\] \[\leq\mathbb{P}\left(\bigvee_{e\in E}F_{ek_{1}}+F_{e}\geq(1+\alpha )c_{e}\Delta^{*}\right)\] \[\leq\sum_{e\in E}\mathbb{P}\left(F_{ek_{1}}+F_{e}\geq(1+\alpha)c _{e}\Delta^{*}\right)\] \[\leq\sum_{e\in E}\frac{\epsilon}{|E|}\] \[=\epsilon\]
To ensure that \(\mathbb{P}\left(F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}\Delta^{*}\right)\leq\frac{\epsilon}{|E|}\), we first prove through Lemma 2 that the probability \(\mathbb{P}\left(F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}\Delta^{*}\right)\) is upper bounded by a quantity \(g_{e}(\alpha)\). Lemma 2 is proved by using the Markov inequality and the probabilistic translation of constraint (3). Proving Lemma 2 requires a preliminary result introduced in Lemma 1. Finally, the proof is completed by showing that there exists a value of \(\alpha\) satisfying \(1+\alpha=O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\gamma\ln(|E|\epsilon^{-1}))}\right)\) and such that, for every arc, \(g_{e}(\alpha)\leq\frac{\epsilon}{|E|}\).
Without loss of generality and to remove \(D_{max}\) from the proof, we assume the considered instances are scaled so that \(D_{max}=1\) and thus \(\gamma=(c_{min}\Delta^{*})^{-1}\). We now present the two Lemmas together with their proofs.
Lemma 1: _For any positive scalar \(\alpha\), \(\mathbb{E}\left[(1+\alpha)^{F_{e}}|F_{ek_{1}}\right]\leq e^{\alpha\mathbb{E} \left[F_{e}|F_{ek_{1}}\right]}\)._
Proof: \[\mathbb{E}\left[(1+\alpha)^{F_{e}}|F_{ek_{1}}\right] = \mathbb{E}\left[\prod_{k\in K\setminus\{k_{1}\}}(1+\alpha)^{F_{ek}}| F_{ek_{1}}\right]\] \[= \prod_{k\in K\setminus\{k_{1}\}}\mathbb{E}\left[(1+\alpha)^{F_{ek }}|F_{ek_{1}}\right]\quad\text{because the $F_{ek}|F_{ek_{1}}$ are independent}\] \[= \prod_{k\in K\setminus\{k_{1}\}}(f_{ek}(F_{ek_{1}})(1+\alpha)^{D_ {k}}+1-f_{ek}(F_{ek_{1}}))\] \[\leq \prod_{k\in K\setminus\{k_{1}\}}(f_{ek}(F_{ek_{1}})(1+\alpha D_{ k})+1-f_{ek}(F_{ek_{1}}))\quad\text{because $D_{k}\leq 1$}\] \[= \prod_{k\in K\setminus\{k_{1}\}}(1+\alpha f_{ek}(F_{ek_{1}})D_{ k})\] \[\leq \prod_{k\in K\setminus\{k_{1}\}}e^{\alpha f_{ek}(F_{ek_{1}})D_{ k}}\] \[= e^{\alpha\sum_{k\in K\setminus\{k_{1}\}}f_{ek}(F_{ek_{1}})D_{ k}}\] \[= e^{\alpha\mathbb{E}[F_{e}|F_{ek_{1}}]}\]
Lemma 2: _For any positive scalar \(\alpha\) and any instance of the unsplittable flow problem, the flow \(F_{ek_{1}}+F_{e}\) returned by the CSRR algorithm on arc \(e\) satisfies:_
\[\mathbb{P}(F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}\Delta^{*})\leq\left[\frac{e^{ \alpha}}{(1+\alpha)^{1+\alpha}}\right]^{c_{e}\Delta^{*}}\]
Proof: We set \(\delta=(1+\alpha)^{(1+\alpha)c_{e}\Delta^{*}}\).
\[\mathbb{P}(F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}\Delta^{*})\] \[= \mathbb{P}\left((1+\alpha)^{F_{ek_{1}}+F_{e}}\geq\delta\right)\] \[\leq \delta^{-1}\mathbb{E}\left[(1+\alpha)^{F_{ek_{1}}+F_{e}}\right]\quad\text{Markov inequality}\] \[= \delta^{-1}\mathbb{E}\left[\mathbb{E}[(1+\alpha)^{F_{ek_{1}}+F_{e}}|F_{ek_{1}}]\right]\] \[= \delta^{-1}\mathbb{E}\left[(1+\alpha)^{F_{ek_{1}}}\mathbb{E}[(1+\alpha)^{F_{e}}|F_{ek_{1}}]\right]\] \[\leq \delta^{-1}\mathbb{E}\left[(1+\alpha)^{F_{ek_{1}}}e^{\alpha\mathbb{E}[F_{e}|F_{ek_{1}}]}\right]\quad\text{Lemma 1}\] \[\leq \delta^{-1}\mathbb{E}\left[(1+\alpha)^{F_{ek_{1}}}e^{\alpha(c_{e}\Delta^{*}-\mathbb{E}[F_{ek_{1}}])}\right]\quad\text{because of constraint (3)}\] \[= \delta^{-1}e^{\alpha(c_{e}\Delta^{*}-\mathbb{E}[F_{ek_{1}}])}\mathbb{E}\left[(1+\alpha)^{F_{ek_{1}}}\right]\] \[= \delta^{-1}e^{\alpha(c_{e}\Delta^{*}-\mathbb{E}[F_{ek_{1}}])}(f_{ek_{1}}(1+\alpha)^{D_{k_{1}}}+1-f_{ek_{1}})\] \[\leq \delta^{-1}e^{\alpha(c_{e}\Delta^{*}-\mathbb{E}[F_{ek_{1}}])}(f_{ek_{1}}(1+\alpha D_{k_{1}})+1-f_{ek_{1}})\quad\text{because }D_{k_{1}}\leq D_{max}\leq 1\] \[\leq \delta^{-1}e^{\alpha(c_{e}\Delta^{*}-\mathbb{E}[F_{ek_{1}}])}e^{\alpha f_{ek_{1}}D_{k_{1}}}\] \[= \delta^{-1}e^{\alpha(c_{e}\Delta^{*}-\mathbb{E}[F_{ek_{1}}]+\mathbb{E}[F_{ek_{1}}])}\] \[= \delta^{-1}e^{\alpha c_{e}\Delta^{*}}\] \[= \left[\frac{e^{\alpha}}{(1+\alpha)^{1+\alpha}}\right]^{c_{e}\Delta^{*}}\]
We now present the main theorem which upper-bounds for every arc the probability of high congestion in the solution returned by the CSRR algorithm.
Theorem 1.1: _For any \(\epsilon>0\), there exists an approximation factor \(1+\alpha\) which is \(O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\gamma\ln(|E|\epsilon^{-1}))}\right)\) such that, for any instance of the unsplittable flow problem, the flow \(F_{ek_{1}}+F_{e}\) returned by the CSRR algorithm on arc \(e\) satisfies:_
\[\mathbb{P}\left(F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}\Delta^{*}\right)\leq\frac {\epsilon}{|E|}.\]
Proof: Lemma 2 gives us \(\mathbb{P}(F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}\Delta^{*})\leq\left[\frac{e^{\alpha}}{(1+\alpha)^{1+\alpha}}\right]^{c_{e}\Delta^{*}}\). To prove the theorem, we need to show that there exists a scalar \(1+\alpha\) which is \(O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\gamma\ln(|E|\epsilon^{-1}))}\right)\) and which, for every arc, satisfies:
\[\left[\frac{e^{\alpha}}{(1+\alpha)^{1+\alpha}}\right]^{c_{e}\Delta^{*}}\leq \frac{\epsilon}{|E|}\]
\[\Longleftrightarrow\]
\[(1+\alpha)\ln(1+\alpha)-\alpha\geq\frac{\ln(|E|\epsilon^{-1})}{c_{e}\Delta^{*}}\]
For the arc of capacity \(c_{min}\) which gives the highest bound, the lower bound is \(B=\gamma\ln(|E|\epsilon^{-1})\) (recall that \(D_{max}=1\)). Thus we study the solution of the equation \((1+\alpha)\ln(1+\alpha)-\alpha=B\) and show that it satisfies \(1+\alpha=O\left(\frac{B}{\ln(B)}\right)\). By replacing \(\ln(1+x)\) with the classical bounds \(2\frac{x-1}{x+1}\leq\ln(1+x)\leq x\), we have:
\[\alpha^{2}\geq(1+\alpha)\ln(1+\alpha)-\alpha\geq\alpha-2\] \[\iff\quad\alpha^{2}\geq B\geq\alpha-2\] \[\iff\quad\sqrt{B}\leq\alpha\leq B+2\]
This implies:
\[B=(1+\alpha)\ln(1+\alpha)-\alpha\geq(1+\alpha)\ln(1+\sqrt{B})-B-2\]
and finally,
\[1+\alpha\leq\frac{2B+2}{\ln(1+\sqrt{B})}\sim\frac{4B}{\ln(B)}=O\left(\frac{B}{ \ln(B)}\right)\]
Thus, the solution of the equation \((1+\alpha)\ln(1+\alpha)-\alpha=\gamma\ln(|E|\epsilon^{-1})\) satisfies \(1+\alpha=O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\gamma\ln(|E|\epsilon^{ -1}))}\right)\) and for this value of \(\alpha\), we have
\[\mathbb{P}\left(F_{ek_{1}}+F_{e}\geq(1+\alpha)c_{e}\Delta^{*}\right)\leq\frac {\epsilon}{|E|}\]
Using Theorem 1.1, we showed that, for any instance of the unsplittable flow problem, the CSRR algorithm returns a solution whose congestion is less than \(O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\gamma\ln(|E|\epsilon^{-1}))}\Delta^{*}\right)\) with probability \(1-\epsilon\). As for the RR algorithm of Raghavan and Tompson (1987), Lemma 2 and Theorem 1.1 still apply, and the proof of Lemma 2 is even simpler because commodity \(k_{1}\) does not need to be treated separately. We thus have the same approximation results for the RR algorithm.
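The approximation factor can also be evaluated numerically for a given instance: the sketch below solves the equation \((1+\alpha)\ln(1+\alpha)-\alpha=B\) with \(B=\gamma\ln(|E|\epsilon^{-1})\) by bisection. The function name, the tolerance and the example values in the comment are illustrative choices.

```python
from math import log

def approximation_alpha(num_arcs, epsilon, gamma):
    """Return the alpha solving (1+a)ln(1+a) - a = B, with B = gamma * ln(|E| / epsilon)."""
    B = gamma * log(num_arcs / epsilon)
    g = lambda a: (1 + a) * log(1 + a) - a - B
    lo, hi = 0.0, B + 2.0              # the proof of Theorem 1.1 gives sqrt(B) <= alpha <= B + 2
    for _ in range(100):               # bisection; g is increasing on [0, +inf)
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return hi

# Example (illustrative numbers): 700 arcs, failure probability 1%, gamma = 0.15
# print(1 + approximation_alpha(700, 0.01, 0.15))
```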
During the successive resolutions of the linear relaxation in the CSRR algorithm, constraint (3) is very restrictive. It leaves the variables almost no flexibility to move away from the previous optimum.
_Example: We call here first linear relaxation the linear program solved before any commodity is fixed to a single path and second linear relaxation the linear program solved after the first set of commodities is fixed to a single path. For the first and second linear relaxation respectively, we note \(f_{e}^{1}\) and \(f_{e}^{2}\) the total flow of all the commodities that are not fixed in the first rounding step. Let us suppose that the first linear relaxation has a unique optimal solution in which every arc has the same congestion \(\Delta^{*}\). Then, in the second linear relaxation, constraint (3) ensures that, \(\forall e\in E,f_{e}^{1}\geq f_{e}^{2}\). In this case, we must have \(\forall e\in E,f_{e}^{1}=f_{e}^{2}\). Otherwise, the flow of the unfixed commodities in the first linear relaxation could be replaced by the same flow from the second linear relaxation thus creating a new optimal solution of the first linear relaxation.
_This would contradict the assumption that the first linear relaxation has a unique optimal solution. Thus, in this example, the variables of the second linear relaxation cannot move away from the previous optimum at all._
This example highlights that the second linear relaxation can only move away from the solution of the first linear relaxation by using the slack left between the congestion of each arc and \(\Delta^{*}\). Thus, when constraint (3) is used, the actualization step only has a small impact on the linear relaxation's solution and the CSRR algorithm does not fully benefit from the actualization step. We conjecture that this is why, in our experiments, the CSRR algorithm does not perform as well as the SRR heuristic (see the experimental results of the CSRR algorithm in Section 6). To overcome this problem, we replace constraint (3) by the following constraint:
\[\sum_{k\in K\setminus K_{fixed}}f_{ek}D_{k}\leq\beta c_{e}\Delta^{*}-\sum_{k\in K_{fixed}}\hat{f}_{ek}D_{k},\ \forall e\in E. \tag{4}\]
By replacing \(\Delta^{*}\) by \(\beta\Delta^{*}\) and \(\gamma\) by \(\beta^{-1}\gamma\) for some \(\beta\geq 1\) in the proof, one can prove that using constraint (4) the approximation factor of the CSRR algorithm becomes \(O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\beta^{-1}\gamma\ln(|E|\epsilon^{-1}))}\right)\), which is still equal to \(O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\gamma\ln(|E|\epsilon^{-1}))}\right)\) for any fixed \(\beta\). Unlike constraint (3), in most practical cases, constraint (4) is not active in the optimal solutions of the linear relaxations even for \(\beta=1.1\). Thus, constraint (4) has no impact on the practical computations and enables the CSRR algorithm to yield the same experimental results as the SRR heuristic.
To summarize, in this section, we added a constraint to the SRR heuristic and switched back to the congestion formulation to create an algorithm that has the same approximation guarantees as the RR algorithm of Raghavan and Tompson (1987). We also tightened the approximation analysis of both algorithms by introducing a granularity parameter \(\gamma\). This parameter has the property of remaining constant when commodities and arc capacities are uniformly scaled. Finally, we slightly modified the added constraint to alleviate its impact in practice. This modification increased the approximation factor by a negligible value.
## 6 Experimental results
In this section, we present experiments that support our claims: the SRR heuristic has a lower computing time on large instances than exact methods and yields solutions with a better overflow than the algorithm of Raghavan and Tompson (1987) and meta-heuristics. The impact of sorting the commodities in decreasing order of demand in the randomized rounding algorithms is also investigated. Moreover, the SRR heuristic is compared to the CSRR approximation algorithm. The datasets and the code used in the experimental section of this work are accessible at [https://github.com/SuReLU/randomized_rounding_paper_code](https://github.com/SuReLU/randomized_rounding_paper_code). All the code for this work was written in Python 3.
The experiments were made on a server with 48 CPU Intel Xeon E5-2670 2.30GHz, 60 Gbit of RAM, and CentOS Linux 7.
In this section, each figure presents the results for a dataset of instances. Each dataset features ten groups of a hundred instances with each group containing instances created using the same parameters. An exception is made in Figure 2 in which each group contains only ten instances. Each point in a figure reports, for an algorithm, the average result of one group of instances. The 95% confidence intervals are also represented as semi-transparent boxes around the main curve.
### Instance datasets
As stated in (Masri et al., 2019), no standard benchmark of instances is present in the literature for the unsplittable flow problem, especially for large instances. Indeed, most works use small graphs (less than 50 nodes) on which they manually generate a set of commodities. The largest instances (100 nodes) can be found in the work of Masri et al. (2015) and Li et al. (2010). Masri et al. (2015) use a grid graph while Li et al. (2010) create their graphs with an adaptation of the graph generator NETGEN. To compensate for this absence of benchmark, we give a detailed explanation of our instance generation procedure. Moreover, all our instances are given in our GitHub repository together with the code used for their generation.
In our tests, we consider two types of graphs: strongly connected random graphs and grid graphs. For strongly connected random graphs, we use the following method to construct a random very sparse strongly connected graph: select a random node \(u\), select a random \(v\) such that there is no path from \(u\) to \(v\), add an arc \((u,v)\) to the graph, repeat until the graph is strongly connected. Afterward, random edges are added to control the average degree of the graph (it cannot be less than 2). In our tests, the average degree is fixed to 5 and the probability of a node being an origin is \(1/10\). A grid graph is an \(n\times m\) toric grid with \(p\) additional nodes. Each additional node is randomly connected to \(q\) nodes on the grid and serves as an origin of the flow. In our tests, we use \(m=n=p=\frac{q}{2}\). Unless mentioned otherwise, the arc capacities are \(10^{4}\).
For both types of graphs the demand is created as follows until no more commodity can be added without breaking the capacity constraints:
* choose a destination node \(d\);
* choose an origin \(o\) which can access \(d\) within the remaining capacities;
* compute a random simple path \(p\) from \(o\) to \(d\) using a depth-first search where the visit order of newly discovered nodes is random;
* choose a demand level \(D\) uniformly between 1 and \(\hat{D}_{\max}\) where \(\hat{D}_{\max}\) is a parameter defining the maximum possible demand of a commodity; if the chosen demand level is larger than the remaining capacity on \(p\) then truncate it to the remaining capacity;
* decrease the capacities on the path \(p\) by \(D\);
* add \((o,d,D)\) to the list of created commodities;
Note that demands created this way can always be routed within the arc capacities and the optimal congestion is one. Thus, we know optimal solutions have zero overflow and a congestion of one. Hence, these optimal values do not need to be computed with exact optimization methods. Moreover, the parameter \(\hat{D}_{\max}\) is used to control the size of the commodities and thus their number. Unless mentioned otherwise, the value of \(\hat{D}_{\max}\) is fixed to 1500 in our tests.
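As an illustration, the sketch below reproduces the commodity generation loop described above. The data structure (a capacity dictionary `{node: {successor: capacity}}`) and the stopping rule (a fixed number of consecutive failed draws instead of an exhaustive test) are simplifying assumptions.

```python
import random

def random_simple_path(capacity, origin, dest):
    """Depth-first search visiting newly discovered nodes in random order,
    using only arcs with positive remaining capacity. Returns a path or None."""
    stack, seen = [(origin, [origin])], {origin}
    while stack:
        node, path = stack.pop()
        if node == dest:
            return path
        succs = [v for v, c in capacity.get(node, {}).items() if c > 0 and v not in seen]
        random.shuffle(succs)
        for v in succs:
            seen.add(v)
            stack.append((v, path + [v]))
    return None

def generate_commodities(capacity, origins, nodes, d_max_hat, max_failures=100):
    """Create (origin, destination, demand) triples until draws keep failing."""
    commodities, failures = [], 0
    while failures < max_failures:
        dest = random.choice(nodes)
        origin = random.choice(origins)
        path = random_simple_path(capacity, origin, dest)
        if origin == dest or path is None:
            failures += 1
            continue
        residual = min(capacity[u][v] for u, v in zip(path, path[1:]))
        demand = min(random.randint(1, d_max_hat), residual)  # truncate to remaining capacity
        for u, v in zip(path, path[1:]):
            capacity[u][v] -= demand                          # decrease capacities along the path
        commodities.append((origin, dest, demand))
        failures = 0
    return commodities
```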
### Benchmarking meta-heuristics
In this section, we present a benchmark of ACO-MC, one of the ant colony optimization algorithms presented in (Li et al., 2010), and the variable neighborhood search of Masri et al. (2015). Both algorithms were reproduced and the code used is given in our GitHub repository. We compare these algorithms with a handcrafted simulated annealing. The goal is to use only the best one as a comparison for the SRR algorithm in the next sections. The algorithm of Li et al. (2010) was coded exactly as presented in their paper.
**Implementation of (Masri et al., 2015):** due to differences in the considered unsplittable flow problem, the local search part of their algorithm had to be modified. Their local search creates a new path for a commodity by starting from its destination and adding arcs to the path until the origin is reached. At each step, the next arc is chosen by considering the following heuristic information for each out-going arc of the last node of the current path:
\[I_{e}=\frac{1}{l_{e}}+\left(1-\frac{1}{\hat{c}_{e}}\right)\]
where \(l_{e}\) is the lead time of arc \(e\) (_i.e._ its length) and \(\hat{c}_{e}\) is the remaining capacity on arc \(e\). As we do not have lead times for each arc in our problem, \(l_{e}\) was set to 1 for each arc. Moreover, in the problem studied in (Masri et al., 2015), the remaining capacity \(\hat{c}_{e}\) is positive because no overflow is allowed. Because we can have negative remaining capacities, we replace the function \(f:x\to 1-\frac{1}{x}\) applied to \(\hat{c}_{e}\) by \(g:x\rightarrow\frac{1}{2}\left(1+\frac{x}{1+|x|}\right)\). Function \(g\) was chosen to have similar properties to the function \(f\).
\[\forall x\in\ (1,+\infty),\ 0<f(x)<1\]
\[\forall x\in\ (-\infty,+\infty),\ 0<g(x)<1\]
\[f(x)-1\underset{+\infty}{\sim}g(x)-1\underset{+\infty}{\sim}\frac{-1}{x}\]
\[g(x)\underset{-\infty}{\sim}\frac{-1}{x}\]
In our tests, we also present the results obtained with a version of the algorithm of Masri et al. (2015) where the local search part is disabled.
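For reference, a direct transcription of the arc-scoring used in our adaptation (unit lead times and the bounded surrogate \(g\)) could look as follows; the function names are illustrative.

```python
def g(x):
    """Bounded surrogate of f(x) = 1 - 1/x: stays in (0, 1) even when the
    remaining capacity x is negative (i.e. the arc is already overloaded)."""
    return 0.5 * (1.0 + x / (1.0 + abs(x)))

def heuristic_information(remaining_capacity, lead_time=1.0):
    """I_e with l_e = 1 for every arc, as used in our tests."""
    return 1.0 / lead_time + g(remaining_capacity)
```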
**Our simulated annealing:** at each iteration, a solution is created in the neighborhood of the current solution; this modification is accepted with a probability depending on the improvement/deterioration of the solution and a temperature parameter. At each iteration, the temperature parameter is multiplied by \(1-\epsilon\) for some small \(\epsilon\) depending on the number of iterations. At the beginning of the simulated annealing procedure and similarly to the algorithm of Masri et al. (2015), a list of \(k\)-shortest paths of length 10 is computed for each commodity using the algorithm of Jimenez and Marzal (1999). To initialize the solution, each commodity takes a random path in its list of \(k\)-shortest paths. At each iteration, a neighborhood solution is created by replacing the path of a commodity with a path randomly chosen in the list of \(k\)-shortest paths of the commodity. The stopping criterion is the total number of iterations.
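A condensed sketch of this simulated annealing procedure is given below. Here `k_shortest_paths[k]` holds the candidate paths of commodity \(k\) and `overflow(solution)` evaluates the overflow sum of a solution; both are assumed to be provided by the caller, and the Metropolis rule is one standard choice for the acceptance probability mentioned above.

```python
import math
import random

def simulated_annealing(k_shortest_paths, overflow, n_iters, t_start=200.0, t_end=1.0):
    """Each iteration re-routes one commodity on a random path of its candidate list."""
    solution = {k: random.choice(paths) for k, paths in k_shortest_paths.items()}
    value = overflow(solution)
    temperature = t_start
    cooling = (t_end / t_start) ** (1.0 / n_iters)   # temperature shrinks by a constant factor
    commodities = list(k_shortest_paths)
    for _ in range(n_iters):
        k = random.choice(commodities)
        candidate = dict(solution)
        candidate[k] = random.choice(k_shortest_paths[k])
        candidate_value = overflow(candidate)
        delta = candidate_value - value
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            solution, value = candidate, candidate_value
        temperature *= cooling
    return solution, value
```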
**Hyper-parameter setting:** we benchmarked different hyper-parameters values for ACO-MC but finally settled to use the same values as in (Li et al., 2010). Except for the size of the largest neighborhood which is not mentioned, both versions of the variable neighborhood search use the hyper-parameter values given in (Masri et al., 2015). After testing different values, the size of the largest neighborhood was set to 3. The hyper-parameters of the simulated annealing are the initial and final temperature chosen to be respectively 200 and 1. The simulated annealing (SA) is given \(2|K|^{1.5}\) iterations so that it takes a time comparable to the SRR algorithm in the next sections. ACO-MC and the VNS of Masri et al. (2015) were given respectively 50 and 100 iterations. Although these numbers seem very low, one must consider that for each of their iterations, the algorithms of Li et al. (2010) and Masri et al. (2015) respectively generate \(|K|\) and \(\frac{|K|}{2}\) new paths. With these numbers of iterations, they already require a longer computing time than the simulated annealing procedure. Finally, the variation of the variable neighborhood search where the local search part is disabled (VNS2) was given \(|K|^{1.5}\) iterations.
**Results:** Figure 1 presents the results of each algorithm on a dataset composed of grid graphs of varying sizes. The simulated annealing procedure clearly outperforms the other algorithms on this dataset. We obtained similar results on all other datasets. That is why the simulated annealing algorithm has been chosen in the next sections to be the comparison point for the SRR algorithm.
**Discussion:** we now try to explain why the meta-heuristics from the literature are outperformed. At each iteration, the ant colony procedure spends most of its computing time creating an entirely new solution. Indeed, a new path is created for each commodity for a total of \(|K|\) paths generated. Conversely, at each iteration, the simulated annealing procedure chooses only one path in a list (choosing a path is much faster than creating one). This leads to a much larger number of iterations for the simulated annealing procedure and thus a lot more solutions evaluated. As for the algorithm of Masri et al. (2015), it seems that the local search part performs poorly in our tests. This is partly because the path generation procedure does not know where its target node is until it is encountered. Indeed, the heuristic information used to choose the
next arc of the path considers the length \(l_{e}\) of the arcs and not the length of the shortest path to the target node. Thus, when choosing the next arc, the procedure does not know if it goes toward or away from its goal. We tried to replace \(l_{e}\) by the length of the shortest path to the target node in the local search but disabling it completely still yielded better solutions. Finally, it appears that variable neighborhood search is not a good choice of meta-heuristic for our version of the unsplittable flow problem (especially when the number of commodities is more than a thousand). Indeed, by changing the size \(k_{max}\) of the largest neighborhood considered we obtained that the best value for this parameter is \(k_{max}=1\) which is the case where only the smallest neighborhood is considered.
### Results
We compare the following algorithms in our tests :
* RR: the randomized rounding algorithm of Raghavan and Tompson (1987);
* RR sorted: a version of the RR algorithm where the commodities are rounded in order of decreasing demand;
* SRR: the Sequential Randomized Rounding heuristic described in Section 4;
* SRR unsorted: a version of the SRR heuristic where the commodities are not rounded in order of decreasing demand but in random order;
* CSRR: the version of the SRR algorithm with approximation properties presented in Section 5;
* SA and SA2: a handcrafted simulated annealing procedure presented in Section 6.2; SA is given \(2|K|^{1.5}\) iterations to have a similar computing time to SRR on grid graphs while SA2 is given \(6|K|^{1.5}\) iterations;
* MILP solver: arc-node formulation solved with Gurobi 8.11.
Figure 1: Performance and computing time versus the number of nodes of various meta-heuristics
All the randomized rounding algorithms in this list use the overflow-sum objective in the linear relaxation to create their solutions.
An important characteristic of the SRR heuristic is its capacity to scale to large instances. Thus we first illustrate how the algorithm outperforms a MILP method. In Figure 2, a MILP method based on the arc-node formulation is compared with the RR and the SRR algorithms on small grid graphs with very few commodities. The MILP algorithm solves optimally all the small instances but gives poor results on large instances within a 20 minutes time limit. In comparison, the other methods retain a reasonable performance on large instances. For these instances, the capacity of the arcs is 3 and the maximum demand of a commodity is 2 to keep the number of commodities and variables of the MILP formulation low. Due to the large amount of memory required, the MILP algorithm could not be tested on the other larger datasets.
Figure 3 compares different randomized rounding algorithms and the simulated annealing algorithm on grid graphs and random connected graphs. In both cases, the SRR algorithm requires a larger computing time than pure randomized rounding but returns solutions of much higher quality. For small instances of grid graphs, the best results are given by the simulated annealing procedure. However, as the number of nodes increases the SRR heuristics outperforms the simulated annealing procedure with the same computing time (SA). Furthermore, on the largest instances, the simulated annealing procedure is outperformed by the SRR heuristics even when given three times more computing time (SA2). As for strongly connected random graphs, with the described settings, the SRR heuristic returns the highest quality solutions but requires a larger computing time than the simulated annealing. It appears that solving the linear relaxation takes a longer time on random graphs than on grid graphs. The CSRR algorithm returns worse solutions than the SRR heuristic
Figure 2: Performance and computing time versus the number of nodes on small instances
in a similar computing time. As explained at the end of Section 5, this is due to constraint (3) added in the resolution of the linear relaxation. One can relax this constraint into constraint (4) while keeping a similar approximation factor. In practice, constraint (4) is never active in the optimal solution of the linear relaxation. Thus, with this adaptation, the CSRR algorithm returns exactly the same solutions as the SRR heuristic.
Figure 4 reports the solution performance and the computing time for grid graphs with a constant number of nodes and arcs but a varying number of commodities. This variation is obtained by changing the ratio of the arc capacities over the maximum demand of a commodity. Table 1 reports the parameters used to create the instances. As can be seen in Figure 4a, all algorithms produce higher quality solutions when there is a large number of commodities (_i.e._ commodities have small demand compared to the arc capacities). As explained in Section 5, for randomized rounding algorithms, this practical result can be related to the presence of the granularity parameter \(\gamma\) in the approximation
Figure 3: Performance and computing time versus the number of nodes on large instances
factor of the RR and CSRR algorithms. Figure 4b shows that our use of an aggregated arc-node formulation enables the randomized rounding algorithms to have a computing time that scales very well with the number of commodities. However, this is not the case for simulated annealing which requires a large increase in its number of iterations to obtain the solution reported in Figure 4a.
We now analyze the impact of rounding the commodities in order of decreasing demand, based on the results presented in Figure 5. Both pure randomized rounding and the SRR heuristic yield solutions of higher quality when the commodities are sorted. The impact is significantly higher on the SRR algorithm than on pure randomized rounding. A possible explanation for such a strong impact is given in Section 7.2. At first glance, the rounding order should have no impact on pure randomized rounding since the commodities are rounded independently. However, the results clearly indicate otherwise, and this is due to how the linear relaxation is computed in our experiments. Indeed, when using the arc-node formulation of Section 2.2 and the commodity aggregation presented in Section 4.3, the linear program does not directly yield a flow distribution for each commodity. The flow distribution returned is for the aggregated commodities. To get the flow of each commodity, a flow decomposition algorithm is used. This algorithm tends to split the commodities that are decomposed first into fewer paths. Thus, when the commodities with the largest demands are rounded first, they are also decomposed first. This means the biggest commodities have a lower chance of being split and thus a lower chance that their rounding creates a large overflow.
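A minimal sketch of such a path decomposition (in the spirit of Ford Jr, 1956) is shown below for a single aggregated flow stored as `{node: {successor: amount}}`; it assumes the fractional flow is conservative and acyclic, which holds for basic solutions of the relaxation.

```python
def decompose_flow(flow, origin, destination):
    """Greedily extract (path, amount) pairs until no positive flow leaves the origin.

    Because each extracted path removes the minimum arc flow along it, flows
    decomposed while the arc values are still large tend to be split into
    fewer paths, which is the effect discussed above.
    """
    paths = []
    while True:
        path, node = [origin], origin
        while node != destination:
            node = next((v for v, x in flow.get(node, {}).items() if x > 1e-9), None)
            if node is None:
                return paths            # no positive-flow path left from the origin
            path.append(node)
        amount = min(flow[u][v] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):
            flow[u][v] -= amount
        paths.append((path, amount))
```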
Finally, we describe how we chose the value \(\frac{|V|}{4}\) for the actualization parameter \(\theta\) of the SRR algorithm presented in Section 4. Setting this value implies making a trade-off between solution quality and computing time. Figure 6 compares the performance of SRR for different values of this parameter,
Figure 4: Performance and computing time versus the number of commodities
on a grid graph with 110 nodes. The computing time of the SRR heuristic affinely decreases with the \(\theta\) parameter. However, the overflow of the returned solution decreases less and less when \(\theta\) decreases. We decided to choose \(\theta\) such that the returned solution is close to the best obtainable solution while a lot of computing time is saved compared to choosing \(\theta\) close to zero. When repeating this experiment on graphs of different sizes and types, it appeared that the chosen trade-off value was close to \(\frac{|V|}{4}\). This value is represented with a dashed line in Figure 6.
Figure 5: Influence of commodity sorting on performance and computing time
Figure 6: Performance and computing time versus the value of the \(\theta\) parameter of SRR
## 7 Discussion
In this section, we discuss different properties of the randomized rounding algorithms. We first show that using the overflow sum objective function presented in Section 2.1 instead of the classical congestion objective function has a positive impact on practical results. Then we highlight a positive interplay between sorting the commodities and actualizing the linear relaxation.
### Impact of the objective function
Two types of objective function coexist in the SRR heuristic. The first one is the objective function of the unsplittable flow problem which will be called _evaluation metric_ in this section. The second one is the objective function used in the resolutions of the linear relaxation inside the SRR heuristic which we will call _generation objective_. For both of the previous types, we study the two functions presented in Section 2.1: the sum of the overflow and the congestion. Usually, the generation objective and the evaluation metric are chosen to be the same. However, the overflow sum and the congestion are very close metrics and we want to study the impact of using one for creating solutions for the other. We will show below that using the overflow sum instead of the congestion as generation objective yields solutions of higher quality for both evaluation metrics.
When the evaluation metric is the overflow sum, in our tests, solutions generated using the overflow sum as generation objective have roughly ten times less overflow than those created using the congestion. More surprisingly, the overflow sum used as generation objective also yields the best solutions when the evaluation metric is the congestion. To test this, the SRR heuristic has been applied with the two generation objectives on 100 grid graphs with 120 nodes, 700 arcs, and 4500 commodities. The mean congestion of the returned solutions is reported in the first two columns of Table 2 together with the standard deviation of the results. The mean congestions for the two generation objectives are separated by more than one standard deviation in favor of the overflow sum. Using the values given in the first two columns of Table 2, an unpaired two-sample Student t-test was performed. The test showed that the overflow sum used as generation objective performs significantly better than the congestion. The \(p\)-value associated with the test is \(10^{-18}\).
| Arc capacities | 1 | 2 | 5 | 10 | 20 | 50 | 100 | 200 | 500 | 1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Maximum demand of a commodity | 1 | 1 | 2 | 3 | 4 | 7 | 10 | 14 | 22 | 31 |
| Average number of commodities | 182 | 362 | 685 | 1038 | 1615 | 2462 | 3512 | 5048 | 8138 | 11644 |

Table 1: Parameters of the instances of the commodity scaling dataset: the graphs have 110 nodes and 580 arcs.
We conjecture that the difference observed above between the two generation objectives is explained by the following reasoning. In the linear relaxation, the congestion objective function does not differentiate solutions having one arc with high congestion from solutions with several arcs with high congestion. However, a larger number of highly congested arcs in the linear relaxation implies, on average, a larger congestion in the solution returned by the randomized rounding process. Furthermore, if the randomized rounding process creates arcs with a larger congestion than the optimal congestion of the linear relaxation, then, in subsequent actualizations of the linear relaxation, the capacity constraints are lifted for all the other arcs. This might lead to even more congested unsplittable solutions.
With this conjecture in mind, we created an algorithm that uses the congestion as its main generation objective but performs as well as the algorithm using the overflow sum as generation objective. To that end, a two-level objective is set in the linear relaxation: find the solution of least overflow sum among the solutions with minimal congestion. The results obtained with this alternative generation objective are presented in the third column of Table 2 under the name "Mixed" objective. These results are comparable to those obtained with the overflow sum objective. Indeed, an unpaired two-sample Student t-test made on the values given in the first and third columns of Table 2 does not show that the difference is statistically significant. The \(p\)-value associated with the test is \(0.32\).
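For illustration, the two-level "Mixed" objective can be implemented as two successive LP resolutions, as sketched below with PuLP; `build_relaxation()` is an assumed helper returning the relaxation together with its congestion variable and its overflow-sum expression.

```python
import pulp

def solve_mixed_objective(build_relaxation):
    """Find the least-overflow solution among the minimum-congestion solutions."""
    # Stage 1: minimize the congestion.
    prob, congestion, overflow = build_relaxation()
    prob.setObjective(congestion)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    best_congestion = pulp.value(congestion)

    # Stage 2: fix the congestion at its optimum (up to a tolerance)
    # and minimize the overflow sum.
    prob, congestion, overflow = build_relaxation()
    prob += congestion <= best_congestion + 1e-9
    prob.setObjective(overflow)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return prob
```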
### Interaction between sorting and actualization
In this section, we discuss the positive interplay observed in Section 6 between sorting the commodities by decreasing demand and actualizing the linear relaxation. To that end, we study a toy example where the graph is composed of two nodes linked by two parallel arcs. We also assume the sum of the demands is equal to the sum of the capacities. We compare the solutions with the overflow sum objective and assume that no binary solution has an overflow of zero.
When computing the linear relaxation, the optimal extreme points of the polyhedron are of the following form: every commodity is assigned to one arc except for one commodity which is split between the two arcs. We assume that each commodity has the same probability of being the split commodity. After applying randomized rounding to a linear solution of this type, the binary solution obtained has an overflow between zero and the demand of the split commodity.
|  | Overflow sum | Congestion | Mixed |
| --- | --- | --- | --- |
| Mean | 1.025 | 1.049 | 1.027 |
| Standard deviation | 0.016 | 0.019 | 0.012 |

Table 2: Comparison of the congestion of the solutions returned by the SRR heuristic using different generation objectives.
On the other hand, if we actualize the linear relaxation after fixing the split commodity, we should often get a better solution. Indeed, after actualizing the linear solution, three situations can occur. Firstly, the remaining free commodities cannot switch arcs to compensate for any of the overflow generated by rounding the split commodity. In this case, no commodity changes its flow, the new linear solution is a binary solution and this solution is the same with and without actualization. Secondly, the remaining free commodities compensate for only a part of the overflow created. In this case, even though some commodities change their flow, the new linear solution is a binary solution whose overflow is lower than the one before the actualization step. Lastly, the remaining free commodities completely compensate for the overflow created and a new split commodity appears. Applying randomized rounding from there yields a solution whose overflow is between zero and the demand of the new split commodity. Since the commodities are fixed in order of decreasing demand, the new split commodity has a smaller demand than the old split commodity. The range in which the overflow of the final solution can vary is thus smaller, which should on average lead to a better solution. Furthermore, new split commodities are created until the free commodities cannot completely compensate for the rounding of the last split commodity. Most of the time, this happens when only a few commodities remain and thus when only small commodities remain. In this case, the last split commodity has a small demand, and the algorithm yields a small final overflow.
Finally, we can see that in this toy example, actualizing the linear solution and sorting the commodities by decreasing demands yields, on average, solutions with a smaller overflow than pure randomized rounding. We conjecture that this phenomenon generalizes to any graph. Indeed, the fact that the split commodities get smaller and smaller as the algorithm progresses should lead to smaller and smaller final overflows.
## 8 Conclusion
In this paper, we have presented a heuristic based on randomized rounding for large-scale instances of the unsplittable flow problem which extends the algorithm of Raghavan and Tompson (1987). We experimentally showed that on large-scale instances this heuristic produces solutions of smaller overflow than any other method used for comparison.
We also derived an approximation algorithm from the heuristic by restricting the possible actualizations of the linear relaxation. The approximation factor of both this algorithm and the algorithm of Raghavan and Tompson (1987) was then tightened to \(O\left(\frac{\gamma\ln(|E|\epsilon^{-1})}{\ln(\gamma\ln(|E|\epsilon^{-1}))}\right)\). This new approximation factor depends on the granularity parameter \(\gamma\) and makes it possible to understand the behavior of randomized rounding when the commodities are small compared to the capacities (_i.e._, \(\gamma\ll 1\)).
Furthermore, the behavior of the presented heuristic has been analyzed to highlight two of its key particularities. First, the new objective function used
in the algorithm for practical computations yielded solutions of higher quality. Secondly, the actualization of the solution of the linear relaxation enhanced the performances of the heuristic when the commodities are sorted in order of decreasing demand.
Finally, even though the techniques discussed in this paper were presented in the context of unsplittable flows, they apply to other contexts where the randomized rounding method of Raghavan and Tompson (1987) is used (packing problems, covering problems...). Their performance and possible variations in these contexts could be investigated. Moreover, the impact of backtracking on the decisions made during randomized rounding algorithms seems a promising research direction.
|
2309.00660 | Forest fire spreading: a nonlinear stochastic model continuous in space
and time | Forest fire spreading is a complex phenomenon characterized by a stochastic
behavior. Nowadays, the enormous quantity of georeferenced data and the
availability of powerful techniques for their analysis can provide a very
careful picture of forest fires opening the way to more realistic models. We
propose a stochastic spreading model continuous in space and time that is able
to use such data in their full power. The state of the forest fire is described
by the subprobability densities of the green trees and of the trees on fire
that can be estimated thanks to data coming from satellites and earth
detectors. The fire dynamics is encoded into a density probability kernel which
can take into account wind conditions, land slope, spotting phenomena and so
on, bringing to a system of integro-differential equations for the probability
densities. Existence and uniqueness of the solutions is proved by using
Banach's fixed point theorem. The asymptotic behavior of the model is analyzed
as well. Stochastic models based on cellular automata can be considered as
particular cases of the present model from which they can be derived by space
and/or time discretization. Suggesting a particular structure for the kernel,
we obtain numerical simulations of the fire spreading under different
conditions. For example, in the case of a forest fire evolving towards a river,
the simulations show that the probability density of the trees on fire is
different from zero beyond the river due to the spotting phenomenon.
Firefighters interventions and weather changes can be easily introduced into
the model. | Roberto Beneduci, Giovanni Mascali | 2023-09-01T12:34:12Z | http://arxiv.org/abs/2309.00660v1 | # Forest fire spreading: a nonlinear stochastic model continuous in space and time
###### Abstract
Forest fire spreading is a complex phenomenon characterized by a stochastic behavior. Nowadays, the enormous quantity of georeferenced data and the availability of powerful techniques for their analysis can provide a very careful picture of forest fires opening the way to more realistic models. We propose a stochastic spreading model continuous in space and time that is able to use such data in their full power. The state of the forest fire is described by the subprobability densities of the green trees and of the trees on fire that can be estimated thanks to data coming from satellites and earth detectors. The fire dynamics is encoded into a density probability kernel which can take into account wind conditions, land slope, spotting phenomena and so on, bringing to a system of integro-differential equations for the probability densities. Existence and uniqueness of the solutions is proved by using Banach's fixed point theorem. The asymptotic behavior of the model is analyzed as well.
Stochastic models based on cellular automata can be considered as particular cases of the present model from which they can be derived by space and/or time discretization. Suggesting a particular structure for the kernel, we obtain numerical simulations of the fire spreading under different conditions. For example, in the case of a forest fire evolving towards a river, the simulations show that the probability density of the trees on fire is different from zero beyond the river due to the spotting phenomenon. Firefighters interventions and weather changes can be easily introduced into the model.
**Key Words:** Forest fire spreading, Space-time continuous probabilistic models, Integro-differential equations, Runge-Kutta scheme
## 1 Introduction
Forest fires have a relevant role in agriculture, forestry, hydrological cycles, soil fertility and conservation, climate stability and all of them in turn affect biological diversity. They are natural phenomena [1, 2, 3] (disturbances) in the earth ecosystems balance and are important in the regulation of diversity and adaptive capacity of ecosystems. Moreover, forest fires behavior is influenced by the human factor, which is analyzed in several interdisciplinary studies, see Caldararo [4] for a review. That brings us directly to the anthropocene crisis with humanity as a main factor for climate and ecosystems evolution. In the last decades the frequency and severity of large forest fires has increased [1], and it is reasonable to expect a continuous increasing of timber emission. It has been estimated [5] that _the damages from wind, bark beetles and forest fires will increase in
_Europe with a rate of \(0.91\times 10^{6}m^{3}\) of timber per year until 2030_. Yet, according to the Global Fire emission database (GFED4s), \(CO_{2}\) emissions due to fires in 2014 were 23 % of global fossil fuel \(CO_{2}\) emissions [6]. Severe forest fires can contribute to modify the climate by emission of \(CH_{4}\) and \(N_{2}O\) and by emission of precursors of aerosols and ozone, moreover, they can contribute to change the albedo [1] and the carbon cycle [3]. Climate change and forest fires are connected in a vicious circle since climate change make forest fires more probable, with extreme-fires becoming ever more common, while severe forest fires can speed up climate changes. All of that emphasizes the role of forest fires as a relevant factor of the anthropocene [1]. The impact on people and property is remarkable as well. As a consequence, forest fire management is ever more important in the effort of humanity to preserve biota and to mitigate climate change. It has been calculated [7] that biota protection and biodiversity preservation have an enormous economic impact with annual benefits of 125-140 trillion dollars per year worldwide. In the present paper we would like to contribute to the development of mathematical modelling that constitutes one of the main tools in fire management since mathematical models can show possible fire-spread scenarios allowing planned interventions. Moreover, thanks to new technologies, mathematical models can be ever more realistic and can be confronted more easily with real scenarios. It is worth remarking that forest fires are complex phenomena which are the result of the combination of different stochastic behaviors. There exist relevant deterministic models for fire spreading, both phenomenological and physical, both based on differential equations and on cellular automata [8, 9, 10, 11, 12, 13, 14, 15, 16], see Sullivan [17] for a review. Some of them are usually combined in simulation softwares. Those are good resources to figure out scenarios of the average behavior of the forest fire spreading but do not contemplate its stochastic behavior which is intrinsic to the process, particularly in its early stage when the number of firing trees is not very high and the same initial conditions can bring to different evolutions. That makes stochastic models very relevant in forest fire management as well as in epidemic management [18, 19, 20, 21] to which our model can be adapted with some appropriate modifications. In particular, a stochastic model will be able to outline different scenarios all of them compatible with the same initial conditions. In our knowledge, the stochastic models already developed are based on cellular automata [22, 23, 24, 25, 26, 27, 17] and make some simplifications of the real process, in particular they are discrete in space and/or time and assume that the spreading happens only in few directions that depend on the geometry of the lattice (triangular, rectangular, hexagonal, octagonal). Simplification apart, the selection of only few spreading directions could bring to distortions of the fire shape produced by the model [12, 28].
Here, a non-linear continuous stochastic model for the spreading of forest fires in space and time is developed. The other stochastic models based on cellular automata can be considered as particular cases that can be derived from the continuous model by space and/or time discretization. One of the advantages of the model is that it can make use in its full power of the data coming from satellites and earth detectors which are very important tools coming from new technologies and data science. Indeed, the description of the state of the fire in the model is realized by space probability density functions that can be approximated from data by means of non-parametric estimation techniques as for example kernel density estimation (KDE). Therefore, non homogeneous trees distributions can be taken into account making the model and its predictions closer to the real features of the forest. The dynamics is encoded into a density probability kernel which takes into account both wind conditions, land slope, spotting phenomena, and brings to a system of integro-differential equations for the probability densities whose time evolution can be forecasted once the system has been solved. The numerical implementation of the model provides probabilistic space-time maps of the fire evolution. Since the dynamics of the forest fire can be detected by means of data acquisition and elaboration through KDE, a very accurate model validation is at hand. Besides, the model we propose can be used to outline possible scenarios compatible with the initial conditions. Moreover, the model is able to describe fire spotting which is a typical stochastic mechanism for fire spreading at long distances and to include firefighters interventions and weather changes in its dynamics. It is worth remarking that, at variance with cellular automata models, fire spreading is not constrained in few directions, but is modelled as a continuous process in all directions giving a more realistic description of fire spreading. One of the motivation for using cellular automata is that the simplification they
make in describing fire spreading corresponds to a reduction in the computational costs. Here we show that the computational problem one has to face when dealing with a continuous model can be overcome due to the possible parallelization of the numerical scheme. Moreover, by Banach's fixed point theorem, it is proved that the non-linear system of integro-differential equations that encodes the dynamics of the forest fires admits a unique solution. That is relevant from the mathematical viewpoint and for the consistency of the model. In the next section we build the model and propose a physical interpretation. Then, we provide its numerical implementation (by using fourth order Runge-Kutta scheme) and show some solutions in simple scenarios. In the appendix we prove existence and uniqueness of solutions and analyze the asymptotic behavior.
## 2 The Probabilistic Model
Let \(\Sigma\) be the surface of the earth we focus on. A point in the surface is denoted by \(\mathbf{x}=(x,y)\in\Sigma\). We assume there are positive definite functions \(\psi^{F}(t,\mathbf{x}),\psi^{B}(t,\mathbf{x}),\psi^{G}(t,\mathbf{x})\in L^{1} (\Sigma,d^{2}\mathbf{x})\) representing sub-probability densities and describing the state of the forest fire at time \(t\). In particular, \(\psi^{F}(t,\mathbf{x})\) is the sub-probability density for the trees on fire, \(\psi^{B}(t,\mathbf{x})\) the sub-probability density for the burnt trees (firing trees which turn into burnt ones) and \(\psi^{G}(t,\mathbf{x})\) the sub-probability density for the green trees. The probability density functions are assumed to be derivable with respect to the time variable. Moreover, we assume \(\psi^{G}(0,\mathbf{x})+\psi^{F}(0,\mathbf{x})=\psi^{F}(t,\mathbf{x})+\psi^{G} (t,\mathbf{x})+\psi^{B}(t,\mathbf{x})\) for every \(t\) and \(\int_{\Sigma}(\psi^{G}(0,\mathbf{x})+\psi^{F}(0,\mathbf{x}))\,d\mathbf{x}=1\). It is worth noticing that the existence of \(\psi^{F}\), \(\psi^{B}\) and \(\psi^{G}\) is a theoretical assumption; the actual fire distribution is just an instantiation of the random process associated to the fire distribution, see Fig. 1 which illustrates the spatio-temporal evolution of a forest fire. Anyway, an approximation of \(\psi^{F}\), \(\psi^{B}\) and \(\psi^{G}\) can be obtained through density estimation methods as for example the kernel density estimation (KDE) [29] which provides differentiable probability density functions, starting from a distribution like that in Fig. 1.
The model is inspired by an epidemic model [30] where an interpretation of the probability densities motivated by the statistical interpretation of quantum mechanics [31] is suggested. We remark that in the present paper the focus is on the probability density functions \(\psi^{F}\), \(\psi^{G}\), \(\psi^{B}\) which, by analogy with quantum mechanics, define the state of the forest fire. We assume the existence of these probability density functions for which we postulate a deterministic evolution law (see below).
Moreover, the model, by describing the evolution of \(\psi^{F}\), can be used to locate those regions of space where the probability of firing is going to increase or decrease. That could be very helpful to governments, as we remarked.
Now, we proceed to define the evolution equation for the state of the forest fire. We provide a non-linear evolution for the sub-probability distributions.
Figure 1: Qualitative illustration of the spatio-temporal evolution of a forest fire. The image illustrates qualitatively the expansion of the forest fire in Europe during the period 18-22 June 2022 and can be interpreted as a probability cloud evolving in time. The image has been downloaded from the NASA website [https://shorturl.at/pKTV5](https://shorturl.at/pKTV5).
The time it takes for a firing tree to burn down depends on several factors (humidity, nature of the wood, wind exposure, distance to other trees, mass, mass distribution, etc.) that make it a random variable, which we indicate by \(T\). Let \(p(t)\) denote its probability density, which we assume to be continuous. Therefore, the probability that a tree that started firing at \(t=0\) burns down in the time interval \([0,t]\) is \(P(T<t)=F(t)=\int_{0}^{t}p(\tau)\,d\tau\) and the probability that the same tree is still firing at time \(t\) is \(\pi(t)=F(\infty)-F(t)=P(T>t)\). Then, by Bayes' theorem, the probability that a tree burns down during the time interval \([t,t^{\prime}]\), \(t^{\prime}>t\), given it is firing at time \(t\), is \(P(t<T<t^{\prime}|T>t)=\frac{F(t^{\prime})-F(t)}{F(\infty)-F(t)}\). We can define the burning rate, that is, the conditional probability rate \(\alpha(t):=\lim_{\epsilon\to 0}\frac{P(t<T<t+\epsilon|T>t)}{\epsilon}=\frac{1}{F(\infty)-F(t)}\lim_{\epsilon\to 0}\frac{F(t+\epsilon)-F(t)}{\epsilon}=\frac{1}{\pi(t)}\frac{dF}{dt}=\frac{p(t)}{\pi(t)}\).
In the case where the probability distribution of \(T\) is exponential, \(p(t)=\lambda e^{-\lambda t}\), we obtain \(\alpha(t)=\lambda\), and the average time it takes for a firing tree to burn down is \(\hat{F}=\frac{1}{\lambda}\), where the parameter \(\lambda\) can depend on the factors specified above. Moreover, in the exponential case, the burning down process is memoryless; in other words, the probability that a tree that is firing at time \(t\) will burn down in the time interval \([t,t^{\prime}]\) does not depend on \(t\) but only on \(t^{\prime}-t\).
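The exponential case can be checked symbolically; the short sympy sketch below (illustrative only) recovers \(\alpha(t)=\lambda\), confirming that the burning rate is constant and the process memoryless.

```python
import sympy as sp

t, tau, lam = sp.symbols("t tau lambda", positive=True)
p = lam * sp.exp(-lam * tau)                 # density of the burn-down time T
F = sp.integrate(p, (tau, 0, t))             # F(t) = P(T < t)
F_inf = sp.limit(F, t, sp.oo)                # F(infinity) = 1
pi = sp.simplify(F_inf - F)                  # pi(t) = P(T > t)
alpha = sp.simplify(p.subs(tau, t) / pi)     # burning rate p(t) / pi(t)
print(alpha)                                 # prints lambda: the rate is constant
```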
### Modelling the burning down process
The probability rate \(\alpha(t)\) can be used to model the burning down process and to connect \(\psi^{B}\) and \(\psi^{F}\):
\[\psi^{B}(t,\mathbf{x})=\int_{0}^{t}\alpha(\tau)\psi^{F}(\tau, \mathbf{x})d\tau, \tag{1}\]
we notice that \(\alpha\) can also be a function of the position, since the trees in a location can be different from the trees in another location (e.g., type of wood, humidity and so on).
Note that the time \(\tau\) in equation (1) is the global time of the forest fire, not the time elapsed since each individual tree started burning. If we assume that the burning down process of a tree does not depend on the time the tree started burning, \(\alpha\) must be a constant (memoryless process). On the contrary, in order to describe a scenario in which external factors can influence the forest fire propagation, a time dependence of \(\alpha\) should be introduced. For example, firefighters' intervention will certainly increase the probability that a tree stops firing; that would correspond to an increasing function \(\alpha(t)\). Later (see the subsection below regarding the model with memory) we will modify the expression for \(\psi^{B}\) in order to describe a scenario with no external intervention but with a burning down process which is not memoryless.
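Numerically, the burnt-tree density of equation (1) is just a cumulative time integral of \(\alpha(t,\mathbf{x})\psi^{F}(t,\mathbf{x})\). A minimal sketch follows; it assumes that the history of \(\psi^{F}\) (and, if needed, of \(\alpha\)) is stored on a regular time grid, and the array shapes are illustrative.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def burnt_density(psi_F_hist, alpha_hist, dt):
    """psi_B(t, x) = int_0^t alpha(tau, x) psi_F(tau, x) dtau, cf. equation (1).

    psi_F_hist : array of shape (n_times, nx, ny), history of the firing-tree density.
    alpha_hist : same shape, or a scalar for a constant burning rate.
    """
    integrand = alpha_hist * psi_F_hist
    # initial=0.0 enforces psi_B(0, x) = 0, consistent with the model.
    return cumulative_trapezoid(integrand, dx=dt, axis=0, initial=0.0)
```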
### Modelling the fire spreading
The forest fire propagation is encoded in a kernel \(W(t,\mathbf{x},\mathbf{y})\) which describes the transition probability of the fire from \(\mathbf{y}\) to \(\mathbf{x}\) at time \(t\). We remark that the transition probability takes into account the randomness of the forest fire; to give some examples, it accounts for: 1) the randomness of the blaze geometry, 2) firebrand spotting, in which airborne burning particles, transported by the wind, can ignite new fires beyond the fire line, and 3) the randomness in the heat propagation due to its turbulence.
In particular, the quantity
\[[d^{2}\mathbf{x}\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})d\tau]\]
is interpreted as the conditional probability that a green tree in the neighbourhood centered at \(\mathbf{x}\) with area
\(d^{2}{\bf x}\) is set on fire during the time interval \([\tau,\tau+d\tau]\) if a tree on fire is present at \({\bf y}\) at time \(\tau\).1 Then,
Footnote 1: The meaning of \(W\) can be better understood by introducing the quantity
\[\mu_{\Delta_{2}}(\tau,{\bf y})\,d\tau:=\int_{\Delta_{2}}[W(\tau,{\bf x},{\bf y}) d\tau]\psi^{G}(\tau,{\bf x})d^{2}{\bf x}\]
which can be interpreted as the transition probability that a green tree in spatial region \(\Delta_{2}\) is set on fire during the time interval \((\tau,\tau+d\tau)\) if in \({\bf y}\) there is a tree on fire. Indeed, \(W(\tau,{\bf x},{\bf y})\) is the Radon-Nikodym derivative of \(\mu_{(\cdot)}(\tau,{\bf y})\) with respect to \(\int_{(\cdot)}\psi^{G}(\tau,{\bf x})\,d^{2}{\bf x}\); \(\mu_{(\cdot)}(\tau,{\bf y})\) being absolutely continuous with respect to \(\int_{(\cdot)}\psi^{G}(\tau,{\bf x})\,d^{2}{\bf x}\). That seems quite natural since \(\int_{\Delta_{2}}\psi^{G}(\tau,{\bf x})\,d^{2}{\bf x}=0\) implies that no green trees can be set on fire inside \(\Delta_{2}\).
\[[d^{2}{\bf x}\psi^{G}(\tau,{\bf x})W(\tau,{\bf x},{\bf y})d\tau](\psi^{F}(\tau,{\bf y})\,d^{2}{\bf y}) \tag{2}\]
represents the probability that a green tree in the neighbourhood centered at \({\bf x}\) with area \(d^{2}{\bf x}\) is set on fire by the trees on fire contained in the neighbourhood centered at \({\bf y}\) with area \(d^{2}{\bf y}\) during the time interval \([\tau,\tau+d\tau]\). Hence,
\[d^{2}{\bf x}\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,{\bf x})W(\tau,{\bf x},{ \bf y})\psi^{F}(\tau,{\bf y})\,d\tau\,d^{2}{\bf y} \tag{3}\]
gives the probability of new trees on fire in the neighbourhood centered at \({\bf x}\) with area \(d^{2}{\bf x}\) during the time interval \([0,t]\).
Since in the same time interval there is a probability that the trees burn down, we have
\[d^{2}{\bf x}\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,{\bf x})W( \tau,{\bf x},{\bf y})\psi^{F}(\tau,{\bf y})\,d\tau\,d^{2}{\bf y}=\] \[=d^{2}{\bf x}\,[\psi^{F}(t,{\bf x})-\psi^{F}(0,{\bf x})+\psi^{B} (t,{\bf x})]. \tag{4}\]
Concerning the time dependence of \(W\), it can be due, for example, to changes in wind and atmospheric conditions, or to external interventions such as the firefighters' action, which will certainly decrease the probability that a firing tree sets a green tree on fire; that would correspond to a decreasing function \(W(t)\).
The flux diagram in figure 2, case 1), illustrates the variation of the sub-probability distributions during an infinitesimal time interval \(dt\).
By (4) we obtain,
\[\psi^{F}(t,{\bf x})=\psi^{F}(0,{\bf x})-\psi^{B}(t,{\bf x})+\int_{\Sigma}\int_ {0}^{t}\,\psi^{G}(\tau,{\bf x})W(\tau,{\bf x},{\bf y})\psi^{F}(\tau,{\bf y})\, d\tau\,d^{2}{\bf y} \tag{5}\]
with,
\[\psi^{G}(0,{\bf x})+\psi^{F}(0,{\bf x}) =\psi^{F}(t,{\bf x})+\psi^{G}(t,{\bf x})+\psi^{B}(t,{\bf x}) \tag{6}\] \[1 =\int_{\Sigma}\left(\psi^{G}(0,{\bf x})+\psi^{F}(0,{\bf x})\right) d^{2}{\bf x}\] \[\psi^{B}(t,{\bf x}) =\int_{0}^{t}\,\alpha(\tau,{\bf x})\psi^{F}(\tau,{\bf x})\,d\tau.\]
By (6) and (5) we obtain,
\[\psi^{G}(t,\mathbf{x})=\psi^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\,\psi^{G} (\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,d\tau \,d^{2}\mathbf{y}. \tag{7}\]
Equations (5), (7) and (1) define the following non-linear system of integral equations
\[\begin{cases}\psi^{F}(t,\mathbf{x})&=\psi^{F}(0,\mathbf{x})-\psi^{B}(t, \mathbf{x})+\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,\mathbf{x})W(\tau, \mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\\ \psi^{G}(t,\mathbf{x})&=\psi^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\, \psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{ y})|\,d\tau\,d^{2}\mathbf{y}.\end{cases} \tag{8}\]
See the appendix for the proof of the existence and uniqueness of solutions of system (8).
System (8) can be written in differential form as follows. Supposing \(\epsilon\) sufficiently small, we can use (5) to derive the evolution from the state at time \(t\) to the state at time \(t+\epsilon\),
\[\psi^{F}(t+\epsilon,\mathbf{x}) =\psi^{F}(0,\mathbf{x})-\int_{0}^{t+\epsilon}\alpha(\tau)\psi^{F} (\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t+\epsilon}\,\psi^{G}(\tau, \mathbf{x})\,W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,d\tau\, d^{2}\mathbf{y}\] \[=\psi^{F}(0,\mathbf{x})-\int_{0}^{t}\alpha(\tau)\psi^{F}(\tau, \mathbf{x})\,d\tau-\int_{t}^{t+\epsilon}\alpha(\tau)\psi^{F}(\tau,\mathbf{x} )\,d\tau+\] \[\qquad\qquad+\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,\mathbf{x} )\,W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,d\tau\,d^{2} \mathbf{y}+\] \[\qquad\qquad+\int_{\Sigma}\int_{t}^{t+\epsilon}\,\psi^{G}(\tau, \mathbf{x})\,W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,d\tau\, d^{2}\mathbf{y}\] \[=\psi^{F}(t,\mathbf{x})-\alpha(t,\mathbf{x})\psi^{F}(t,\mathbf{x })\,\epsilon+\epsilon\psi^{G}(t,\mathbf{x})\int_{\Sigma}\,W(t,\mathbf{x}, \mathbf{y})\psi^{F}(t,\mathbf{y})\,d^{2}\mathbf{y}. \tag{9}\]
Then, dividing both sides by \(\epsilon\) and taking the limit for \(\epsilon\) going to \(0\), one gets
Figure 2: Evolution of the sub-probability densities during an infinitesimal time interval \(d\,t\). The arrows represent the direction of the probability fluxes.
\[\frac{\partial\psi^{F}(t,\mathbf{x})}{\partial t}=-\alpha(t,\mathbf{x})\psi^{F}(t, \mathbf{x})+\psi^{G}(t,\mathbf{x})\int_{\Sigma}\,W(t,\mathbf{x},\mathbf{y})\psi^ {F}(t,\mathbf{y})\,d^{2}\mathbf{y}. \tag{10}\]
Using a similar procedure, one can obtain the differential equation for the green trees probability density
\[\frac{\partial\psi^{G}(t,\mathbf{x})}{\partial t}=-\psi^{G}(t,\mathbf{x})\int_{\Sigma}W(t,\mathbf{x},\mathbf{y})\psi^{F}(t,\mathbf{y})\,d^{2}\mathbf{y}. \tag{11}\]
Equations (10) and (11) define a non-linear system of integro-differential equations.
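On a spatial grid, the integro-differential system (10)-(11) becomes a set of coupled ordinary differential equations in which the spatial integral reduces to a weighted sum over grid cells. The sketch below shows the structure of one explicit Euler time step; the flattened-grid representation, the dense kernel matrix and the array shapes are illustrative assumptions, not part of the model itself.

```python
import numpy as np

def euler_step(psi_F, psi_G, W, alpha, cell_area, dt):
    """One explicit Euler step of the memoryless system (10)-(11).

    psi_F, psi_G : 1D arrays over the N flattened grid cells,
    W            : (N, N) matrix, W[i, j] ~ W(t, x_i, y_j),
    alpha        : burning rate (scalar or 1D array over cells),
    cell_area    : area d^2 y associated with one grid cell.
    """
    # Spatial integral int_Sigma W(x, y) psi_F(y) d^2 y, for every x at once.
    ignition = (W @ psi_F) * cell_area
    dF = -alpha * psi_F + psi_G * ignition
    dG = -psi_G * ignition
    return psi_F + dt * dF, psi_G + dt * dG
```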
### The case with memory
If one needs to describe a burning down process whose probability rate depends on time (memory), it is possible to modify the model by changing the way \(\psi^{B}\) is calculated. In particular, recalling that \(\pi(t)\) is the probability that a tree that went on fire at time \(t=0\) is still on fire at time \(t\), the probability that the trees that went on fire at position \(\mathbf{x}\) during the time interval \([\tau,\tau+d\tau]\) have not completely burned at time \(t>\tau\) is given by
\[\int_{\Sigma}\,\Big{[}\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\Big{]}\pi(t-\tau)\,d\tau\,d^{2}\mathbf{y}. \tag{12}\]
As a consequence, the flux diagram has to be modified, see figure 2 case 2).
On the basis of these considerations, we can write down the evolution equations for the sub-probability densities \(\psi^{F}\) and \(\psi^{G}\):
\[\psi^{F}(t,\mathbf{x}) =\psi^{F}(0,\mathbf{x})\pi(t)+\int_{0}^{t}\,\int_{\Sigma}\psi^{G} (\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\pi(t -\tau)\,d\tau\,d^{2}\mathbf{y}, \tag{13}\] \[\psi^{G}(t,\mathbf{x}) =\psi^{G}(0,\mathbf{x})-\int_{0}^{t}\int_{\Sigma}\psi^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,d\tau\,d^{ 2}\mathbf{y}. \tag{14}\]
The sub-probability density \(\psi^{B}\) of burned trees is defined by
\[\psi^{G}(0,\mathbf{x})+\psi^{F}(0,\mathbf{x})=:\psi^{G}(t,\mathbf{x})+\psi^{F }(t,\mathbf{x})+\psi^{B}(t,\mathbf{x}), \tag{15}\]
and the following property holds
\[\int_{\Sigma}\left(\psi^{G}(0,\mathbf{x})+\psi^{F}(0,\mathbf{x})\right)d^{2} \mathbf{x}=\int_{\Sigma}\left(\psi^{G}(t,\mathbf{x})+\psi^{F}(t,\mathbf{x})+ \psi^{B}(t,\mathbf{x})\right)d^{2}\mathbf{x}=1.\]
Equations (13) and (14) constitute a non-linear system of integral equations.
By the same procedure we used to derive equations (10) and (11), the system of equations (13)-(14) can be written in differential form. Supposing \(\epsilon\) sufficiently small, we can derive the evolution from the state at time \(t\) to the state at time \(t+\epsilon\),
\[\psi^{F}(t+\epsilon,\mathbf{x}) =\psi^{F}(0,\mathbf{x})\pi(t+\epsilon)+\int_{0}^{t+\epsilon}\int_ {\Sigma}\psi^{G}(\tau,\mathbf{x})\,W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,\pi(t+\epsilon-\tau)\,d\tau\,d^{2}\mathbf{y}\] \[\approx\psi^{F}(0,\mathbf{x})[\pi(t)+\epsilon\dot{\pi}(t)]+\int_ {\Sigma}\int_{0}^{t}\psi^{G}(\tau,\mathbf{x})\,W(\tau,\mathbf{x},\mathbf{y}) \psi^{F}(\tau,\mathbf{y})\left[\pi(t-\tau)+\epsilon\dot{\pi}(t-\tau)\right]d \tau\,d^{2}\mathbf{y}\] \[+\int_{t}^{t+\epsilon}\int_{\Sigma}\psi^{G}(\tau,\mathbf{x})\,W( \tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,\pi(t-\tau)\,d\tau\,d^{2} \mathbf{y}.\]
Then, we can get an integro-differential equation for the firing trees probability density \(\psi^{F}\). Indeed \(\psi^{F}(t+\epsilon,{\bf x})=\psi^{F}(t,{\bf x})+\epsilon\frac{\partial\psi^{F}(t,{\bf x})}{\partial t}+o(\epsilon)\) so that substituting the latter in the previous expression, dividing both sides by \(\epsilon\) and taking the limit for \(\epsilon\to 0\), we obtain
\[\frac{\partial\psi^{F}(t,{\bf x})}{\partial t} = \psi^{F}(0,{\bf x})\dot{\pi}(t)+\int_{\Sigma}\int_{0}^{t}\psi^{G}( \tau,{\bf x})\,W(\tau,{\bf x},{\bf y})\psi^{F}(\tau,{\bf y})\,\dot{\pi}(t-\tau )\,d\tau\,d^{2}{\bf y} \tag{16}\] \[+ \int_{\Sigma}\psi^{G}(t,{\bf x})\,W(t,{\bf x},{\bf y})\psi^{F}(t,{\bf y})\,d^{2}{\bf y}.\]
Analogously, it is possible to write an integro-differential equation for the green trees probability density \(\psi^{G}\),
\[\frac{\partial\psi^{G}(t,{\bf x})}{\partial t} = -\int_{\Sigma}\psi^{G}(t,{\bf x})\,W(t,{\bf x},{\bf y})\psi^{F}(t,{\bf y})\,d^{2}{\bf y}. \tag{17}\]
For the proof of the existence and uniqueness of solutions of the previous system, see Appendix.
In the case of a memoryless burning process, \(\pi(t)=e^{-\lambda t}\), and it can be easily seen that equations (16) and (17) coincide with equations (10) and (11) respectively.
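For completeness, the short computation behind this statement is the following (it only uses \(\dot{\pi}(t)=-\lambda\pi(t)\) and equation (13)):

\[\psi^{F}(0,\mathbf{x})\dot{\pi}(t)+\int_{\Sigma}\int_{0}^{t}\psi^{G}(\tau,\mathbf{x})\,W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,\dot{\pi}(t-\tau)\,d\tau\,d^{2}\mathbf{y}=-\lambda\Big{[}\psi^{F}(0,\mathbf{x})\pi(t)+\int_{\Sigma}\int_{0}^{t}\psi^{G}(\tau,\mathbf{x})\,W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\,\pi(t-\tau)\,d\tau\,d^{2}\mathbf{y}\Big{]}=-\lambda\,\psi^{F}(t,\mathbf{x}),\]

so that equation (16) reduces to equation (10) with \(\alpha=\lambda\), while equation (17) is identical to equation (11).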
## 3 Numerical simulations
In this section we investigate the potential of the proposed model by presenting the results of some numerical simulations. An essential element of the model is the rate function \(W\), which has to incorporate the main factors influencing fire spread, such as the type of vegetation, terrain slope, humidity and wind speed, as well as the spotting phenomenon. It is not easy to take all these factors into account, since they are not completely understood and, furthermore, they interact with each other. We adopt a phenomenological approach inspired by some theoretical results [32, 33]. In particular, we assume that \(W\) can be written as the sum of two terms, \(W_{1}(t,{\bf x},{\bf y})\) and \(W_{2}(t,{\bf x},{\bf y})\). The former takes into account the fire transition probability rate from \({\bf y}\) to \({\bf x}\) due to heat transfer by radiation or convection and can be written as
\[W_{1}(t,{\bf x},{\bf y})=c_{w_{1}}\exp\Big{[}-\frac{\rho^{2}}{2\sigma_{w_{1}}^{2}\,(1+e\cos{(\varphi-\bar{\varphi})})}\Big{]}.\]
where \(\rho:=|{\bf x}-{\bf y}|\), \(c_{w_{1}}\) and \(\sigma_{w_{1}}\) can depend on the space and time variables, \(e\) is a sort of eccentricity taking into account that fire spreads mainly in the direction of the resultant wind-slope vector \({\bf v}\), while \(\bar{\varphi}\) and \(\varphi\) are, respectively, the angles that the vectors \({\bf v}\) and \({\bf x}-{\bf y}\) make with the \(x\)-axis. The vector \({\bf v}\) is given by
\[{\bf v}={\bf v}_{w}+{\bf v}_{s},\]
where \({\bf v}_{w}\) is the component in the wind direction, whose modulus quantifies the wind effect, while \({\bf v}_{s}\) is in the direction of the maximum slope at \({\bf y}\) and has modulus equal to the slope effect. Wind and slope effects are respectively given by:
\[|{\bf v}_{w}|=C(3.281\nu)^{B}(\beta/\beta_{op})^{-E},\] \[|{\bf v}_{s}|=5.275\,\beta^{-0.3}(\tan{\omega})^{2},\]
here \(\nu\) is the wind speed, \(\beta\) is the packing ratio of the fuel bed, \(\beta_{op}\) is the optimum packing ratio, \(\omega\) is the slope at \({\bf y}\) expressed in radians, and \(C\), \(B\) and \(E\) are coefficients which depend on the fuel particle size in the fuel bed [34].
The functions \(c_{w_{1}}\), \(\sigma_{w_{1}}\) and the eccentricity \(e\) depend on the above-specified factors. This dependence might be accounted for by physical models or, empirically, by fitting real data. Here, we do not enter into the details and limit ourselves to developing the spreading model and its interpretation; its implementation in real scenarios, for which a deeper description of the parameters is necessary, will be the topic of a future work.
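A minimal numerical sketch of the kernel \(W_{1}\) is given below. It reads the exponent with the \((1+e\cos(\varphi-\bar{\varphi}))\) factor in the denominator (elongation along \({\bf v}\)), and all inputs (the fuel coefficients \(C\), \(B\), \(E\), the packing ratios, and the wind and slope data) are placeholders to be supplied, e.g. through the fitting mentioned above.

```python
import numpy as np

def wind_slope_vector(wind_speed, wind_dir, slope_angle, slope_dir,
                      C, B, E, beta, beta_op):
    """Resultant wind-slope vector v = v_w + v_s from the effects defined above."""
    v_w = C * (3.281 * wind_speed) ** B * (beta / beta_op) ** (-E)   # wind effect
    v_s = 5.275 * beta ** (-0.3) * np.tan(slope_angle) ** 2          # slope effect
    return (v_w * np.array([np.cos(wind_dir), np.sin(wind_dir)])
            + v_s * np.array([np.cos(slope_dir), np.sin(slope_dir)]))

def W1(x, y, v, c_w1, sigma_w1, e):
    """Heat-transfer transition rate from y to x, elongated along v (requires e < 1)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    rho = np.linalg.norm(d)
    if rho == 0.0:
        return c_w1
    phi = np.arctan2(d[1], d[0])        # angle of x - y with the x-axis
    phi_bar = np.arctan2(v[1], v[0])    # angle of v with the x-axis
    return c_w1 * np.exp(-rho**2 / (2.0 * sigma_w1**2 * (1.0 + e * np.cos(phi - phi_bar))))
```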
Concerning the function \(W_{2}\), it represents the fire transition probability rate from \(\mathbf{y}\) to \(\mathbf{x}\) due to firebrand generation, transport, and ignition. We assume that firebrand generation is a Poisson process [35] with density2 \(c_{w_{2}}(\mathbf{y},t)\). Therefore, the emission probability during a sufficiently small time interval \([t,t+\Delta t]\) does not depend on \(t\) but only on \(\Delta t\). In other words, we model a tree as a Poissonian emitter. That is an idealization, since the emission probability should depend on the firing intensity of the tree and hence on the time it started firing. Nevertheless, such an idealization should be effective in describing the fire spotting process, since we focus on the average behavior of the trees. According to the theory of Poisson processes, the probability that a single firebrand is emitted during the time interval \([t,t+\Delta t]\), given that the tree is firing at time \(t\), is \(P(Emission\ in\ [t,t+\Delta t]|Firing\ at\ \mathbf{y}):=c_{w_{2}}(\mathbf{y},t)\Delta t+o(\Delta t)\), while the probability that more than one firebrand is emitted is \(o(\Delta t)\). It follows that the probability that a tree in a neighborhood of \(\mathbf{y}\) is firing at time \(t\) and emits a firebrand during the time interval \([t,t+\Delta t]\) is
Footnote 2: In a Poisson process, the density is constant. Here, the time dependence in \(c_{w_{2}}(\mathbf{y},t)\) is introduced to take into account possible variations in the emission rate due to variations in the atmospheric conditions and/or external interventions (which would correspond to a Poisson process with a different density), while the space dependence can be due to the fact that different space regions can contain different types of trees.
\[P(emission\ in\ [t,t+\Delta t]|firing\ at\ \mathbf{y})\psi^{F}(t,\mathbf{y})\,d^{2}\mathbf{y}=c_{w_{2}}(\mathbf{y},t)\psi^{F}(t,\mathbf{y})\Delta t\,d^{2}\mathbf{y}.\]
Further, we assume \(W_{2}\) to have the form
\[W_{2}=\left\{\begin{array}{l}\frac{c_{w_{2}}}{\rho}\exp\Big{[}-\frac{(\log\left(\rho/\rho_{0}\right)-\mu)^{2}}{2\sigma_{w_{2}}^{2}}\Big{]}\exp\Big{(}-\frac{1}{\cos\left(\varphi-\bar{\varphi}\right)}\Big{)}\exp\Big{(}-\Big{(}\frac{\big{(}(\mathbf{x}-\mathbf{y})\cdot\frac{\mathbf{v}_{w}}{|\mathbf{v}_{w}|}\big{)}^{2}}{2\sigma_{1}^{2}}+\frac{\big{(}(\mathbf{x}-\mathbf{y})\cdot\mathbf{n}_{w}\big{)}^{2}}{2\sigma_{2}^{2}}\Big{)}\Big{)},\quad-\frac{\pi}{2}<\varphi-\bar{\varphi}<\frac{\pi}{2}\\ 0,\qquad\text{otherwise},\end{array}\right. \tag{18}\]
where \(\frac{1}{\rho}\exp\Big{[}-\frac{(\log\left(\rho/\rho_{0}\right)-\mu)^{2}}{2\sigma_{w_{2}}^{2}}\Big{]}\times\exp\Big{(}-\frac{1}{\cos\left(\varphi-\bar{\varphi}\right)}\Big{)}\) is proportional to the probability that a firebrand generated at \(\mathbf{y}\) lands at \(\mathbf{x}\) [32, 33]. In particular, the first two factors represent a lognormal function of the landing distance [32, 36], while the third factor takes into account that this probability has its maximum in the wind direction and approaches zero as the angle between \(\mathbf{x}-\mathbf{y}\) and this direction goes to \(\frac{\pi}{2}\). The last factor is proportional to the probability of fire ignition at \(\mathbf{x}\), given that in \(\mathbf{x}\) there is a green tree,
\[\exp\Big{(}-\Big{(}\frac{\big{(}(\mathbf{x}-\mathbf{y})\cdot\frac{\mathbf{v}_{w}}{|\mathbf{v}_{w}|}\big{)}^{2}}{2\sigma_{1}^{2}}+\frac{\big{(}(\mathbf{x}-\mathbf{y})\cdot\mathbf{n}_{w}\big{)}^{2}}{2\sigma_{2}^{2}}\Big{)}\Big{)}\psi^{G}(t,\mathbf{x})\,d^{2}\mathbf{x}\]
being the probability that a tree in a neighborhood of \(\mathbf{x}\) is put on fire by a firebrand coming from \(\mathbf{y}\). This probability decreases exponentially with \(\rho\), since the longer the travelled distance, the greater the decrease in the temperature of the firebrands. It is also taken into account that the velocity in the wind direction is higher, hence the same distance is travelled in less time and the decrease in temperature is smaller.
Therefore, combining all those conditional probabilities, we obtain that
\[dt\,d^{2}\mathbf{x}\int_{\Sigma}\psi^{G}(t,\mathbf{x})W_{2}\psi^{F}(t, \mathbf{y})d^{2}\mathbf{y}\]
can be interpreted as the probability that a firebrand arrives in a neighbourhood of \(\mathbf{x}\) and sets a green tree on fire during the time interval \([t,t+dt]\).
The meaning of the various terms which appear in (18) is the following
* \(\rho_{0}\) is a reference length, \(\mu\) and \(\sigma_{w_{2}}\) are the mean and the standard deviation of \(\log\frac{\rho}{\rho_{0}}\) respectively;
* \(\mathbf{n}_{w}\) is the unit vector in the direction orthogonal to that of the wind, and \(\sigma_{1}\) and \(\sigma_{2}\) are the standard deviations of the probability of fire ignition in the wind direction and in the direction orthogonal to it.
The terms \(c_{w_{2}}\), \(\mu\), \(\sigma_{w_{2}}\), \(\sigma_{1}\) and \(\sigma_{2}\) depend on the various factors which influence the firebrand phenomenon, such as the critical surface fire intensity needed to ignite the crown of a tree, the surface fireline intensity, the wind speed, the fire intensity produced by torching trees and so on. This dependence, too, will be the object of future studies.
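A corresponding numerical sketch of the spotting kernel \(W_{2}\) is given below. It follows the structure of equation (18), reading both projected distances as Gaussian factors; the parameter values, and the choice of the wind unit vector as the reference direction, are illustrative assumptions.

```python
import numpy as np

def W2(x, y, v_w, c_w2, rho_0, mu, sigma_w2, sigma_1, sigma_2):
    """Firebrand (spotting) transition rate from y to x, cf. equation (18)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    rho = np.linalg.norm(d)
    if rho == 0.0:
        return 0.0
    u_w = np.asarray(v_w, dtype=float) / np.linalg.norm(v_w)   # wind unit vector
    n_w = np.array([-u_w[1], u_w[0]])                          # orthogonal unit vector
    cos_dphi = np.dot(d, u_w) / rho                            # cos(phi - phi_bar)
    if cos_dphi <= 0.0:                                        # only the downwind half-plane
        return 0.0
    landing = (c_w2 / rho) * np.exp(-(np.log(rho / rho_0) - mu) ** 2
                                    / (2.0 * sigma_w2 ** 2)) * np.exp(-1.0 / cos_dphi)
    ignition = np.exp(-(np.dot(d, u_w) ** 2 / (2.0 * sigma_1 ** 2)
                        + np.dot(d, n_w) ** 2 / (2.0 * sigma_2 ** 2)))
    return landing * ignition
```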
We conducted two preliminary numerical experiments. In the first one we considered only the function \(W_{1}\), while in the second one we also took into account \(W_{2}\). For simplicity, in both experiments we considered a flat rectangular wildland and a constant wind blowing along the negative \(x\)-axis. Moreover, \(c_{w_{1}}\), \(\sigma_{w_{1}}\), \(e\), \(c_{w_{2}}\), \(\sigma_{w_{2}}\), \(\sigma_{1}\), and \(\sigma_{2}\) were taken to be constant. The unit time (u.t.) was taken equal to the decay time of the firing trees, while the unit length (u.l.) was chosen in such a way that a 1\(\times\)1 square in the spatial grid corresponds to the mean area of a tree, which is around 15 m\({}^{2}\). The values of the constants are reported in Table 1 below.
At the middle of the rectangular land there is a subregion, parallel to the \(y\)-axis, which is not inflammable, for example a river. In both cases we took \(\pi(t)=\exp{(-t)}\). The initial green trees sub-probability density is constant in the inflammable region, while that of the firing trees is non-null only in a small area centered at \(x_{1}=1.92\,u.l.\), \(x_{2}=0.6\,u.l.\), see Fig. 3 below. From Figs. 3-8 below, it can be seen that if the spotting phenomenon is not taken into account, the fire propagates mainly in the wind direction and the wildfire growth is ovoid. The fire reaches the non-inflammable region but does not manage to overcome it, and slowly propagates against the wind until it is extinguished. From Fig. 8 one can see that there is a non-null probability that a part of the trees survives and that this probability increases near the right boundary of the region. When the term \(W_{2}\) is added to the transition rate, the fire manages to jump over the non-inflammable region, see Figs. 9-14 below. We numerically solved the system of equations (16)-(17) by using a fourth-order Runge-Kutta scheme. We used a spatial grid consisting of 251\(\times\)121 points and a time step equal to 10\({}^{-2}\) u.t.
Finally, we remark that the numerical simulation could be greatly sped up by exploiting the inherent parallelism of the model; in fact, the spatial integrals which appear on the right-hand sides of equations (16)-(17) can be computed at any point of the grid independently of each other.
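In practice this parallelism can be exploited by precomputing the kernel as a matrix over the flattened grid, so that all spatial integrals are evaluated at once as matrix-vector products. The sketch below evaluates the right-hand sides of (16)-(17) in this vectorised form; the array layout, the simple quadrature of the time convolution and the requirement that \(\dot{\pi}\) be supplied as a vectorised callable are illustrative assumptions.

```python
import numpy as np

def rhs_memory(k, dt, psi_F_hist, psi_G_hist, W, pi_dot, cell_area):
    """Right-hand sides of (16)-(17) at time step k, for all grid cells at once.

    psi_F_hist, psi_G_hist : (k+1, N) arrays with the history up to t_k = k*dt,
    W                      : (N, N) kernel matrix over the flattened grid,
    pi_dot                 : vectorised callable, time derivative of pi,
    cell_area              : area d^2 y of one grid cell.
    """
    t_k = k * dt
    # S[j] = psi_G(t_j, x) * int_Sigma W(x, y) psi_F(t_j, y) d^2 y, a vector over cells.
    S = psi_G_hist[: k + 1] * (psi_F_hist[: k + 1] @ W.T) * cell_area
    weights = pi_dot(t_k - dt * np.arange(k + 1))        # pi_dot(t_k - tau_j)
    memory = dt * np.sum(S * weights[:, None], axis=0)   # simple quadrature of the convolution
    dF = psi_F_hist[0] * pi_dot(t_k) + memory + S[k]
    dG = -S[k]
    return dF, dG
```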
| Parameter | Value | Parameter | Value | Parameter | Value | Parameter | Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(c_{w_{1}}\) | \(1\times 10^{3}\) 1/u.t. | \(\bar{\varphi}\) | \(\pi\) rad | \(\mu\) | 25 | \(\sigma_{2}\) | \(4.86\times 10^{-2}\) u.l. |
| \(\sigma_{w_{1}}\) | \(1.41\times 10^{-2}\) u.l. | \(c_{w_{2}}\) | \(1\times 10^{3}\) u.l./u.t. | \(\sigma_{w_{2}}\) | 5 | | |
| \(e\) | 0.9 | \(\rho_{0}\) | \(2.38\times 10^{-4}\) u.l. | \(\sigma_{1}\) | 0.391 u.l. | | |

Table 1: Values of the terms present in the expressions of \(W_{1}\) and \(W_{2}\). The terms without units are dimensionless.
Figure 3: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=0 u.t. First numerical experiment.
Figure 4: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=0.2 u.t. First numerical experiment.
Figure 5: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=0.3 u.t. First numerical experiment.
Figure 6: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=1 u.t. First numerical experiment.
Figure 7: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=3 u.t. First numerical experiment.
Figure 8: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=7 u.t. First numerical experiment.
Figure 9: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=0 u.t. Second numerical experiment.
Figure 10: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=0.2 u.t. Second numerical experiment.
Figure 11: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=0.3 u.t. Second numerical experiment.
Figure 12: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=1 u.t. Second numerical experiment.
Figure 13: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=3 u.t. Second numerical experiment.
Figure 14: Left: Green trees sub-probability density. Right: Firing trees sub-probability density. Time=7 u.t. Second numerical experiment.
## 4 Conclusions
We proposed a non-linear stochastic model for forest fire spreading which is continuous in space and time and can be used in forest fire management. It is well known that forest fires are a contributing factor to climate change and are responsible for large \(CO_{2}\) emissions. Moreover, forest fire spreading is a complex phenomenon characterized by a stochastic behavior whose description requires stochastic models. Until some years ago, continuous stochastic models would have been of little practical use because of the lack of data, but nowadays the enormous quantity of geo-referenced data and the availability of powerful techniques for their analysis can provide a very careful picture of forest fires, opening the way to more realistic stochastic models that are continuous in space and time. We provide such a model, which is able to exploit geo-referenced data to their full potential and can be used for forest fire forecasting. The effects due to land slope, wind and fire spotting are included. Moreover, the model can take into account the effects of firefighters' interventions. Finally, at variance with stochastic models based on cellular automata, where the spreading happens only in a few directions, it is continuous in space. To our knowledge, it is the first model of this kind. Besides, it is a kind of parent model from which many other models (both stochastic and deterministic) can be derived. For example, stochastic models based on cellular automata can be obtained by time and/or space discretization.
The state of the forest fire is described by the sub-probability densities of green and firing trees (both can be estimated thanks to data coming from satellites and ground-based detectors), whose evolution is determined by a density kernel \(W\) and can be calculated by solving a system of integro-differential equations. The evolution of the sub-probability densities predicted by the model can be used to locate those regions in space where the fire probability is going to increase, providing crucial information for firefighters.
In order to implement the model numerically, we used a phenomenological approach inspired by some theoretical results [32, 33] that suggested a particular structure for \(W\). Using a fourth-order Runge-Kutta scheme, we performed two numerical experiments and calculated the evolution of the probability densities in the case of 1) a forest fire propagating towards a river in the presence of wind and 2) a forest fire propagating towards a river in the presence of both wind and fire spotting. The simulations show that in the first case the fire propagation stops at the river, while in the second case the probability that the fire propagates beyond the river is different from zero. In both cases the probability of propagation against the wind is different from zero but lower than the probability of propagation in the wind direction.
## Acknowledgements
The present work was performed under the auspices of the GNFM (Gruppo Nazionale di Fisica Matematica).
The present research has been funded by the European Union and by the Italian Ministry of Research (NextGenerationEU), project title: TECH4YOU, SPOKE 1 GOAL 1.4 PP3, project number: \(ECS-00000009\).
## Appendix
In this Appendix we study the existence and uniqueness of solutions for system (8) and for the system defined by equations (13) and (14). In particular, by using the Banach fixed point theorem, we prove the existence and uniqueness of solutions of the two systems. For the reader's convenience, we copy the two systems below,
\[\begin{cases}&\psi^{F}(t,\mathbf{x})=\psi^{F}(0,\mathbf{x})-\psi^{B}(t,\mathbf{ x})+\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x}, \mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\\ &\psi^{G}(t,\mathbf{x})=\psi^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\,\psi^ {G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})| \,d\tau\,d^{2}\mathbf{y}\end{cases} \tag{19}\]
and
\[\begin{cases}&\psi^{F}(t,\mathbf{x})=\psi^{F}(0,\mathbf{x})\pi(t)+\int_{0}^{t }\int_{\Sigma}\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})\psi^{F}( \tau,\mathbf{y})\pi(t-\tau)\,d\tau\,d^{2}\mathbf{y}\\ &\psi^{G}(t,\mathbf{x})=\psi^{G}(0,\mathbf{x})-\int_{0}^{t}\int_{\Sigma}\psi^ {G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})\psi^{F}(\tau,\mathbf{y})\, d\tau\,d^{2}\mathbf{y}.\end{cases} \tag{20}\]
for the memoryless and the memory cases respectively.
### Existence and uniqueness of solutions I
In order to ensure that system (19) admits a unique solution, we consider the operator \(\hat{F}:L^{\infty}(\mathbb{R}\times\Sigma,dt\,d^{2}\mathbf{x})\oplus L^{ \infty}(\mathbb{R}\times\Sigma,dt\,d^{2}\mathbf{x})\to L^{\infty}(\mathbb{R} \times\Sigma,dt\,d^{2}\mathbf{x})\oplus L^{\infty}(\mathbb{R}\times\Sigma, dt\,d^{2}\mathbf{x})\) defined by
\[\begin{pmatrix}\widetilde{\psi}^{F}(t,\mathbf{x})\\ \widetilde{\psi}^{G}(t,\mathbf{x})\end{pmatrix}:=\hat{F}\begin{pmatrix}\psi^{F}\\ \psi^{G}\end{pmatrix}(t,\mathbf{x})=\begin{pmatrix}\psi^{F}(0,\mathbf{x})-\psi^{B}(t,\mathbf{x})+\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\\ \psi^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\end{pmatrix} \tag{21}\]
and make the following assumptions. For all \(t\in[0,\infty)\):
* we assume \(\psi^{G}(t,\mathbf{x}),\psi^{F}(t,\mathbf{x})\in L^{\infty}(\mathbb{R}\times \Sigma,dt\,d^{2}\mathbf{x})\) to be continuous for all \(\mathbf{x}\in\Sigma\),
* \(0\leq\psi^{G}(t,\mathbf{x})\leq\psi^{G}(0,\mathbf{x})\leq M\), \(|\psi^{F}(t,\mathbf{x})|\leq 2M\), \(\psi^{F}(0,\mathbf{x})\geq 0\) for all \(\mathbf{x}\in\Sigma\), \(\psi^{G}(0,\mathbf{x})+\psi^{F}(0,\mathbf{x})\leq M\), where \(M\) is a positive constant,
* concerning the kernel \(W\), we assume \(0\leq W(t,\mathbf{x},\mathbf{y})\leq\bar{W}\in\mathbb{R}_{+}\). Moreover, we assume that there is a \(T>0\) such that \(\int_{kT}^{(k+1)T}\alpha(\tau)\,d\tau\leq\frac{1}{2}\), and \(2M|\Sigma|\bar{W}T\leq\frac{1}{4}\), for every \(k\in\mathbb{N}\), where \(|\Sigma|\) denotes the area of the region \(\Sigma\).
The space \(\mathcal{L}^{\infty}(\mathbb{R}):=L^{\infty}(\mathbb{R}\times\Sigma,dt\,d^{2} \mathbf{x})\oplus L^{\infty}(\mathbb{R}\times\Sigma,dt\,d^{2}\mathbf{x})\) with the norm
\[\|(\psi^{F},\psi^{G})\|_{\infty}=\max\{\|\psi^{F}\|_{\infty},\|\psi^{G}\|_{ \infty}\}\]
is a Banach space.
The proof is divided into steps. Let us consider the space \(S_{nT}\) which is the subset of \(L^{\infty}([(n-1)T,nT]\times\Sigma,dt\,d^{2}\mathbf{x})\oplus L^{\infty}([(n- 1)T,nT]\times\Sigma,dt\,d^{2}\mathbf{x})\), with \(n=1,2,\dots\), such that the following assumptions hold
* \(\psi^{G}(t,\mathbf{x}),\psi^{F}(t,\mathbf{x})\in L^{\infty}([(n-1)T,nT]\times \Sigma,dt\,d^{2}\mathbf{x})\) continuous for all \(\mathbf{x}\in\Sigma\),
* \(0\leq\psi^{G}(t,\mathbf{x})\leq\psi^{G}((n-1)T,\mathbf{x})\leq M\), \(|\psi^{F}(t,\mathbf{x})|\leq 2M\), \(\psi^{F}((n-1)T,\mathbf{x})\geq 0\) for all \(\mathbf{x}\in\Sigma\), \(\psi^{G}((n-1)T,\mathbf{x})+\psi^{F}((n-1)T,\mathbf{x})\leq M\), where \(M\) is a positive constant.
Moreover, we introduce \(g(\mathbf{x}):=\psi^{G}(0,\mathbf{x})\), \(f(\mathbf{x}):=\psi^{F}(0,\mathbf{x})\) (that correspond to the choice of the initial condition for the system).
Let \(\hat{F}_{T}:L^{\infty}([0,T]\times\Sigma,dt\,d^{2}\mathbf{x})\oplus L^{\infty}( [0,T]\times\Sigma,dt\,d^{2}\mathbf{x})\to L^{\infty}([0,T]\times\Sigma,dt\,d^{ 2}\mathbf{x})\oplus L^{\infty}([0,T]\times\Sigma,dt\,d^{2}\mathbf{x})\) be the operator defined in the time interval \([0,T]\) by
\[\begin{pmatrix}\widetilde{\psi}^{F}(t,\mathbf{x})\\ \widetilde{\psi}^{G}(t,\mathbf{x})\end{pmatrix}: =\hat{F}_{T}\begin{pmatrix}\psi^{F}\\ \psi^{G}\end{pmatrix}(t,\mathbf{x})\] \[=\begin{pmatrix}\psi^{F}(0,\mathbf{x})-\psi^{B}(t,\mathbf{x})+\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\\ \psi^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\end{pmatrix}. \tag{22}\]
In order for the Banach fixed point theorem to be applied, we need to show that \(\hat{F}_{T}S_{T}\subset S_{T}\) and that \(\hat{F}_{T}:S_{T}\to S_{T}\) is a contraction. First, we show that \(\hat{F}_{T}S_{T}\subset S_{T}\). By conditions 1), 2) and 3), we have
\[\int_{\Sigma}\int_{0}^{t}\,\psi^{G}(\tau,\mathbf{x})W(\tau, \mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\leq \psi^{G}(0,\mathbf{x})\int_{\Sigma}\int_{0}^{t}\,W(\tau,\mathbf{x},\mathbf{y}) |\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\] \[\leq\psi^{G}(0,\mathbf{x})2M\Sigma\bar{W}T\leq\psi^{G}(0,\mathbf{ x}).\]
Therefore,
\[0\leq\widetilde{\psi}^{G}(t,\mathbf{x})\leq\widetilde{\psi}^{G}(0,\mathbf{x})=\psi^{G}(0,\mathbf{x})\leq M.\]
We proceed analogously for \(\widetilde{\psi}^{F}\),
\[|\widetilde{\psi}^{F}(t,\mathbf{x})| =\left|\psi^{F}(0,\mathbf{x})+\int_{\Sigma}\int_{0}^{t}\,\psi^{G} (\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}-\psi^{B}(t,\mathbf{x})\right|\] \[\leq\psi^{F}(0,\mathbf{x})+\int_{\Sigma}\int_{0}^{t}\,\psi^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}+\left|\psi^{B}(t,\mathbf{x})\right|\] \[\leq\psi^{F}(0,\mathbf{x})+\psi^{G}(0,\mathbf{x})+|\psi^{B}(t, \mathbf{x})|\leq\psi^{F}(0,\mathbf{x})+\psi^{G}(0,\mathbf{x})+M\leq 2M,\]
where the inequality \(|\int_{0}^{T}\alpha(\tau)\,\psi^{F}(\tau,\mathbf{x})\,d\tau|\leq M\) has been used.
We have proved that \(\hat{F}_{T}S_{T}\subset S_{T}\). Now we show that \(\hat{F}_{T}\) is a contraction. Since \(\psi_{1}^{F}(0,\mathbf{x})=\psi_{2}^{F}(0,\mathbf{x})\) and \(\psi_{1}^{G}(0,\mathbf{x})=\psi_{2}^{G}(0,\mathbf{x})\), we have
\[\hat{F}_{T}\begin{pmatrix}\psi_{1}^{F}\\ \psi_{1}^{G}\end{pmatrix}(t,\mathbf{x})-\hat{F}_{T}\begin{pmatrix}\psi_{2}^{F} \\ \psi_{2}^{G}\end{pmatrix}(t,\mathbf{x})=\\ =\begin{pmatrix}-\psi_{1}^{B}(t,\mathbf{x})+\psi_{2}^{B}(t,\mathbf{x})+\int_{ \Sigma}\int_{0}^{t}\,\left[\psi_{1}^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x}, \mathbf{y})|\psi_{1}^{F}(\tau,\mathbf{y})|-\psi_{2}^{G}(\tau,\mathbf{x})W(\tau, \mathbf{x},\mathbf{y})|\psi_{2}^{F}(\tau,\mathbf{y})|\right]d\tau\,d^{2} \mathbf{y}\\ \hskip 14.226378pt-\int_{\Sigma}\int_{0}^{t}\,\left[\psi_{1}^{G}(\tau,\mathbf{x })W(\tau,\mathbf{x},\mathbf{y})|\psi_{1}^{F}(\tau,\mathbf{y})|-\psi_{2}^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{2}^{F}(\tau,\mathbf{y})| \right]d\tau\,d^{2}\mathbf{y}\end{pmatrix}.\]
Notice that
\[\begin{split}&\Big{|}\int_{\Sigma}\int_{0}^{t}\,\left[\psi_{1}^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{1}^{F}(\tau,\mathbf{y})|- \psi_{2}^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{2}^{F}(\tau, \mathbf{y})|\right]d\tau\,d^{2}\mathbf{y}\Big{|}\\ &\leq\int_{\Sigma}\int_{0}^{t}\,\Big{|}\psi_{1}^{G}(\tau, \mathbf{x})-\psi_{2}^{G}(\tau,\mathbf{x})\Big{|}\psi_{2}^{F}(\tau,\mathbf{y})|W (\tau,\mathbf{x},\mathbf{y})\,d\tau\,d^{2}\mathbf{y}+\\ &\hskip 28.452756pt+\int_{\Sigma}\int_{0}^{t}\,\Big{|}|\psi_{1}^{ F}(\tau,\mathbf{y})|-|\psi_{2}^{F}(\tau,\mathbf{y})|\Big{|}\psi_{1}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})\,d\tau\,d^{2}\mathbf{y}\\ &\leq\|\psi_{1}^{G}-\psi_{2}^{G}\|_{\infty}2M\Sigma\bar{W}T+\| \psi_{1}^{F}-\psi_{2}^{F}\|_{\infty}M\Sigma\bar{W}T\leq\frac{1}{3}\|\psi_{1}^ {G}-\psi_{2}^{G}\|_{\infty}+\frac{1}{6}\|\psi_{1}^{F}-\psi_{2}^{F}\|_{\infty}. \end{split}\]
Therefore,
\[\begin{split}\|\widetilde{\psi}_{1}^{F}-\widetilde{\psi}_{2}^{F} \|_{\infty}&\leq\int_{0}^{t}\alpha(\tau,\mathbf{x})|\,\psi_{1}^{F} (\tau,\mathbf{x})-\psi_{2}^{F}(\tau,\mathbf{x})|\,d\tau+\frac{1}{3}\|\psi_{1}^ {G}-\psi_{2}^{G}\|_{\infty}+\frac{1}{6}\|\psi_{1}^{F}-\psi_{2}^{F}\|_{\infty} \\ &\leq\frac{1}{2}\|\,\psi_{1}^{F}-\psi_{2}^{F}\|_{\infty}+\frac{1} {4}\|\psi_{1}^{G}-\psi_{2}^{G}\|_{\infty}+\frac{1}{6}\|\psi_{1}^{F}-\psi_{2}^ {F}\|_{\infty}\\ &<\max\{\|\psi_{1}^{F}-\psi_{2}^{F}\|_{\infty},\|\psi_{1}^{G}- \psi_{2}^{G}\|_{\infty}\}=\|(\psi_{1}^{F},\psi_{1}^{G})-(\psi_{1}^{F},\psi_{2} ^{G})\|_{\infty}.\end{split}\]
Analogously,
\[\|\widetilde{\psi}_{1}^{G}-\widetilde{\psi}_{2}^{G}\|_{\infty}\leq\frac{1}{3}\|\psi_{1}^{G}-\psi_{2}^{G}\|_{\infty}+\frac{1}{6}\|\psi_{1}^{F}-\psi_{2}^{F}\|_{\infty}<\|(\psi_{1}^{F},\psi_{1}^{G})-(\psi_{2}^{F},\psi_{2}^{G})\|_{\infty}.\]
Therefore, \(\hat{F}_{T}\) is a contraction since
\[\|(\widetilde{\psi}_{1}^{F},\widetilde{\psi}_{1}^{G})-(\widetilde{\psi}_{2}^{ F},\widetilde{\psi}_{2}^{G})\|_{\infty}=\max\{\|\widetilde{\psi}_{1}^{F}- \widetilde{\psi}_{2}^{F}\|_{\infty},\|\widetilde{\psi}_{1}^{G}-\widetilde{\psi }_{2}^{G}\|_{\infty}\}<\|(\psi_{1}^{F},\psi_{1}^{G})-(\psi_{2}^{F},\psi_{2}^{G} )\|_{\infty}.\]
By Banach's fixed point theorem, there is a unique fixed point \(\big{(}\psi_{0}^{F}(t,\mathbf{x}),\psi_{0}^{G}(t,\mathbf{x})\big{)}\), \(t\in[0,T]\), for the operator \(\hat{F}_{T}\). Notice that \(\psi_{0}^{F}(t,\mathbf{x})\geq 0\) for every \(t\in[0,T]\) (see the next subsection) and therefore, see (22), \(\psi_{0}^{F}(t,\mathbf{x})+\psi_{0}^{G}(t,\mathbf{x})\leq\psi^{F}(0,\mathbf{x} )+\psi^{G}(0,\mathbf{x})\leq M\) for all \(t\in[0,T]\). Now, we can repeat the reasoning with \(f(\mathbf{x})=\psi_{0}^{F}(T,\mathbf{x})\), \(g(\mathbf{x})=\psi_{0}^{G}(T,\mathbf{x})\) as the initial condition and \(\hat{F}_{2T}:L^{\infty}([T,2T]\times\Sigma,dt\,d^{2}\mathbf{x})\oplus L^{ \infty}([T,2T]\times\Sigma,dt\,d^{2}\mathbf{x})\to L^{\infty}([T,2T]\times \Sigma,dt\,d^{2}\mathbf{x})\oplus L^{\infty}([T,2T]\times\Sigma,dt\,d^{2} \mathbf{x})\) defined by
\[\begin{split}\begin{pmatrix}\widetilde{\psi}^{F}(t,\mathbf{x})\\ \widetilde{\psi}^{G}(t,\mathbf{x})\end{pmatrix}&:=\hat{F}_{2T}\begin{pmatrix} \psi^{F}\\ \psi^{G}\end{pmatrix}(t,\mathbf{x})=\\ &=\begin{pmatrix}\psi_{0}^{F}(T,\mathbf{x})-\int_{T}^{t}\alpha(\tau, \mathbf{x})\psi^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{T}^{t}\,\psi^{G }(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\, d\tau\,d^{2}\mathbf{y}\\ &\psi_{0}^{G}(T,\mathbf{x})-\int_{\Sigma}\int_{T}^{t}\,\psi^{G}(\tau,\mathbf{x})W (\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y} \end{pmatrix}\end{split} \tag{23}\]
and such that \(\hat{F}_{2T}S_{2T}\subset S_{2T}\). We obtain a fixed point \(\big{(}\psi_{1}^{F}(t,\mathbf{x}),\psi_{1}^{G}(t,\mathbf{x})\big{)}\), \(t\in[T,2T]\), for the operator \(\hat{F}_{2T}\) such that \(\big{(}\psi_{1}^{F}(T,\mathbf{x}),\psi_{1}^{G}(T,\mathbf{x})\big{)}=\big{(}\psi_{0 }^{F}(T,\mathbf{x}),\psi_{0}^{G}(T,\mathbf{x})\big{)}\). The procedure can be iterated and at step \(n+1\) we obtain a fixed point \(\big{(}\psi_{n}^{F}(t,\mathbf{x}),\psi_{n}^{G}(t,\mathbf{x})\big{)}\), \(t\in[nT,(n+1)T]\), for the operator \(\hat{F}_{(n+1)T}\) such that \(\big{(}\psi_{n}^{F}(nT,\mathbf{x}),\psi_{n}^{G}(nT,\mathbf{x})\big{)}=\big{(} \psi_{n-1}^{F}(nT,\mathbf{x}),\psi_{n-1}^{G}(nT,\mathbf{x})\big{)}\), and
\[\begin{pmatrix}\psi_{n}^{F}(t,\mathbf{x})\\ \psi_{n}^{G}(t,\mathbf{x})\end{pmatrix}=\begin{pmatrix}\psi_{n-1}^{F}(nT, \mathbf{x})-\int_{nT}^{t}\alpha(\tau,\mathbf{x})\psi_{n}^{F}(\tau,\mathbf{x})\,d \tau+\int_{\Sigma}\int_{nT}^{t}\,\psi_{n}^{G}(\tau,\mathbf{x})W(\tau, \mathbf{x},\mathbf{y})|\psi_{n}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y} \\ \psi_{n-1}^{G}(nT,\mathbf{x})-\int_{\Sigma}\int_{nT}^{t}\,\psi_{n}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{n}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2} \mathbf{y}\end{pmatrix}. \tag{24}\]
It is straightforward to show that the vector \(\left(\bar{\psi}^{F}(t,\mathbf{x}),\bar{\psi}^{G}(t,\mathbf{x})\right)=\left(\psi_{k}^{F}(t,\mathbf{x}),\psi_{k}^{G}(t,\mathbf{x})\right)\), \(t\in[kT,(k+1)T]\), \(k\in\mathbb{N}\), is the unique fixed point for the operator \(\hat{F}\). First, by induction, we prove that \(\hat{F}\big{(}\bar{\psi}^{F}(t,\mathbf{x}),\bar{\psi}^{G}(t,\mathbf{x})\big{)}=\left(\bar{\psi}^{F}(t,\mathbf{x}),\bar{\psi}^{G}(t,\mathbf{x})\right)\). At step 1, \(k=0\), for every \(t\in[0,T]\),
\[\begin{pmatrix}\bar{\psi}^{F}(t,\mathbf{x})\\ \bar{\psi}^{G}(t,\mathbf{x})\end{pmatrix}=\begin{pmatrix}\psi_{0}^{F}(t,\mathbf{x})\\ \psi_{0}^{G}(t,\mathbf{x})\end{pmatrix}=\begin{pmatrix}\psi_{0}^{F}(0,\mathbf{x})-\int_{0}^{t}\alpha(\tau)\psi_{0}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t}\psi_{0}^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{0}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\\ \psi_{0}^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\psi_{0}^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{0}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\end{pmatrix}\] \[=\begin{pmatrix}\bar{\psi}^{F}(0,\mathbf{x})-\int_{0}^{t}\alpha(\tau)\bar{\psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t}\bar{\psi}^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\\ \bar{\psi}^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\bar{\psi}^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}.\end{pmatrix}\]
Inductive step: Suppose that for every \(t\in[nT,(n+1)T]\),
\[\begin{pmatrix}\bar{\psi}^{F}(t,\mathbf{x})\\ \bar{\psi}^{G}(t,\mathbf{x})\end{pmatrix} =\begin{pmatrix}\psi_{n}^{F}(t,\mathbf{x})\\ \psi_{n}^{G}(t,\mathbf{x})\end{pmatrix}\] \[=\begin{pmatrix}\psi_{n}^{F}(nT,\mathbf{x})-\int_{nT}^{t}\alpha( \tau)\psi_{n}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{nT}^{t}\psi_{n}^ {G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{n}^{F}(\tau,\mathbf{y} )|\,d\tau\,d^{2}\mathbf{x}\\ \psi_{n}^{G}(nT,\mathbf{x})-\int_{\Sigma}\int_{nT}^{t}\psi_{n}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{n}^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}\end{pmatrix}\] \[=\begin{pmatrix}\bar{\psi}^{F}(0,\mathbf{x})-\int_{0}^{t}\alpha( \tau)\bar{\psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t}\bar{\psi }^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau, \mathbf{y})|\,d\tau\,d^{2}\mathbf{x}\\ \bar{\psi}^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\bar{\psi}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}\end{pmatrix}.\]
Then,
\[\begin{pmatrix}\bar{\psi}^{F}((n+1)T,\mathbf{x})\\ \bar{\psi}^{G}((n+1)T,\mathbf{x})\end{pmatrix}=\begin{pmatrix}\psi_{n}^{F}((n+1 )T,\mathbf{x})\\ \psi_{n}^{G}((n+1)T,\mathbf{x})\end{pmatrix}=\] \[=\begin{pmatrix}\bar{\psi}^{F}(0,\mathbf{x})-\int_{0}^{(n+1)T} \alpha(\tau)\bar{\psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{(n+1)T }\bar{\psi}^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}( \tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{x}\\ \bar{\psi}^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{(n+1)T}\bar{\psi}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}\end{pmatrix}.\]
Hence, for every \(t\in[(n+1)T,(n+2)T]\),
\[\begin{pmatrix}\bar{\psi}^{F}(0,x)-\int_{0}^{t}\alpha(\tau)\bar{ \psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t}\bar{\psi}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{x}\\ \bar{\psi}^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\bar{\psi}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}\end{pmatrix}=\] \[=\begin{pmatrix}\bar{\psi}^{F}(0,x)-\int_{0}^{(n+1)T}\alpha(\tau) \bar{\psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{(n+1)T}\bar{\psi} ^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y })|\,d\tau\,d^{2}\mathbf{x}\\ \bar{\psi}^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{(n+1)T}\bar{\psi}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}\end{pmatrix}+\] \[\qquad\qquad+\begin{pmatrix}-\int_{(n+1)T}^{t}\alpha(\tau)\bar{ \psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{(n+1)T}^{t}\bar{\psi}^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})| \,d\tau\,d^{2}\mathbf{x}\\ -\int_{\Sigma}\int_{(n+1)T}^{t}\bar{\psi}^{G}(\tau,\mathbf{x})W(\tau, \mathbf{x},\mathbf{y})|\bar{\psi}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{x }\end{pmatrix}\] \[=\begin{pmatrix}\psi_{n}^{F}((n+1)T,x)-\int_{(n+1)T}^{t}\alpha( \tau)\psi_{n}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{(n+1)T}^{t}\psi_{n}^ {G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{n}^{F}(\tau,\mathbf{y })|\,d\tau\,d^{2}\mathbf{x}\\ \psi_{n}^{G}((n+1)T,\mathbf{x})-\int_{\Sigma}\int_{(n+1)T}^{t}\psi_{n}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi_{n}^{F}(\tau,\mathbf{y})|\,d\tau \,d^{2}\mathbf{y}\end{pmatrix}\] \[=\begin{pmatrix}\psi_{n+1}^{F}(t,\mathbf{x})\\ \psi_{n+1}^{G}(t,\mathbf{x})\end{pmatrix}.\]
It remains to prove the uniqueness. If \(\big{(}\hat{\psi}^{F}(t,\mathbf{x}),\hat{\psi}^{G}(t,\mathbf{x})\big{)}\) is a second fixed point for the operator \(\hat{F}\), then, for every \(n\in\mathbb{N}\), its restriction \(\big{(}\hat{\psi}^{F}(t,\mathbf{x}),\hat{\psi}^{G}(t,\mathbf{x})\big{)}_{[nT,(n+1)T]}\) to the interval \([nT,(n+1)T]\) is a fixed point for \(\hat{F}_{(n+1)T}\). Indeed,
\[\hat{F}_{(n+1)T}\begin{pmatrix}\hat{\psi}^{F}(t,\mathbf{x})\\ \hat{\psi}^{G}(t,\mathbf{x})\end{pmatrix} =\begin{pmatrix}\hat{\psi}^{F}(nT,x)-\int_{nT}^{t}\alpha(\tau) \hat{\psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{nT}^{t}\,\hat{\psi}^{ G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\hat{\psi}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2} \mathbf{x}\\ \hat{\psi}^{G}(nT,\mathbf{x})-\int_{\Sigma}\int_{nT}^{t}\,\hat{\psi}^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\hat{\psi}^{F}(\tau,\mathbf{y})| \,d\tau\,d^{2}\mathbf{y}\end{pmatrix}=\] \[=\begin{pmatrix}\hat{\psi}^{F}(0,x)-\int_{0}^{nT}\alpha(\tau) \hat{\psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{nT}\,\hat{\psi}^ {G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\hat{\psi}^{F}(\tau,\mathbf{ y})|\,d\tau\,d^{2}\mathbf{x}\\ \hat{\psi}^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{nT}\,\hat{\psi}^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\hat{\psi}^{F}(\tau,\mathbf{y})| \,d\tau\,d^{2}\mathbf{y}\end{pmatrix}+\] \[\qquad\qquad+\begin{pmatrix}-\int_{nT}^{t}\alpha(\tau)\hat{\psi} ^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{nT}^{t}\,\hat{\psi}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\hat{\psi}^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{x}\\ -\int_{\Sigma}\int_{nT}^{t}\,\hat{\psi}^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x}, \mathbf{y})|\hat{\psi}^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\end{pmatrix}\] \[=\begin{pmatrix}\hat{\psi}^{F}(0,x)-\int_{0}^{t}\alpha(\tau)\hat{ \psi}^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t}\,\hat{\psi}^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\hat{\psi}^{F}(\tau,\mathbf{y})| \,d\tau\,d^{2}\mathbf{x}\\ \hat{\psi}^{G}(0,\mathbf{x})-\int_{\Sigma}\int_{0}^{t}\,\hat{\psi}^{G}(\tau, \mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\hat{\psi}^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}\end{pmatrix}\] \[=\begin{pmatrix}\hat{\psi}^{F}(t,\mathbf{x})\\ \hat{\psi}^{G}(t,\mathbf{x})\end{pmatrix}=\begin{pmatrix}\hat{\psi}^{F}(t, \mathbf{x})\\ \hat{\psi}^{G}(t,\mathbf{x})\end{pmatrix}.\]
Since \(\hat{F}_{(n+1)T}\) admits a unique fixed point, it must be \(\begin{pmatrix}\hat{\psi}^{F}(t,\mathbf{x}),\hat{\psi}^{G}(t,\mathbf{x}) \end{pmatrix}_{[nT,(n+1)T]}=\begin{pmatrix}\bar{\psi}^{F}_{n}(t,\mathbf{x}), \bar{\psi}^{G}_{n}(t,\mathbf{x})\end{pmatrix}\).
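The contraction argument above also suggests a practical solution scheme: on each window of length \(T\), repeatedly applying the integral operator (Picard iteration) starting from the initial datum converges to the unique fixed point. A minimal sketch for system (19) on a discretised window is given below; the quadrature rule, the array shapes and the stopping tolerance are illustrative, and convergence is guaranteed only when the window satisfies the smallness conditions used in the proof.

```python
import numpy as np

def picard_window(psi_F0, psi_G0, W, alpha, cell_area, dt, n_steps, tol=1e-10, max_iter=200):
    """Fixed-point (Picard) iteration for system (19) on the window [0, n_steps*dt]."""
    def cumint(A):
        # Cumulative time integral with value 0 at t = 0 (left rectangle rule).
        out = np.zeros_like(A)
        out[1:] = dt * np.cumsum(A[:-1], axis=0)
        return out

    F = np.tile(psi_F0, (n_steps + 1, 1))   # initial iterate: constant in time
    G = np.tile(psi_G0, (n_steps + 1, 1))
    for _ in range(max_iter):
        # S(tau, x) = psi_G(tau, x) * int_Sigma W(x, y) |psi_F(tau, y)| d^2 y
        S = G * (np.abs(F) @ W.T) * cell_area
        F_new = psi_F0 - cumint(alpha * F) + cumint(S)
        G_new = psi_G0 - cumint(S)
        err = max(np.abs(F_new - F).max(), np.abs(G_new - G).max())
        F, G = F_new, G_new
        if err < tol:
            break
    return F, G
```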
#### Non-negativity of the solution
Here, we show that the solution \(\psi^{F}(t,\mathbf{x})\), whose existence has been proved above, is non-negative.
Suppose there is a point \((t,\mathbf{x})\) such that \(\psi^{F}(t,\mathbf{x})<0\). Then, by the continuity of the functions and by \(\psi^{F}(0,\mathbf{x})\geq 0\), there must exist \(t_{0}>0\) and \(h>0\) such that \(\psi^{F}(t_{0},\mathbf{x})=0\) and \(\psi^{F}(t,\mathbf{x})<0\), \(\forall t\in(t_{0},t_{0}+h]\). Then,
\[0=\psi^{F}(t_{0},\mathbf{x})=\psi^{F}(0,\mathbf{x})-\int_{0}^{t_{0}}\alpha(\tau )\psi^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t_{0}}\,\psi^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}. \tag{25}\]
Moreover, for every \(\epsilon\in(0,h]\),
\[0>\psi^{F}(t_{0}+\epsilon,\mathbf{x}) =\psi^{F}(0,\mathbf{x})-\int_{0}^{t_{0}+\epsilon}\alpha(\tau)\psi ^{F}(\tau,\mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t_{0}+\epsilon}\,\psi^{G}( \tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d \tau\,d^{2}\mathbf{y}\] \[=\psi^{F}(0,\mathbf{x})-\int_{0}^{t_{0}}\alpha(\tau)\psi^{F}(\tau, \mathbf{x})\,d\tau+\int_{\Sigma}\int_{0}^{t_{0}}\psi^{G}(\tau,\mathbf{x})W(\tau, \mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\] \[-\int_{t_{0}}^{t_{0}+\epsilon}\alpha(\tau)\psi^{F}(\tau,\mathbf{x}) \,d\tau+\int_{\Sigma}\int_{t_{0}}^{t_{0}+\epsilon}\,\psi^{G}(\tau,\mathbf{x})W( \tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\] \[=-\int_{t_{0}}^{t_{0}+\epsilon}\alpha(\tau)\psi^{F}(\tau,\mathbf{x}) \,d\tau+\int_{\Sigma}\int_{t_{0}}^{t_{0}+\epsilon}\,\psi^{G}(\tau,\mathbf{x})W( \tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\] \[=-\epsilon\alpha(\hat{t})\psi^{F}(\hat{t},\mathbf{x})+\int_{\Sigma} \int_{t_{0}}^{t_{0}+\epsilon}\,\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x}, \mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y},\]
where \(\hat{t}\in(t_{0},t_{0}+\epsilon]\). Since \(\psi^{F}(\hat{t},\mathbf{x})<0\) and \(\alpha(t)\geq 0\), it must be
\[0>\psi^{F}(t_{0}+\epsilon,\mathbf{x})=-\epsilon\alpha(\hat{t})\psi^{F}(\hat{t},\mathbf{x})+\int_{\Sigma}\int_{t_{0}}^{t_{0}+\epsilon}\,\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y})|\psi^{F}(\tau,\mathbf{y})|\,d\tau\,d^{2}\mathbf{y}\geq 0,\]
which is a contradiction. As a consequence, \(\psi^{F}(t,\mathbf{x})\geq 0\) for every \(\mathbf{x}\in\Sigma\) and \(t\in\mathbb{R}_{+}\).
### Existence and uniqueness of solutions II
Let us assume \(0\leq W(t,{\bf x},{\bf y})\leq\bar{W}\in\mathbb{R}_{+}\). In order to prove that the system of equations (20) admits a unique solution, let us assign a non-negative and bounded function \(M:\Sigma\to\mathbb{R}^{+}\), with \(\overline{M}:=\sup_{{\bf x}\in\Sigma}\{M({\bf x})\}\), and consider the set \(S\) of functions \(\psi^{F},\psi^{G}\) satisfying the following conditions:
* \(\psi^{F},\psi^{G}\in L^{\infty}\big{(}\mathbb{R}\times\Sigma,dt\,d^{2}{\bf x} \big{)}\), continuous, with \(\psi^{G}(t,{\bf x})\) nonincreasing with respect to \(t\), \(\forall{\bf x}\in\Sigma\),
* \(0\leq\psi^{F}(t,{\bf x}),\psi^{G}(t,{\bf x})\leq M({\bf x})\), \(\forall t\in\mathbb{R},\ \forall{\bf x}\in\Sigma\),
* \(\psi^{F}(0,{\bf x})+\psi^{G}(0,{\bf x})=M({\bf x})\), \(\forall{\bf x}\in\Sigma\).
Let us define \(\mathcal{L}^{\infty}:=L^{\infty}\big{(}\mathbb{R}\times\Sigma,dt\,d^{2}{\bf x }\big{)}\oplus L^{\infty}\big{(}\mathbb{R}\times\Sigma,dt\,d^{2}{\bf x}\big{)}\), with the norm
\[\|(\psi^{F},\psi^{G})\|_{\infty}=\max\{\|\psi^{F}\|_{\infty},\|\psi^{G}\|_{ \infty}\}.\]
Note that \(\int_{\Sigma}\int_{0}^{t}\psi^{G}(\tau,{\bf x})\,W(\tau,{\bf x},{\bf y})\psi^{ F}(\tau,{\bf y})\,d\tau\,d^{2}{\bf y}\) is a nondecreasing function of \(t\) and introduce the function \(T({\bf x}),\ {\bf x}\in\Sigma\), which at \({\bf x}\) is equal to the time \(t\) at which
\[\int_{\Sigma}\int_{0}^{t}\psi^{G}(\tau,{\bf x})\,W(\tau,{\bf x},{\bf y})\psi^{ F}(\tau,{\bf y})\,d\tau\,d^{2}{\bf y}=\psi^{G}(0,{\bf x}),\]
if such a time exists, equal to \(\infty\) otherwise.
Define also the function
\[\eta(t,{\bf x})=\left\{\begin{array}{ll}1&\mbox{if}\quad t\leq T({\bf x}), \\ 0&\mbox{otherwise},\end{array}\right.\]
and the operators
\[(\psi^{F}(t,{\bf x}),\psi^{G}(t,{\bf x}))\mapsto R_{G}(t,{\bf x})=\int_{ \Sigma}\int_{0}^{t}\psi^{G}(\tau,{\bf x})\,W(\tau,{\bf x},{\bf y})\psi^{F}( \tau,{\bf y})\eta(\tau,{\bf x})\,d\tau\,d^{2}{\bf y},\] \[(\psi^{F}(t,{\bf x}),\psi^{G}(t,{\bf x}))\mapsto R_{F}(t,{\bf x})=\int_{ \Sigma}\int_{0}^{t}\psi^{G}(\tau,{\bf x})\,W(\tau,{\bf x},{\bf y})\psi^{F}( \tau,{\bf y})\pi(t-\tau)\eta(\tau,{\bf x})\,d\tau\,d^{2}{\bf y},\] \[\mathcal{T}:(\psi^{F}(t,{\bf x}),\psi^{G}(t,{\bf x}))\mapsto( \widetilde{\psi}^{F}(t,{\bf x}),\widetilde{\psi}^{G}(t,{\bf x}))=(\psi^{F}(0, {\bf x})\pi(t)+R_{F}(t,{\bf x}),\psi^{G}(0,{\bf x})-R_{G}(t,{\bf x})).\]
Note that if \(T({\bf x})\) is finite at \({\bf x}\), then \(\widetilde{\psi}^{G}(t,{\bf x})=0\) for \(t\geq T({\bf x})\). It is easy to show that \((\psi^{F}(t,{\bf x}),\psi^{G}(t,{\bf x}))\in S\) is a solution of system (20) if and only if it is a fixed point of the operator \(\mathcal{T}\). Moreover, the following properties hold.
**Property 4.1**.: \(\widetilde{\psi}^{G}\) _is a nonincreasing function of \(t\), and \(0\leq\widetilde{\psi}^{G}(t,{\bf x})\leq\psi^{G}(0,{\bf x})\leq M({\bf x})\leq \overline{M}.\)_
**Proof**. From its definition, it follows that \(R_{G}(t,{\bf x})\leq\psi^{G}(0,{\bf x}).\)
**Property 4.2**.: \(0\leq\widetilde{\psi}^{F}(t,{\bf x})\leq M({\bf x})\leq\overline{M}.\)__
**Proof**. \(\widetilde{\psi}^{F}(t,{\bf x})\leq\psi^{F}(0,{\bf x})+R_{G}(t,{\bf x})\leq \psi^{F}(0,{\bf x})+\psi^{G}(0,{\bf x})=M({\bf x})\leq\overline{M}.\)__
From the previous properties it follows that \(\mathcal{T}S\subseteq S\). Now let us take \(\overline{t}>0\), to be suitably chosen, and consider \(0\leq t\leq\overline{t}\); one has:
\[|\widetilde{\psi}^{F}_{2}(t,{\bf x})-\widetilde{\psi}^{F}_{1}(t,{\bf x})|=| \int_{\Sigma}\int_{0}^{t}\,W(\tau,{\bf x},{\bf y})\Big{[}\psi^{G}_{2}(\tau,{ \bf x})\psi^{F}_{2}(\tau,{\bf y})-\psi^{G}_{1}(\tau,{\bf x})\psi^{F}_{1}(\tau, {\bf y})\Big{]}\eta(\tau,{\bf x})\pi(t-\tau)\,d\tau\,d^{2}{\bf y}|\] \[\qquad\leq|\int_{\Sigma}\int_{0}^{t}\,W(\tau,{\bf x},{\bf y})\Big{[} \psi^{G}_{2}(\tau,{\bf x})\psi^{F}_{2}(\tau,{\bf y})-\psi^{G}_{1}(\tau,{\bf x}) \psi^{F}_{1}(\tau,{\bf y})\Big{]}\,d\tau\,d^{2}{\bf y}|\geq|\widetilde{\psi}^ {G}_{2}(t,{\bf x})-\widetilde{\psi}^{G}_{1}(t,{\bf x})|.\]
Since
\[\Big{|}\int_{\Sigma}\int_{0}^{t}\,W(\tau,\mathbf{x},\mathbf{y}) \Big{[}\psi_{2}^{G}(\tau,\mathbf{x})\psi_{2}^{F}(\tau,\mathbf{y})-\psi_{1}^{G}( \tau,\mathbf{x})\psi_{1}^{F}(\tau,\mathbf{y})\Big{]}\,d\tau\,d^{2}\mathbf{y} \Big{|}\] \[\leq\int_{\Sigma}\int_{0}^{t}\,W(\tau,\mathbf{x},\mathbf{y})\Big{[} |\psi_{2}^{G}(\tau,\mathbf{x})\psi_{2}^{F}(\tau,\mathbf{y})-\psi_{2}^{G}(\tau, \mathbf{x})\psi_{1}^{F}(\tau,\mathbf{y})|+|\psi_{2}^{G}(\tau,\mathbf{x})\psi_{ 1}^{F}(\tau,\mathbf{y})-\psi_{1}^{G}(\tau,\mathbf{x})\psi_{1}^{F}(\tau, \mathbf{y})|\Big{]}\,d\tau\,d^{2}\mathbf{y}\] \[\leq 2\overline{M}\|(\psi_{2}^{F}-\psi_{1}^{F},\psi_{2}^{G}-\psi_{ 1}^{G})\|_{\infty}\int_{\Sigma}\int_{0}^{t}\,W(\tau,\mathbf{x},\mathbf{y})\, d\tau\,d^{2}\mathbf{y}\leq 2\overline{M}|\Sigma|\overline{W}\,\overline{t}\|( \psi_{2}^{F}-\psi_{1}^{F},\psi_{2}^{G}-\psi_{1}^{G})\|_{\infty},\]
taking \(\overline{t}<\frac{1}{2\overline{M}|\Sigma|\overline{W}}\), \(\mathcal{T}\) is a contraction and the solution exists and is unique up to time \(\overline{t}\). Now, considering the time interval \(\left[\overline{t},2\overline{t}\right]\) and using the value of the solution at time \(\overline{t}\) as initial datum, with the same reasoning as in the previous proof of existence and uniqueness one can show that the solution exists up to time \(2\overline{t}\); iterating the procedure, the global existence in time follows. Therefore we have proved the following
**Theorem 4.3**.: _For any initial datum belonging to \(S\), system (20) admits a unique solution which is global in time._
### Asymptotic behaviour
Now, let us suppose that \(\pi(t)=\exp{(-\beta t)},\) with \(\beta\in\mathbb{R}^{+}\), then the following holds
**Theorem 4.4**.: _For \(t\) going to infinity, any solution \(\psi^{F}(t,\mathbf{x})\) of the system of equations (20) decays to zero._
**Proof.** One has
\[\frac{\partial\psi^{F}}{\partial t} =-\beta\psi_{0}^{F}\pi(t)+\int_{\Sigma}\,\psi^{G}(t,\mathbf{x})W( t,\mathbf{x},\mathbf{y})\psi^{F}(t,\mathbf{y})d^{2}\mathbf{y}-\beta\int_{ \Sigma}\int_{0}^{t}\,\psi^{G}(\tau,\mathbf{x})W(\tau,\mathbf{x},\mathbf{y}) \psi^{F}(\tau,\mathbf{y})\pi(t-\tau)\,d\tau\,d^{2}\mathbf{y}\] \[=-\beta\psi^{F}(t,\mathbf{x})-\frac{\partial\psi^{G}}{\partial t }(t,\mathbf{x}),\]
from which
\[\frac{\partial\psi^{F}}{\partial t}+\frac{\partial\psi^{G}}{\partial t}=- \beta\psi^{F}\leq 0.\]
Therefore, for any \(\mathbf{x}\in\Sigma\), the limit \(\lim_{t\rightarrow\infty}(\psi^{F}+\psi^{G})(t,\mathbf{x})\) exists and is finite and nonnegative, which implies that \(\lim_{t\rightarrow\infty}(\frac{\partial\psi^{F}}{\partial t}+\frac{\partial\psi^{G}}{\partial t})=0\); hence, from the previous equality, we conclude that \(\lim_{t\rightarrow\infty}\psi^{F}(t,\mathbf{x})=0,\,\,\,\forall\mathbf{x}\in\Sigma\). Exactly the same reasoning yields the corresponding result in the memoryless case.
**Theorem 4.5**.: _For \(t\) going to infinity, every solution \(\psi^{F}(t,\mathbf{x})\) of system (8) decays to zero, \(\forall\mathbf{x}\in\Sigma\)._
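The decay stated in Theorems 4.4 and 4.5 can be checked numerically on a much-simplified, space-homogeneous reduction of the model. The sketch below assumes that the kernel \(W\) collapses to a constant contact rate \(k\) and that the dynamics reduce to the pair of ODEs \(\dot{\psi}^{G}=-k\,\psi^{G}\psi^{F}\) and \(\dot{\psi}^{F}=-\beta\psi^{F}+k\,\psi^{G}\psi^{F}\) implied by the computation in the proof above; the numerical constants and the explicit Euler scheme are illustrative choices, not part of the original model.

```python
import numpy as np

# Space-homogeneous toy reduction (illustrative constants, not from the paper):
#   dpsi_G/dt = -k * psi_G * psi_F
#   dpsi_F/dt = -beta * psi_F + k * psi_G * psi_F
k, beta = 0.5, 0.2          # contact rate and memory-decay rate (assumed values)
psi_G, psi_F = 0.9, 0.1     # initial data with psi_G + psi_F = M <= M_bar
dt, T = 1e-3, 200.0

t = 0.0
while t < T:
    dG = -k * psi_G * psi_F
    dF = -beta * psi_F + k * psi_G * psi_F
    psi_G += dt * dG
    psi_F += dt * dF
    t += dt

print(f"psi_F(T) = {psi_F:.2e}  (decays towards 0)")
print(f"psi_G(T) = {psi_G:.4f}  (nonincreasing, bounded below by 0)")
```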
2310.15468 | Empowering Distributed Solutions in Renewable Energy Systems and Grid
Optimization | This study delves into the shift from centralized to decentralized approaches
in the electricity industry, with a particular focus on how machine learning
(ML) advancements play a crucial role in empowering renewable energy sources
and improving grid management. ML models have become increasingly important in
predicting renewable energy generation and consumption, utilizing various
techniques like artificial neural networks, support vector machines, and
decision trees. Furthermore, data preprocessing methods, such as data
splitting, normalization, decomposition, and discretization, are employed to
enhance prediction accuracy.
The incorporation of big data and ML into smart grids offers several
advantages, including heightened energy efficiency, more effective responses to
demand, and better integration of renewable energy sources. Nevertheless,
challenges like handling large data volumes, ensuring cybersecurity, and
obtaining specialized expertise must be addressed. The research investigates
various ML applications within the realms of solar energy, wind energy, and
electric distribution and storage, illustrating their potential to optimize
energy systems. To sum up, this research demonstrates the evolving landscape of
the electricity sector as it shifts from centralized to decentralized solutions
through the application of ML innovations and distributed decision-making,
ultimately shaping a more efficient and sustainable energy future. | Mohammad Mohammadi, Ali Mohammadi | 2023-10-24T02:45:16Z | http://arxiv.org/abs/2310.15468v1 | # Empowering Distributed Solutions in Renewable Energy Systems and Grid Optimization
## 1 Abstract
This study delves into the shift from centralized to decentralized approaches in the electricity industry, with a particular focus on how machine learning (ML) advancements play a crucial role in empowering renewable energy sources and improving grid management. ML models have become increasingly important in predicting renewable energy generation and consumption, utilizing various techniques like artificial neural networks, support vector machines, and decision trees. Furthermore, data preprocessing methods, such as data splitting, normalization, decomposition, and discretization, are employed to enhance prediction accuracy.
The incorporation of big data and ML into smart grids offers several advantages, including heightened energy efficiency, more effective responses to demand, and better integration of renewable energy sources. Nevertheless, challenges like handling large data volumes, ensuring cybersecurity, and obtaining specialized expertise must be addressed. The research investigates various ML applications within the realms of solar energy, wind energy, and electric distribution and storage, illustrating their potential to optimize energy systems. To sum up, this research demonstrates the evolving landscape of the electricity sector as it shifts from centralized to decentralized solutions through the application of ML innovations and distributed decision-making, ultimately shaping a more efficient and sustainable energy future.
## 2 Introduction
Due to the rapid advancement of global industrialization, it is acknowledged that excessive use of fossil fuels will not just speed up the depletion of these resources but will also harm the environment. These consequences will lead to elevated health risks and the looming threat of worldwide climate change [1]. Traditional fossil fuel-based energy sources are under pressure due to uncertainty about their role in global warming [2]. Alongside fossil fuels and nuclear
power, renewable energy is currently the fastest-growing energy sector. The increasing focus on recent studies can be attributed to the growing popularity of sustainable and environmentally friendly renewable energy sources such as solar power, wind energy, hydroelectric power, biomass, waves, tides, and geothermal energy due to their minimal environmental impact. A primary challenge for renewable energy in the near future is energy supply. This refers to the integration of renewable energy resources into existing or future energy supply frameworks [3]. Consequently, focus is shifting to alternative energy sources that need to substantially increase their contributions to global energy production in the upcoming years, as societies move away from oil and coal. Clean and renewable energies play a pivotal role here, but their implementation presents unique challenges that necessitate the development of new technologies [2].
The growing energy demand has made the adoption of renewable sources inevitable. Over time, many power companies have established renewable energy facilities worldwide to offer economical and environmentally friendly energy [4]. Renewable sources like wind and solar power come with advantages such as lower delivery costs and reduced emissions. However, traditional grid energy storage designs are becoming less practical. Occasional large-scale blackouts have emphasized the need for an enhanced decision-making process that relies on timely and precise data regarding dynamic events, operating conditions, and sudden power changes [5].
Rifkin [6] has defined the energy internet as an innovative energy utilization system that merges renewable energy sources, decentralized power stations, hydrogen energy, storage technologies, and electric vehicles with Internet technologies. The author has outlined four attributes of the energy internet: reliance on renewable energy, support for access to large-scale generation and storage systems, facilitation of energy sharing, and promotion of electric transportation. In contrast to fossil fuel-based power plants, managing renewable energy necessitates more advanced control of power, equilibrium, and production capacity, attainable through smart grids [7]. These grids combine traditional power networks with advanced Information Technology (IT) and communication networks to distribute electricity with greater efficiency and reliability, while also reducing costs and environmental impacts [8].
The shift of the electricity grid towards the smart grid is facilitated by the Internet of Things (IoT), an interconnected network of sensing and actuating devices that enables seamless sharing of information across platforms through a unified framework. This framework creates a cohesive operating view to support innovative applications. This is achieved through pervasive and continuous sensing, data analysis, and information representation, with cloud computing serving as the overarching infrastructure. Each of these interconnected devices possesses its own embedded computing system, enabling identification and interaction with other devices, and often necessitates the use of advanced data analysis and machine learning techniques [6].
These technologies bring about various advantages, including enhanced energy efficiency, improved demand response, and better integration of renewable energy sources. However, it's important to acknowledge potential drawbacks,
such as the vulnerability to cyber-attacks and the complexity inherent in machine learning algorithms. These drawbacks highlight the need to address security concerns within the smart grid; potential security threats and corresponding countermeasures are outlined later in this study.
The applications of IoT extend across different layers of the smart grid. In the Generation layer, it facilitates monitoring energy generation, controlling units, tracking gas emissions and pollution, predicting power consumption, and managing distributed power plants and microgrids. In the Transmission layer, it enables monitoring and control of transmission lines and substations, along with the production management of transmission towers. The Distribution layer benefits from IoT by automating distribution processes, managing and safeguarding equipment, and handling faults. At the Consumption layer, IoT is applied to smart homes and appliances, intelligent charging and discharging of electric vehicles [9], power load control, and multi-network management [10].
This evolving landscape requires new strategies for energy production management, forecasting, and prediction. A significant challenge lies in the fact that renewable energies are not entirely controllable, as their generation heavily depends on environmental factors like wind, cloud cover, and rainfall. A potential solution to address these issues involves expert systems based on Machine Learning (ML) algorithms, capable of handling the nonlinearities and complex modeling prevalent in current systems [11].
Prediction holds significant importance in the management of renewable energy, particularly in critical sectors like wind and solar power. Prediction of resources is a widespread practice, encompassing various scales, ranging from large-scale facilities like wind and solar farms to smaller setups such as microgrids with limited generation resources [12]. Effective renewable energy forecasting serves as a crucial tool for addressing uncertainties, thereby supporting the planning, management, and operation of electrical power and energy systems [13].
Despite the inherent advantages of sustainable energy sources, several technical factors impede their broader adoption and higher integration within the power grid. The primary challenge is the intermittence of these resources, which presently hinders their more substantial involvement in the energy mix. The irregular availability of these resources underscores the pressing need for the development of precise prediction systems aimed at estimating the power generated by renewable sources [14]. Achieving accuracy in renewable energy forecasting remains complex due to the unpredictable and chaotic nature of renewable energy data. This unpredictability stems from various factors such as weather patterns, cloud cover, and wind speed, all of which influence the amount of energy produced by renewable sources [15]. The uncertainties introduced by these factors can introduce instability into the power system and diminish its overall stability margin [3].
Probabilistic forecasting models play a pivotal role in the realm of renewable energy prediction, offering a crucial advantage by furnishing quantitative measures of uncertainty associated with renewable energy data. The conventional deterministic point forecasts often fall short in capturing the inherent
variability of renewable energy data. In contrast, probabilistic forecasts emerge as a valuable tool for enhancing planning, management, and operation within electric energy systems. This is achieved by assigning probabilities to different prediction outcomes, thus yielding a comprehensive perspective.
Probabilistic forecasting encompasses two overarching categories: parametric and nonparametric methods, each with or without distributional assumptions. Parametric methods generally operate under the presumption that renewable energy time series data adheres to a predefined distribution, which could be Gaussian, beta, or Gamma distributions, among others. Once this distribution is established, diverse statistical techniques are employed to assess its parameters. Methods such as auto-regression models, maximum likelihood estimation, and rapid Bayesian approaches are often utilized to determine these parameters, thereby enabling the creation of a comprehensive probability set that underpins a probabilistic prediction [3].
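As an illustration of the parametric route, the sketch below fits a first-order autoregression to a synthetic wind-power series, assumes Gaussian residuals, and turns the resulting point forecast into quantile bands. The synthetic data, the AR(1) choice, and the Gaussian assumption are illustrative only and are not tied to any specific dataset or method cited above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic wind-power series (illustrative AR(1) process with noise).
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.3)

# Fit the AR(1) coefficient by least squares and estimate the residual spread.
phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
resid = y[1:] - phi * y[:-1]
sigma = resid.std(ddof=1)

# One-step-ahead probabilistic forecast: point forecast plus Gaussian quantile bands.
mean_forecast = phi * y[-1]
for q in (0.05, 0.5, 0.95):
    print(f"q={q:.2f}: {mean_forecast + norm.ppf(q) * sigma:.3f}")
```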
## 3 Empowering Renewable Energy Through Machine Learning Innovations
Machine-learning techniques aim to uncover relationships between input and output data, either with or without explicit mathematical formulations. Once these machine-learning models are effectively trained using a dataset, decision-makers can generate accurate forecasted output values by inputting forecast data into the trained models. [16] While some studies have concentrated on forecasting renewable energy through the use of a single machine-learning model [17], the diverse nature of datasets, time intervals, prediction spans, configurations, and performance metrics make it challenging to enhance forecast performance through a single model. Consequently, to boost prediction accuracy, certain studies have developed hybrid machine-learning models or comprehensive prediction approaches tailored for renewable energy forecasts. Recently, support vector machines and deep learning methods have gained substantial popularity in the realm of machine learning [3].
Jung-Pin Lai [1] covers a range of machine-learning models, encompassing artificial neural networks, support vector machines, decision trees, and random forests. The paper outlines the merits and demerits of each model and furnishes instances of their applications in renewable energy predictions. For instance, artificial neural networks are widely adopted in predicting renewable energy due to their capacity to comprehend intricate connections between input and output variables. Nevertheless, they demand extensive training data and can be computationally intensive. Support vector machines are another prevalent machine-learning model utilized in renewable energy forecasts. Their prowess lies in handling high-dimensional data, suitable for both regression and classification tasks. However, they are sensitive to the selection of kernel functions and necessitate careful hyperparameter tuning. Decision trees and random forests also find frequent application in renewable energy prediction. While decision
trees are straightforward and interpretable, they might encounter issues of over-fitting and instability. Random forests, an extension of decision trees, mitigate overfitting and enhance predictive precision. Yet, they can be computationally demanding and require ample training data.
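To make these trade-offs concrete, the sketch below trains a support vector machine and a random forest on a synthetic solar-output regression task and compares their test errors; the features, target function, and hyperparameters are placeholders chosen only for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

# Synthetic "weather -> solar output" data: [cloud cover, temperature, hour of day].
X = rng.uniform(0, 1, size=(2000, 3))
y = 5.0 * (1 - X[:, 0]) * np.sin(np.pi * X[:, 2]) + 0.2 * X[:, 1] + rng.normal(0, 0.1, 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("SVR", SVR(C=10.0, gamma="scale")),
                    ("Random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_train, y_train)
    print(name, "MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```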
The data pre-processing phase holds a critical role in machine learning, significantly elevating the efficiency of machine-learning performance [16]. Numerous prevalent data pre-processing techniques are employed in machine-learning models for predicting renewable energy. These techniques are implemented to ready the data for analysis and enhance the accuracy of predictions. One commonly used pre-processing technique is data splitting, involving the division of data into training and testing sets. This separation enables the evaluation of machine-learning model performance on unseen data, guarding against over-fitting. Normalization is another technique, entailing the scaling of data to a consistent range. This practice ensures that all variables carry equal significance in the analysis, curbing bias towards variables with larger values. Decomposition is a pre-processing approach applied in renewable-energy predictions, involving the breakdown of time series data into its constituent trend, seasonal, and residual elements. This strategy aims to filter out noise and uncover underlying patterns within the data. Discretization is yet another pre-processing technique, converting continuous variables into discrete categories. This simplification aids in analysis and reduces data dimensionality. Further pre-processing techniques in renewable-energy predictions encompass feature selection, imputation of missing values, encoding categorical features, and standardization. The selection of a pre-processing technique hinges on the specific context and the type of renewable-energy source under consideration [1].
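A minimal sketch of the pre-processing steps named above (splitting, normalization, decomposition, and discretization) might look as follows; the synthetic load series, the moving-average decomposition window, and the bin count are illustrative choices rather than prescriptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, KBinsDiscretizer

rng = np.random.default_rng(2)
load = 50 + 10 * np.sin(np.arange(1000) * 2 * np.pi / 24) + rng.normal(0, 2, 1000)

# Data splitting: hold out a test set before any fitting to avoid leakage.
train, test = train_test_split(load.reshape(-1, 1), test_size=0.3, shuffle=False)

# Normalization: scale to [0, 1] using statistics of the training set only.
scaler = MinMaxScaler().fit(train)
train_scaled, test_scaled = scaler.transform(train), scaler.transform(test)

# Decomposition: separate a daily trend (moving average) from the residual.
window = 24
trend = np.convolve(train.ravel(), np.ones(window) / window, mode="same")
residual = train.ravel() - trend

# Discretization: map the continuous load into a small number of categories.
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")
train_binned = disc.fit_transform(train)
print(train_scaled[:3].ravel(), train_binned[:3].ravel())
```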
The integration of big data and machine learning within the smart grid holds several potential benefits, including improved energy efficiency, more effective demand response, enhanced incorporation of renewable energy sources, and refined load forecasting. Moreover, these technologies have the capability to lower operational expenses and heighten system reliability. Nonetheless, it is imperative to acknowledge potential drawbacks as well [10].
The researchers acknowledge that the dataset utilized in their study is relatively limited for comprehensive big data analytics. Nevertheless, they assert that the integration of cloud computing and real-time event analysis has established a fitting framework for big data analytics. Their suggestions for future investigations encompass the incorporation of larger datasets encompassing diverse renewable energy sources and demand patterns across multiple countries. Furthermore, they propose involving customers in data input and supplying information about their energy usage to enhance the precision of predictive models [5].
One challenge stems from the vast volume of data produced by the smart grid, which can prove challenging to manage and analyze, necessitating substantial computational resources. The risk of cyber-attacks and data breaches is also a critical concern, given the potential dire implications for both the power grid and its users. Furthermore, the utilization of machine learning algorithms can be intricate, demanding specialized expertise. This complexity might act
as a hindrance to adoption for certain organizations. Finally, apprehensions revolve around the privacy of user data and the potential misuse of this information. These concerns underscore the need for careful implementation and robust security measures when leveraging these technologies within the smart grid framework [10]. Several distinct instances highlight the application of big data science within the smart grid context. These instances encompass load forecasting [18], demand response [19][20], fault detection and diagnosis, detection of energy theft, determination of residential energy consumption [21], and the integration of renewable energy [22]. Additionally, machine learning-driven algorithms have been devised to oversee power quality events, analyze user preferences, and manage the scheduling of residential loads [10].
Machine learning (ML) models have found widespread utility across various aspects of energy systems, particularly in forecasting electrical energy and renewable energy demand and consumption. These models offer pathways to enhance energy efficiency and reduce costs in the energy sector through diverse avenues. For instance, accurate energy consumption and demand predictions generated by ML models can be harnessed by building commissioning project managers, utility companies, and facility managers to implement energy-saving strategies. ML models also serve load forecasting, power generation prediction, power quality estimation, time series forecasting, wind speed projection, and power demand anticipation, among other applications [23]. Moreover, the prediction of building energy consumption holds paramount importance in shaping decisions aimed at curbing energy usage and CO2 emissions. It aids in evaluating various building design alternatives, operational strategies for energy efficiency, and refining demand and supply management [17]. However, predicting building energy consumption remains challenging due to the multitude of factors influencing it, including building attributes, installed equipment, outdoor weather conditions, and occupants' energy-use patterns [24].
Deep learning is a type of machine learning that is capable of discovering the inherent nonlinear features and high-level invariant structures in data. This makes it particularly well-suited for forecasting renewable energy, which is characterized by intermittent and chaotic data. Deep learning algorithms are able to identify patterns and relationships in the data that may not be apparent using other machine learning techniques, which can lead to more accurate predictions. Deep learning stands as a promising avenue for enhancing renewable energy forecasting [3]. The ability to accurately predict energy demand and consumption equips energy companies to optimize their production and distribution operations, resulting in cost reductions and enhanced energy efficiency. Moreover, machine learning (ML) models demonstrate their proficiency in managing optimization tasks like storage planning, energy control, peak load management, dynamic energy pricing, cost minimization, and the estimation of battery charging requirements [23].
The hybrid machine learning technique enhances the predictability of renewable energy consumption by harnessing the strengths of different models. In a study by Rasha [25], a hybrid approach was employed that integrates three fundamental models: Cat-Boost, SVR, and MLP. Cat-Boost, a gradient boosting algorithm, is recognized for its adeptness in managing categorical features and delivering high accuracy and efficiency. SVR, a support vector regression technique, offers transparent computation during dataset prediction and estimation. MLP, a multilayer perceptron and a type of artificial neural network, is employed for supervised learning. In this hybrid model, historical load data from a previous time period serves as the training dataset, and a specific algorithm is selected to train the data network. The network's structure is meticulously designed to elevate the performance and predictive capacity of renewable energy consumption, particularly in challenging scenarios. Validation is performed using a separate test dataset, with diverse error metrics employed to analyze and assess the outcomes.
Cat-Boost's competence in handling categorical features, coupled with SVR's straightforward computation during dataset prediction, and MLP's potential to enhance performance and predictive accuracy of renewable energy consumption, contribute to the hybrid model's effectiveness. This hybrid approach exhibits superior accuracy and efficiency compared to other methods, positioning it as a promising strategy for forecasting renewable energy consumption. To evaluate the proposed system's effectiveness, a range of assessment metrics is employed. These metrics encompass mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and the coefficient of determination (R2). These metrics collectively offer insights into the precision and efficacy of the hybrid machine learning model proposed for predicting renewable energy consumption [25].
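A compact sketch of such a hybrid is given below; it averages the predictions of a gradient-boosting model (used here as a stand-in for Cat-Boost, which requires the separate `catboost` package), an SVR, and an MLP, and reports the MAE, MSE, RMSE, and R² metrics listed above. The synthetic consumption data and all hyperparameters are illustrative assumptions, not a reproduction of the configuration in [25].

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(3)
X = rng.uniform(size=(1500, 4))                      # e.g. weather and calendar features
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(0, 0.1, 1500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [
    GradientBoostingRegressor(random_state=0),       # stand-in for Cat-Boost
    SVR(C=10.0),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
]
# Simple averaging ensemble of the three base learners.
preds = np.mean([m.fit(X_tr, y_tr).predict(X_te) for m in models], axis=0)

mse = mean_squared_error(y_te, preds)
print("MAE :", mean_absolute_error(y_te, preds))
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))
print("R2  :", r2_score(y_te, preds))
```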
Nevertheless, the integration of ML models into energy systems is not devoid of challenges. One key hurdle pertains to the availability and quality of data, significantly influencing the accuracy and dependability of ML forecasts. The inherent complexity of energy systems introduces another layer of difficulty, complicating the development of accurate and efficient ML models. Additionally, certain ML models lack transparency, posing challenges in comprehending their predictive logic - a concern for stakeholders. The computational demands of specific ML models, especially in real-time scenarios, also impose limitations. Despite these obstacles, the growing body of literature detailing ML's contributions to energy systems underscores the substantial potential of these models in this arena [23].
## 4 Revolutionizing Renewable Energy and Grid Management through Machine Learning
The implications for the future of renewable energy across diverse industries and households are poised to be substantial [25]. In the machine learning approach, historical load data from a previous time period is selected as the training dataset. This data is used to formulate an appropriate network structure, followed by the application of a specific training algorithm to fine-tune the network's performance, with the ultimate goal of meeting predefined accuracy
benchmarks [26]. The influence of machine learning on energy systems spans various dimensions. In this study, particular attention is directed towards solar energy, wind power, and electric distribution and storage. Among these, wind power production has garnered significant attention, as evidenced by the numerous prediction models that have been proposed [27]. This is attributed to wind power being one of the most advanced and widely utilized renewable sources. Nonetheless, there is an observable rise in the development of models for other renewable sources such as solar [28] and marine energy [29] in recent years.
### Unveiling ML Applications in Solar Energy
As the integration of solar energy into the energy system continues to expand, the accurate prediction of solar power generation becomes increasingly crucial. This prediction plays a vital role in managing energy quality and enhancing the overall reliability of the system [26].
In the domain of solar energy forecasting, Machine Learning (ML) techniques such as Deep Neural Networks, Support Vector Machines, and Random Forest models have been effectively utilized to enhance the precision of solar irradiance predictions. These methods offer advantages such as refined forecasts through their ability to capture nonlinear relationships, improved weather projections, and expedited forecasting. However, they necessitate sample datasets for training, longer training durations, and a broad spectrum of sensory input. The suitability of Support Vector Machines might be restricted in this context.
ML models, particularly Artificial Neural Networks, play a pivotal role in foreseeing and monitoring the performance of solar energy systems. These models act as substitutes to mechanistic models and facilitate system optimization. However, they can be influenced by decreased accuracy due to measurement errors.
In the realm of power output forecasting for solar energy systems, ML techniques like Artificial Neural Networks, Support Vector Machines, hybrid models, and regression trees have proven beneficial. These approaches tackle challenges such as instability and inaccuracies in measurements. Yet, they are better suited for short-term predictions and might face difficulties in handling abnormal weather conditions. The creation of hybrid models for such systems can be intricate.
For tasks like optimizing material composition and designing components of solar energy systems to enhance performance, ML techniques like Deep Neural Networks, Genetic Algorithms, and Random Forest models excel. They can identify optimal designs from a multitude of possibilities and propose innovative models rooted in existing knowledge. However, modeling the design of such systems can be complex, and acquiring experimental data can be costly and demanding.[2]
### Unveiling ML Applications in Wind Energy System
Leveraging Machine Learning (ML) for predicting wind power, both in the short and long term, involves the application of various models including Regression models, deep neural networks, support vector machines, and decision trees. These methodologies offer benefits like predictions closely matching power curves and the flexibility to adapt to extreme weather situations. However, they do encounter challenges such as potential inaccuracies in forecasting during atypical weather patterns and the complexities arising from intricate variable relationships.
Wind energy systems are exposed to vulnerabilities arising from external factors and the operation of moving components, particularly blade susceptibility to delamination. ML techniques like k-nearest neighborhood, neural networks, decision trees, and support vector machines are employed to anticipate maintenance requirements. These methods yield advantages such as minimizing on-site interventions through ML-driven wind farm monitoring. Nevertheless, they face challenges including the need for a high sampling frequency, a substantial number of variables for effective failure detection, and the wide array of potential faults.
For localizing wind speed predictions, uncovering feature correlations, and projecting wind speed patterns, ML approaches like Neural networks, decision trees, and support vector regression models are harnessed. These techniques offer benefits such as augmenting weather predictions with historical data. However, constraints arise from their applicability to short-term predictions, potential prediction failures during unusual weather conditions, and the requirement for multiple features to attain accurate training.[2]
### Unveiling ML Applications in Electric Distribution and Storage
In the realm of the smart grid, Machine Learning (ML) plays a pivotal role in facilitating decisions related to pricing, consumption, production, fraud detection, and security measures. Techniques such as Q-learning, deep neural networks, support vector machines, and reinforcement learning are harnessed to extract insights from smart grid data. These methodologies offer advantages like real-time data integration and organization for informed decision-making, ensuring the secure operation of the smart grid. However, they grapple with limitations including the need for substantial data volumes, diverse heterogeneous information sources, and susceptibility to minor errors during unforeseen events.
Energy load forecasting, which accounts for weather conditions, is enhanced through methods like Deep neural networks and support vector machines. These techniques provide benefits like accurate predictions of short- and long-term loads using ML models. Nonetheless, they require significant data volumes, time-intensive testing and validation procedures, and extensive CPU resources for training.
ML proves instrumental in predicting stable compounds for novel material design, especially in the quest for alternative battery materials. Support vector regression, generative models, artificial neural networks, and hybrid models combined with DFT models are employed. These approaches yield advantages such as identifying promising components like electrodes and electrolytes. However, they face challenges in terms of data volume requirements and often rely on simulated data to complement experimental data. For predicting properties of battery materials for component and structure projection in batteries, techniques such as Artificial neural networks, deep neural networks, and support vector regression models are utilized. These methods offer benefits like real-time optimization with reduced computational demands. Yet, the scarcity of experimental data and the potential need to integrate ML techniques with first-principles models (hybrid models) pose limitations.
Optimal management of battery charging, a complex task reliant on substantial battery data, is aided by ML methods like Extreme learning machines, artificial neural networks, and reinforcement learning. These approaches offer advantages such as real-time optimization with decreased computational requirements. Nevertheless, they encounter challenges tied to limited experimental data and the potential necessity for hybrid models that combine ML with first-principles models.[2]
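As one hedged illustration of the reinforcement-learning route to charge management, the sketch below runs tabular Q-learning on a toy battery that buys or sells energy under a two-level price; the price process, reward shaping, and learning parameters are all illustrative assumptions rather than a reproduction of any scheme cited above.

```python
import numpy as np

rng = np.random.default_rng(4)

n_soc = 5                      # discretized state-of-charge levels
prices = [0.1, 0.4]            # low / high price (illustrative)
actions = [-1, 0, +1]          # discharge, idle, charge (one SoC step)
Q = np.zeros((n_soc, len(prices), len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

soc, p = 2, 0
for step in range(50000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[soc, p]))
    new_soc = int(np.clip(soc + actions[a], 0, n_soc - 1))
    # Reward: pay the current price when charging, earn it when discharging.
    reward = -prices[p] * (new_soc - soc)
    new_p = rng.integers(len(prices))          # price switches at random
    Q[soc, p, a] += alpha * (reward + gamma * Q[new_soc, new_p].max() - Q[soc, p, a])
    soc, p = new_soc, new_p

print("Greedy action at low price :", actions[int(np.argmax(Q[2, 0]))])   # typically charge
print("Greedy action at high price:", actions[int(np.argmax(Q[2, 1]))])   # typically discharge
```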
## 5 Safeguarding the Future: Navigating Security Concerns in Smart Grid Technology
Across every facet of the smart grid--generation, transmission, and distribution--there exists a substantial vulnerability to cyber-attacks, with several such attacks having already occurred. Consequently, data security emerges as a primary apprehension within the smart grid framework, prompting extensive efforts towards identifying cyber-security threats and establishing protective measures to counter them. Many of these defense strategies have harnessed machine learning techniques, as traditional methods often prove inadequate in this novel data-driven, non-linear context. Nonetheless, further exploration is necessary to formulate effective solutions for other security considerations like physical threats, network assaults, and encryption breaches. The refinement of communication systems, bolstered by enhanced protective measures, is also imperative. Therefore, security concerns hold a pivotal role in shaping the implementation of these technologies within the power grid [10].
The paper [30] introduces a novel approach to address the challenges of secure and efficient data sharing in smart grids using federated learning-enabled AIoT (Artificial Intelligence of Things). The proposed scheme focuses on enhancing the security, efficiency, and privacy concerns that hinder the implementation of AIoT services based on federated learning within smart grids. The primary objective of the scheme is to create a secure and efficient framework for sharing private energy data while maintaining data privacy and optimizing
communication.
The scheme presents an innovative edge-cloud-assisted federated learning framework that aims to overcome issues related to low-quality shared models, non-IID (non-independent and identically distributed) data distributions, and unpredictable communication delays. By incorporating both edge and cloud resources, the framework facilitates communication-efficient and privacy-preserving sharing of energy data within smart grids. Additionally, the scheme introduces a mechanism for evaluating local data in the federated learning process, considering the heterogeneous and non-IID nature of user data.
Federated Learning (FL) is a decentralized approach to machine learning where edge agents generate models based on their local data and contribute to a global model on a server. This process maintains data privacy to some extent since raw data is not directly shared with the server. Only a fraction of interested agents participate in the FL training process, simplifying the process and improving efficiency. The FL process involves multiple iterations of server-agent interaction to train the model and produce a smart model by learning from local agents. Initially, a global model is shared with all participants, and each agent performs local training epochs by dividing their data into batches and optimizing the model. The prepared local models are then shared with the server. The FL process follows a series of three steps. Firstly, the FL server initializes a global model for a specific task and chooses a group of agents known as participants. This global model is then distributed to the participating agents. Secondly, each participant receives the global model and conducts on-device training using their local data while also benefiting from the global model. The participant generates a local model based on this training and shares it with the server. Finally, the FL server collects the local models from the participants and combines their model parameters to create an updated global model. A new subset of agents is selected by the server, and the updated global model is shared with them. This iterative process continues until the global model reaches the desired level of convergence.[31]
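A bare-bones numerical sketch of the three FL steps described above (global model broadcast, local training, weighted aggregation) is given below for a linear model trained with gradient descent; the client data, learning rate, and round count are illustrative, and the sketch deliberately omits the privacy, incentive, and edge-cloud machinery of the scheme in [30].

```python
import numpy as np

rng = np.random.default_rng(5)
true_w = np.array([2.0, -1.0])

# Each agent holds its own (non-shared) local data.
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(0, 0.1, 100)
    clients.append((X, y))

global_w = np.zeros(2)
for rnd in range(20):                       # FL rounds
    local_models, sizes = [], []
    for X, y in clients:                    # local training starting from the global model
        w = global_w.copy()
        for _ in range(10):                 # local epochs of gradient descent
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_models.append(w)
        sizes.append(len(y))
    # Server aggregates the local models, weighted by data size (FedAvg-style).
    global_w = np.average(local_models, axis=0, weights=sizes)

print("Federated estimate:", global_w, "true:", true_w)
```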
Furthermore, the scheme formulates the optimization problems and payoff functions for both users and Energy Service Providers (ESPs) under the context of the federated learning framework. To incentivize user participation and encourage high-quality contributions to the model, a two-layer deep reinforcement learning (DRL)-based incentive algorithm is devised. Overall, the proposed scheme presents a comprehensive approach to addressing security, efficiency, and privacy concerns in energy data sharing within smart grids. By leveraging federated learning, edge-cloud collaboration, and innovative incentive mechanisms, the scheme holds the potential to enhance the overall performance and effectiveness of AIoT services in the context of smart grids. [30]
## 6 The Role of Distributed Decision-Making and Information Processing in Future Energy Systems
The broad adoption of distributed energy resources (DERs), particularly renewable sources of decentralized generation, offers significant potential to greatly improve the efficiency of electricity distribution. However, as DERs become a substantial part of the total energy on the distribution network, inadequate integration processes could lead to imbalanced and unstable temporary behaviors. These behaviors could strain the existing infrastructure, possibly resulting in power outages and drops in voltage. In a future scenario of a smart grid, consumers equipped with renewable generation capabilities like solar panels and wind turbines could use predictive strategies to optimize how they consume energy. They would determine when to use, sell, or store the renewable energy they generate themselves. This active engagement with the electric grid and other consumers would replace the current passive energy consumption model.
Communication among distributed nodes (consumers) equipped with generation, storage, and consumption capabilities could establish a decentralized framework for decision-making and control. This approach would lead to improvements in both overall energy efficiency and cost reduction. However, to fully realize the potential benefits of the smart grid concept, a systematic set of design principles is necessary, along with a comprehensive protocol framework that facilitates interaction among the various entities in the grid. Furthermore, robust and computationally efficient control and optimization algorithms are crucial elements of this endeavor.
The Energy Management System (EMS) utilizes the optimal power flow (OPF) algorithm to optimize the performance of Distributed Energy Resources (DERs) within a microgrid. By analyzing power network data, the OPF algorithm determines optimal set points for DERs [32]. Its primary objective is twofold: minimize the overall cost of the power system while adhering to technical and physical constraints of both the power network and the DERs. The speed of optimization varies based on specific scenarios. To achieve this, the EMS relies on diverse input data, including predictions for renewable generator output, local load, Battery Energy Storage Systems (BESSs) charge status, operational limits of dispatchable generators and BESSs, microgrid security and reliability thresholds, ancillary service requirements from the utility grid, and anticipated energy prices [33].
After the OPF optimization, the determined optimal set points are relayed to the primary control systems of the DERs. These set points serve as references for active and reactive power, instructing the internal voltage and current control loops of the fast and dynamically efficient power electronic converters integrated within the DERs. This meticulous process ensures accurate tracking of the designated power values, leading to heightened stability and enhanced overall performance of the microgrid.
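The flavour of the OPF step can be conveyed by a much-simplified single-period economic dispatch: choose DER set points that meet the load at minimum cost subject to generator limits. The costs, limits, and load below are placeholder numbers, and a real OPF would additionally include network constraints, reactive power, and BESS dynamics.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: power from [diesel generator, battery discharge, grid import] in kW.
cost = np.array([0.30, 0.05, 0.20])        # $/kWh (illustrative)
p_min = np.array([0.0, 0.0, 0.0])
p_max = np.array([50.0, 20.0, 100.0])
load = 90.0                                 # kW demand after local PV generation

# Equality constraint: total dispatched power equals the load.
res = linprog(c=cost,
              A_eq=np.ones((1, 3)), b_eq=[load],
              bounds=list(zip(p_min, p_max)))

print("Optimal set points (kW):", res.x)    # battery and grid are used first, diesel last
print("Cost ($/h):", res.fun)
```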
The interaction between utility companies and consumers, involving both
energy generation and consumption aspects, plays a crucial role in the design of smart grids. This interaction differs from traditional power grid planning, which mainly focuses on matching generation with demand. By integrating considerations of generation and consumption, the scheduling of power and loads can be optimized more effectively. However, the nature of this utility-consumer interaction varies depending on the time scales of interaction periods and the different consumer units within the hierarchical structure.
For example, in a microgrid setup, smart homes act as consumer units at the microgrid level, while the microgrid controller serves as a consumer unit at the feeder level, which is one level above. To accommodate the diverse ways in which generation and consumption sides interact, a two-stage model for utility-consumer interaction has been suggested. This model consists of two phases: initial scheduling, which corresponds to long-term planning, and real-time scheduling, which pertains to short-term planning.[34][35]
The implementation of demand response (DR) strategies during the initial scheduling establishes a baseline operating point for nodes in the grid, characterized by consistent consumer load patterns. However, since these DR schemes rely on predictions of renewable generation over the scheduling period (e.g., 12 or 24 hours), they may not effectively handle the real-time fluctuations and intermittency in the power grid due to the inevitable differences between actual and predicted renewable generation. To tackle this challenge and improve overall efficiency and stability, it's advantageous to introduce interactions at a finer time scale (short-term) between utility companies and consumers (both on the generation and consumption sides), nearly in real time. From the consumers' perspective, who often prioritize their own interests, the objective is to make optimal decisions that maximize cumulative profits or minimize expenses by utilizing their local distributed energy resources (DERs).
Given the relatively consistent load profiles established by DR schemes, consumers can choose to sell surplus renewable energy to the grid while storing the rest for future use. These decisions are guided by real-time pricing information, allowing consumers to dynamically respond to market conditions.
Designing a distributed decision scheme for selling excess energy within a microgrid entails addressing several important factors. Firstly, the scheme should be adaptable to changes in consumer behavior, accommodating the potential number of smart homes within the microgrid and their dynamic shifts between buying and selling modes. Secondly, the scheme must enable selling-mode smart homes to express their willingness to sell surplus energy units, ensuring that the units sold align with high willingness levels. To quantify this willingness, a measure needs to be established, considering potential variations across different smart homes and even within the same home due to changing energy unit availability. As a result, the microgrid controller must ensure that energy units sold consistently correspond to elevated willingness metrics across different consumers. Lastly, the scheme should be resilient against collusion among self-interested selling-mode smart homes, which might provide false willingness metrics for personal gain. By addressing these factors, a comprehensive distributed decision scheme can be developed to enhance the efficiency of energy
sales within the microgrid.[36]
|
2305.01873 | Morphological Classification of Galaxies Using SpinalNet | Deep neural networks (DNNs) with a step-by-step introduction of inputs, which
is constructed by imitating the somatosensory system in human body, known as
SpinalNet have been implemented in this work on a Galaxy Zoo dataset. The input
segmentation in SpinalNet has enabled the intermediate layers to take some of
the inputs as well as output of preceding layers thereby reducing the amount of
the collected weights in the intermediate layers. As a result of these, the
authors of SpinalNet reported to have achieved in most of the DNNs they tested,
not only a remarkable cut in the error but also in the large reduction of the
computational costs. Having applied it to the Galaxy Zoo dataset, we are able
to classify the different classes and/or sub-classes of the galaxies. Thus, we
have obtained higher classification accuracies of 98.2, 95 and 82 percents
between elliptical and spirals, between these two and irregulars, and between
10 sub-classes of galaxies, respectively. | Dim Shaiakhmetov, Remudin Reshid Mekuria, Ruslan Isaev, Fatma Unsal | 2023-05-03T03:20:18Z | http://arxiv.org/abs/2305.01873v1 | # Morphological Classification of Galaxies Using SpinalNet
###### Abstract
Deep neural networks (DNNs) with a step-by-step introduction of inputs, which is constructed by imitating the somatosensory system in human body, known as SpinalNet have been implemented in this work on a Galaxy Zoo dataset. The input segmentation in SpinalNet has enabled the intermediate layers to take some of the inputs as well as output of preceding layers thereby reducing the amount of the collected weights in the intermediate layers. As a result of these, the authors of SpinalNet reported to have achieved in most of the DNNs they tested, not only a remarkable cut in the error but also in the large reduction of the computational costs. Having applied it to the Galaxy Zoo dataset, we are able to classify the different classes and/or sub-classes of the galaxies. Thus, we have obtained higher classification accuracies of 98.2, 95 and 82 percents between elliptical and spirals, between these two and irregulars, and between 10 sub-classes of galaxies, respectively.
Galaxy Classifications, SpinalNet, Galaxy Zoo, Galaxy Morphology, DNN
## I Introduction
Galaxies are gravitationally bound objects composed of stars, together with the gas and dust filling the space between them, as well as dark matter, a still poorly understood form of matter of which they are largely made. The evolution of these objects, which are believed to have formed more than \(\sim 10^{10}\) years ago, together with their visual appearance (shape, distribution of matter, and structure), is expected to provide valuable information about their composition and their evolutionary changes. Categorizing galaxies into different classes is significant because astrophysicists routinely employ enormous databases to test existing ideas or generate new hypotheses explaining the physical processes that drive galaxies and star formation, and to better understand the nature of the universe. Thus, galaxy morphology can be considered a basic quantity not only for obtaining all-encompassing information on the evolution of galaxies, but also for a wide range of science in observational cosmology (see for example [2] and references therein).
The Galaxy Zoo initiative emerged from the need of astronomers at Oxford University to categorize galaxies according to their morphological classes and to better comprehend galactic dynamics [3]. Galaxy Zoo adopted a unique approach to bringing astronomy to the general public, whereby volunteers log on and assist in the classification of galaxies. There have been four different versions of the Galaxy Zoo project. The first was concerned with determining whether a galaxy was elliptical, spiral (together with its orientation), or the result of a merger of two galaxies. Galaxy Zoo 2, for instance, requested more detailed information on the brightest and most prominent SDSS galaxies [4]. These thorough categories include (among other things) bulge size measurements, the presence of bars, and the magnitude of the bulge. The Kaggle challenge used data obtained from this segment.
Various attempts have been made to examine galaxies and categorize them into various shapes. The authors of [5], for example, applied a set of uniform ensembles of classifiers which make use of neural networks and a locally weighted regression technique. The latter was used for an easy extraction of features from the image datasets. They report that, having pre-processed the galaxy images, they employed principal component analysis, which is effective not only in reducing the dimensionality of the data but also in distilling useful information from them. The homogeneous ensemble of locally weighted regression delivers the greatest results, with over 95 percent accuracy when considering only the two classes of galaxies, namely ellipticals and spirals, and over 91 percent when irregulars are also considered.
The authors of [6] implemented a deep neural network (DNN) architecture with a fixed set of scaling coefficients
known as EfficientNets for the morphological classification of galaxies. They used \(\sim 8.0\times 10^{4}\) galaxy images obtained from the Galaxy Zoo 2 dataset made available for the Kaggle competition, and classified galaxies into a total of 7 morphological classes: 3 categories of smooth (elliptical) shapes (completely round, in-between and cigar shaped), 2 categories of spirals (barred and unbarred), and single categories of lenticulars and irregulars. An accuracy of \(\sim 94\%\) and an F1 score of 0.8857 are reported with the implementation of EfficientNets.
In their attempt to develop an automated morphological classification of galaxies, the authors of [7] trained and validated a number of convolutional neural network (CNN) architectures. They applied them to more than 10,000 images of visually classified Sloan Digital Sky Survey (SDSS) objects to sort them into both 3 morphological classes (elliptical, lenticular, spiral) and 4 morphological classes (these three plus irregulars/miscellaneous). They claim to have developed a novel CNN architecture that outperforms previous models in both 3- and 4-way classification, with overall classification accuracies of 83% and 81%, respectively. Comparing the accuracies of binary classifications across all of the above-mentioned four classes, they find that ellipticals and spirals are the easiest to discern (\(>\)98 percent accuracy), while spirals and irregulars are the most difficult to distinguish (78 percent accuracy). They investigated the plausible physical reasons for the incorrectly classified images; for example, most of the lenticular galaxies incorrectly classified as ellipticals had higher stellar masses, and similar trends are also mentioned. In addition, they applied the same CNN to a small sample of Galaxy Zoo data in order to classify it into the above-mentioned morphological classes, obtaining an accuracy of 92% (binary), 82% (3-way) and 77% (4-way).
Research on similar questions has also used machine learning rather than the deep-learning approaches mentioned so far. For instance, the authors of [8] applied an ensemble of machine-learning approaches to objects in the SDSS data release 6 that had been classified by the Galaxy Zoo project, in order to sort the galaxies into three morphological classes, namely early types, spirals, and point sources/artefacts. The authors also concluded that using machine-learning algorithms to perform morphological classification for the next generation of wide-field imaging surveys is quite promising, and that the Galaxy Zoo catalogue will continue to serve as an essential training set for this purpose. Following that, various works have picked up these datasets and conducted galaxy classification research, some of which include [9, 10, 11].
## II Morphological Galaxy Classification
In general, the system widely used by astronomers to classify galaxies into various classes based on their structure and appearance is commonly referred to as morphological galaxy classification. The most common scheme is the system devised by Sir Edwin Hubble in 1936, shown in Fig. 1. Hubble's original classes, later modified and extended by others to include more types, are the following:
(i) elliptical galaxies: (E0, E1, E2, E3, E4, E5, E6, E7)
(ii) spiral galaxies: (Sa, Sb, Sc) and barred spirals (SBa, SBb, SBc)
(iii) irregulars.
This scheme is commonly referred to as the "Hubble Tuning Fork" [12].
In this work we classify galaxies in the following manner: first, we use a 2-class classification between ellipticals and spirals; next, a 3-class classification obtained by adding the irregulars; and lastly, a 10-class classification, namely E0, E3, E7, Sa, Sb, Sc, SBa, SBb, SBc, Irr. The contribution of this paper is to show that SpinalNet, a DNN with a gradual input, is an efficient technique for galaxy morphological (image) classification, as the improved accuracies (up to \(\sim\)98.2%) demonstrate.
The remaining parts of the paper are organized as follows: in the next part (Section III), the experimental data used, along with its pre-processing, is briefly presented. In Section IV the implementation of SpinalNet on the Galaxy Zoo datasets used in this work is explained. In Section V the complete experimental results, with the corresponding classification accuracy measurements and the confusion matrix for the test data, are presented along with the corresponding discussions. The conclusions of this research work are presented in Section VI.
## III Materials and Methods
In this research project, we first classified the galaxy image dataset into the three main morphological classes, namely ellipticals, spirals and irregulars, using SpinalNet, a deep neural network (DNN) code with a gradual input. The dataset was obtained from Kaggle [3]. The image set was categorized into the three respective classes, with the images divided into a training part and a validation part. First, we split the dataset in a 70/30 proportion for training and
Fig. 1: Hubble Tuning Fork.
testing sets, respectively. The training folder was further divided into subfolders for the corresponding morphological classes, and the testing folder was organized in the same way. Following the same procedure, we also ran experiments classifying the galaxy images into two (2) and ten (10) morphological classes.
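A minimal sketch of this folder layout and 70/30 split using torchvision is shown below; the directory name, image size, and the use of an in-memory `random_split` (instead of physically separate train/test folders) are assumptions for illustration and would need to match the actual Kaggle download.

```python
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# Expected layout (hypothetical paths): galaxy_data/E, galaxy_data/S, galaxy_data/Irr
transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
full_set = datasets.ImageFolder("galaxy_data", transform=transform)

n_train = int(0.7 * len(full_set))
train_set, test_set = random_split(
    full_set, [n_train, len(full_set) - n_train],
    generator=torch.Generator().manual_seed(0))

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)
print(full_set.classes, len(train_set), len(test_set))
```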
In order to perform image classification, we distributed the image datasets into folders corresponding to their classes. We took the raw data from the Galaxy Zoo project datasets and prepared them by removing images which do not pass a manually configured threshold. We should note that, even after cleaning the image dataset in this way, we still had to manually remove some images that did not fit the requirements of the individual galaxy classes.
In our first runs we obtained an accuracy of around 97%, but we found that this result was not conclusive, as most of the images in the dataset were elliptical galaxies, which were predicted more effectively than the other classes in our dataset. We then balanced the classes so that they had comparable sizes. A total of 4,564 images were used for classification. Fig. 2 shows some typical examples of images taken from the Kaggle dataset as predicted by SpinalNet, with their corresponding class labels and the probabilities acquired from the voting.
We have to state, however, that our image dataset was limited, and some poorly represented galaxy types, such as the irregular ones, could hardly reach the same number of images per class even when the lowest possible threshold was set.
Irregular galaxies are also hard to recognize for the non-expert volunteers involved in the Galaxy Zoo project. The same problem has been raised when assessing data quality in Citizen Science, where data are collected using crowd-sourced scientific research methods instead of hiring a group of experts. Not all crowd-sourced research projects can rely on the help of non-expert volunteers; one example is a situation in which they have to apply a series of methods or perform repetitive tasks for a long period of time. In such cases, untrained or non-expert human resources used in a project carry a high risk of corrupting the data [13]. Thus, we manually removed the false-positive images in order to achieve sensible results.
## IV Implementation of the SpinalNet on the Galaxy Zoo Datasets
Recent research shows that image classification with NN architectures achieves high accuracy even on small image sets; some of these works include [14, 15, 16, 17, 18, 19]. The SpinalNet algorithm, for example, is an architecture which mimics the natural way the body reacts to stimuli [1]. We chose it among all others because it has been implemented on various benchmark datasets and has proved to give state-of-the-art performance; moreover, we found its code simple to use. Among all the implementations we tested, we picked the simplest and fastest one, which uses the PyTorch library. Comparing the TensorFlow and PyTorch implementations, we observed that PyTorch works with the new CUDA API and exhibits backward compatibility.
Assuming all above SpinalNet was used to give another tryout to solve galaxy classification problem. SpinalNet is a deep neural network, the architecture of which is shown in Fig. 3. The proposed by [1] Neural Network consists of the input row, the intermediate rows of hidden layers, and the output row.
Furthermore, step by step training may allow us to gradually expand SpinalNet depth. Number of addition neurons are specified by number of classes passed as an input. In the middle row we have several hidden layers. As it is known, hidden layers accept a portion of input, and all layers not including the first one have outputs as an input data from prior layers. As a result, the output layer tots up all hidden neurons' outputs of the medium row. Depending on the number of classes, in our case we have 2, 3 and 10, architecture supposes to have exact same number of input nodes. Number of features will affect on how many hidden layers a given DNN will have. Eventually, the output layer has the same number of outputs as the number of provided galaxy data classes. These classes are: Elliptical and Spherical; E, S and Irr; E0, E3, E7, SBa, SBb, SBc, Sa, Sb, Sc, Irr. Nice side of this architecture is the fact that it discards irrelevant data. In some cases, data reduction can lead to a decrease in efficiency. Increasing the processing power will not lead to significant efficiency
Fig. 2: Examples of images from Kaggle dataset as predicted by SpinalNet with their labels and their probabilities.
improvement. For all our experiments and tests we used the PyTorch version of the SpinalNet code with version 1.9.1 of the torch library, and to decrease processing time we used CUDA Toolkit version 11.1.1. The PC specifications used for running the code were: CPU AMD Ryzen 7 5800H 3.20 GHz with 8 cores, GPU NVIDIA GeForce RTX 3060, RAM 16 GB.
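To make the gradual-input idea described above concrete, a schematic PyTorch sketch of a SpinalNet-style classifier head is shown below; the layer sizes, the number of spinal layers and the alternation of input halves are illustrative assumptions and do not reproduce the exact configuration of the original SpinalNet code.

```python
import torch
import torch.nn as nn

class SpinalHead(nn.Module):
    """SpinalNet-style classifier head: each hidden layer sees half of the backbone
    features plus the previous hidden output; the output layer aggregates all hidden
    outputs. Assumes an even number of input features."""
    def __init__(self, in_features=512, hidden=128, num_classes=3, num_layers=4):
        super().__init__()
        self.half = in_features // 2
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(self.half + (0 if i == 0 else hidden), hidden), nn.ReLU())
            for i in range(num_layers))
        self.out = nn.Linear(hidden * num_layers, num_classes)

    def forward(self, x):                                   # x: (B, in_features)
        halves = (x[:, :self.half], x[:, self.half:])
        outs, prev = [], None
        for i, layer in enumerate(self.layers):
            inp = halves[i % 2] if prev is None else torch.cat([halves[i % 2], prev], dim=1)
            prev = layer(inp)
            outs.append(prev)
        return self.out(torch.cat(outs, dim=1))             # logits for the galaxy classes
```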
## V Results and Discussions
We divided the data into training and testing sections, which were further divided into sub-folders according to the number of classes we were going to recognize. As part of our results, we would like to show and discuss Fig. 2. We can see that the overall accuracy is very high and in most cases reaches values of 98%, even with extraneous artifacts around the target galaxy (i.e., row 1, col 4; row 2, col 3; row 5, col 2). However, there are examples that were predicted incorrectly: the spiral galaxy in the first row was predicted as elliptical with a predicted value of 0.49973. Another interesting case we found is an edge-on spiral galaxy (row 2, col 2) which was predicted with a low value of 0.53070; this example may mean that SpinalNet needs additional features to be added, or that the image set needs more examples describing this case.
With a high degree of confidence, we can state that this DNN is a very effective tool for classifying galaxies according to their morphology. Starting with the Kaggle classes E and S, we can confidently state that we can recreate the human-eye classification with all methods and samples (accuracy \(>\) 98%), as shown in Table I. The task becomes much more complex when trying to discriminate between 10 classes, which would not be easy for the human eye either, and the best result is 82% using SpinalNet. However, if we simply use three classes, elliptical (E), spiral (S) and irregular (I) galaxies, we find an accuracy \(>\) 95% with the DNN for all data.
We achieved remarkable results classifying the 2-, 3- and 10-class datasets using the SpinalNet architecture. By configuring the hyperparameters of the neural network we reached the best values on all class setups, outperforming the results reported in [7], which showed accuracies of 92%, 82% and 77% for the binary, 3-way and 4-way classifications, respectively.
Fig. 4 shows the confusion matrix for the 10-class classification case, which gives the percentage of correctly predicted objects in each class. In the best-case scenario, the confusion matrix would contain ones only along the diagonal and zeros otherwise. However, we can see that SpinalNet is highly confused when predicting barred spiral galaxies, and some "barred" cases produce false-negative results; for instance, SBb is wrongly classified as Sb. This leaves room for improvement in future research work.
## VI Conclusion
In this work we have implemented a galaxy classification algorithm based on a deep neural network (DNN) with a gradual input. The original architecture known as SpinalNet [1], which mimics the human somatosensory system, was used on preprocessed Galaxy Zoo data. After extensive preparation work we were able to build image datasets that fit the needs of the project. Eventually, we reached a significant error reduction while running the image classification process on a single laptop with the CUDA framework, which shows that our approach can be reproduced
Fig. 4: Confusion matrix for the 10-class classification.
Fig. 3: The architecture of the used DNN.
even on a domestic computational unit. Applying it to the Galaxy Zoo dataset, we are able to distinguish between the two main classes of galaxies, elliptical and spiral, with an accuracy above 98%, reach close to 95% when irregular galaxies are added as a third class, and reach 82% in the 10-class case.
## Acknowledgment
D. Shaiakhmetov and F. Unsal would like to thank Ala-Too International University for the scholarship opportunities provided to them. We would like to thank Dr. S. Cankurt and Dr. D. Kabir for the fruitful discussions we had on this work.
|
2310.13019 | Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class
Manipulation Using DeepFool Algorithm | The susceptibility of deep neural networks (DNNs) to adversarial attacks
undermines their reliability across numerous applications, underscoring the
necessity for an in-depth exploration of these vulnerabilities and the
formulation of robust defense strategies. The DeepFool algorithm by
Moosavi-Dezfooli et al. (2016) represents a pivotal step in identifying minimal
perturbations required to induce misclassification of input images.
Nonetheless, its generic methodology falls short in scenarios necessitating
targeted interventions. Additionally, previous research studies have
predominantly concentrated on the success rate of attacks without adequately
addressing the consequential distortion of images, the maintenance of image
quality, or the confidence threshold required for misclassification. To bridge
these gaps, we introduce the Enhanced Targeted DeepFool (ET DeepFool)
algorithm, an evolution of DeepFool that not only facilitates the specification
of desired misclassification targets but also incorporates a configurable
minimum confidence score. Our empirical investigations demonstrate the
superiority of this refined approach in maintaining the integrity of images and
minimizing perturbations across a variety of DNN architectures. Unlike previous
iterations, such as the Targeted DeepFool by Gajjar et al. (2022), our method
grants unparalleled control over the perturbation process, enabling precise
manipulation of model responses. Preliminary outcomes reveal that certain
models, including AlexNet and the advanced Vision Transformer, display
commendable robustness to such manipulations. This discovery of varying levels
of model robustness, as unveiled through our confidence level adjustments,
could have far-reaching implications for the field of image recognition. Our
code will be made public upon acceptance of the paper. | S. M. Fazle Rabby Labib, Joyanta Jyoti Mondal, Meem Arafat Manab, Sarfaraz Newaz, Xi Xiao | 2023-10-18T18:50:39Z | http://arxiv.org/abs/2310.13019v4 | Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
###### Abstract
Deep neural networks (DNNs) have significantly advanced various domains, but their vulnerability to adversarial attacks poses serious concerns. Understanding these vulnerabilities and developing effective defense mechanisms is crucial. DeepFool, an algorithm proposed by Moosavi-Dezfooli et al. (2016), finds minimal perturbations to misclassify input images. However, DeepFool lacks a targeted approach, making it less effective in specific attack scenarios. Also, in previous related works, researchers primarily focus on success, not considering how much an image is distorted, the integrity of the image quality, or the confidence level of the misclassification. So, in this paper, we propose Enhanced Targeted DeepFool, an augmented version of DeepFool that allows targeting specific classes for misclassification, and we also introduce a minimum confidence score requirement hyperparameter to enhance flexibility. Our experiments demonstrate the effectiveness and efficiency of the proposed method across different deep neural network architectures while preserving image integrity as much as possible and keeping the perturbation rate as low as possible. By using our approach, the behavior of models can be manipulated arbitrarily using the perturbed images, as we can specify both the target class and the associated confidence score, unlike other DeepFool-derivative works, such as Targeted DeepFool by Gajjar et al. (2022). Results show that one of the deep convolutional neural network architectures, AlexNet, and one of the state-of-the-art models, the Vision Transformer, exhibit high robustness to getting fooled. This approach can have broader implications, as our tuning of the confidence level can expose the robustness of image recognition models. Our code will be made public upon acceptance of the paper.
## 1 Introduction
Deep neural networks (DNNs) have revolutionized many fields including but not limited to speech recognition [5, 16], computer vision [4, 14], natural language processing [31], and even game playing [26]. However, their high accuracy and robustness can be compromised by adversaries who intentionally manipulate the input data to fool the model. Such attacks can have serious consequences in real-world applications such as autonomous driving, medical diagnosis, and security systems. Therefore, understanding the vulnerabilities of DNNs to adversarial attacks and developing effective defense mechanisms has become an important research area in machine learning and computer security. DeepFool is one of the algorithms, proposed by Moosavi-Dezfooli _et al_. [20], which iteratively finds the minimum amount of perturbations required to push a given input image to a misclassified region of the feature space. They use the following equation that defines an adversarial perturbation as the minimal perturbation \(r\) that is sufficient to change the estimated label \(\hat{k}(x)\):
\[\Delta(x;\hat{k}):=\min_{r}||r||_{2}\text{ subject to }\hat{k}(x+r)\neq\hat{k}(x) \tag{1}\]
where, \(x\) is an image, and \(\hat{k}(x)\) is the estimated label. With this, an image can be misclassified with a minimal amount of perturbations. However, this approach is not focused on any specific target. Instead, the images are classified as a different class with a minimal amount of perturbation. Thus, if an image \(x\) can be misclassified as some class \(A\) with less perturbation than some other class \(B\), DeepFool
will choose to use the perturbation that misclassifies \(x\) as class \(A\).
While small perturbations from untargeted attacks can fool a deep neural network into misclassifying data, targeted attacks may be more harmful as they aim to deceive the DNN into producing a specific output. An attacker may be able to deceive a self-driving car into misidentifying a stop sign as a green light, or fool security systems that use DNNs for face recognition. Therefore, an accurate targeted attack method for fooling DNNs is necessary to make the models more robust against this type of attack. While the DeepFool algorithm is a good approach to finding minimal perturbations that misclassify data towards an unspecified target, a targeted approach is much needed.
The approach by Gajjar _et al_. [10] proposes an algorithm that can misclassify an image into a specific target class. However, it has a limited success rate and, additionally, does not offer any option to control the different hyperparameters inside the algorithm.
To fill this gap, in this paper we propose Enhanced Targeted DeepFool, or ET DeepFool for short, an extension of the DeepFool algorithm in which we can not only choose the target class for misclassification but also parametrize DeepFool further by setting a minimum confidence score requirement. We show that the algorithm is simpler than the original in terms of time complexity and effective in fooling different deep neural network architectures towards specific classes. We then examine the performance of the proposed method. Our experiments show that the proposed system performs very efficiently on different machines while keeping the perturbed image very close to the original one. It can be used to probe the robustness of a model, as this is, to the best of our knowledge, the first perturbation method in which a performance metric such as the confidence level can be arbitrarily specified. This effectively tells us with what amount of perturbation an existing image recognition model can be fooled into classifying one class of images as another with a strong error rate and confidence score. Previous works would stop at fooling the model without looking at the confidence, so while those perturbations worked, the models would often misclassify the perturbed images with low confidence.
## 2 Related Works
Adversarial attacks are done on data to perturb it to some extent so that it gets misclassified by an ML model. These attacks can be implemented in several ways in the form of black-box, white-box, and grey-box attacks. In this section, we cover existing literature related to different adversarial attacks against image classification models as well as works done on adversarial defense.
Figure 1: Comparison between original DeepFool and our proposed Enhanced Targeted DeepFool. Here, the perturbation image is scaled 20 times for visibility.
### White-Box Attacks:
In white-box attacks, the attackers have complete knowledge of the target model's architecture, weights, gradients, parameters, and training data. By having access to the model's internals, an attacker can explore its vulnerabilities more effectively and create adversarial examples that are highly effective at deceiving the model's predictions. There are several common white-box adversarial attack methods used in the field of image classification. One such method is Fast Gradient Sign Method (FGSM) [13], a type of adversarial attack for image classification that involves adding minimal noise to each pixel of an image, based on the gradient of the loss function with respect to the image. Another notable algorithm proposed by Carlini and Wagner [3] finds the smallest noise to be added to an image to misclassify it. This method goes beyond the FGSM approach by seeking the most effective perturbation for achieving misclassification. Jacobian-based Saliency Map Approach as proposed by Papernot [23] works by iteratively modifying the input features of a sample to maximize the difference between the predicted output and the true output. Additionally, the Universal Adversarial Perturbations by Moosavi-Dezfooli [21] fools a deep neural network by adding the same perturbations to multiple images, causing it to misclassify all of the affected images. Furthermore, Duan [9] proposes an attack that drops information from the image instead of perturbing it.
### Black-Box Attacks:
In black box attack scenarios, the internal workings of the models are not available. The attacker usually has the input-output behavior and the probability labels of the target models. Gao [11] presents a black-box attack method called Patch-wise Iterative Fast Gradient Sign Method that outperforms pixel-wise methods in generating transferable adversarial examples against various mainstream models. Another approach by Zhao [35] is a GAN-based method that involves training a generator network to produce perturbations that can be added to the original input to create an adversarial example. This method proposed by Li [19], integrates Poincare distance into iterative FGSM and uses a metric learning approach to regularize iterative attacks. It generates transferable targeted adversarial examples by iteratively perturbing the input image in the direction of the target class while simultaneously minimizing the Poincare distance between the original and perturbed images. The Adversarial Patch attack, as proposed by Brown [2], takes a different approach. Instead of modifying the entire image, it focuses on creating a small patch that can be strategically placed in the real world. Furthermore, Su [27] creates a method that fools a network by only altering a single pixel of an image. Wei [33] propose a very different approach by manipulating image attributes such as brightness, contrast, sharpness instead of generating any adversarial noise.
### Data Poisoning Attacks
The paper by Shafahi [25] applies a one-shot poisoning attack by injecting a single poison instance into a clean dataset, causing the model to misclassify a specific target instance without negatively impacting its performance on other examples. Huang [17] propose MetaPoison, a meta-learning approach for crafting poisons to fool neural networks using clean-label data poisoning. Di [6] presents a camouflaging approach for targeted poisoning attacks based on the gradient-matching approach of Geiping [12]. Munoz-Gonzalez [22] propose pGAN, a scheme that generates poisoning points to maximize the error of a target classifier while minimizing detectability by a discriminator.
### Adversarial Defense
To protect DNNs from adversarial attacks and improve their robustness, various methods are applied. These methods include training a network with adversarial examples, detecting adversarial examples instead of classifying them, and using randomness to defend the networks. One notable improvement in performance was introduced by Ding [7], which combines the cross-entropy loss with a margin maximization loss term applied to correctly classified examples. In a different vein, Xu [34] propose a method called feature squeezing, which decreases the search space available to an adversary by coalescing samples that correspond to many different feature vectors into a single sample. Furthermore, Zheng [36] propose a method that requires modification of the output of the classifier by performing hypothesis testing and using Gaussian mixture models to detect adversarial examples.
## 3 Methodology
In this section, we first discuss the background of vanilla DeepFool and then introduce our approach: Enhanced Targeted DeepFool.
### Background of Vanilla DeepFool
For a multiclass classifier, the classes are separated by decision hyperplanes, and an input \(\mathbf{x}\) is classified according to the region it lies in. The original DeepFool algorithm finds the closest hyperplane and pushes \(\mathbf{x}\) towards it, misclassifying it with the minimum amount of perturbation. This is done iteratively until the image is misclassified. In the end, the algorithm returns the total perturbation \(\mathbf{\hat{r}}\). The following equations are used to calculate the closest hyperplane \(\hat{l}\), where \(\mathbf{w}_{k}^{\prime}\) is a vector that points in the direction of the decision boundary
between the predicted label and the \(k_{th}\) largest activation. This is done by subtracting the gradient of the predicted label from the gradient of the \(k_{th}\) largest activation. \(f^{\prime}_{k}\) is the corresponding difference between the classifier outputs:
\[\mathbf{w}^{\prime}_{k}\leftarrow\nabla f_{k}(\mathbf{x}_{i})-\nabla f_{\hat{k}(\mathbf{x}_ {0})}(\mathbf{x}_{i}) \tag{2}\]
\[f^{\prime}_{k}\gets f_{k}(\mathbf{x}_{i})-f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i}) \tag{3}\]
After calculating \(\mathbf{w}^{\prime}_{k}\) and \(f^{\prime}_{k}\), the following equation calculates the closest hyperplane \(\hat{l}\) and the minimum amount of perturbation for \(k_{th}\) iteration, \(r_{i}\):
\[\hat{l}\leftarrow\operatorname*{arg\,min}_{k\neq\hat{k}(\mathbf{x}_{0})}\frac{|f^ {\prime}_{k}|}{||\mathbf{w}^{\prime}_{k}||_{2}} \tag{4}\]
\[\mathbf{r}_{i}\leftarrow\frac{|f^{\prime}_{i}|}{||\mathbf{w}^{\prime}_{\hat{l}}||_{2 }^{2}}\mathbf{w}^{\prime}_{\hat{l}} \tag{5}\]
Whenever \(\hat{k}(\mathbf{x}_{i})\) changes into a different label, the loop stops, and the value of total perturbation, \(\hat{r}\) is returned.
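For illustration, one iteration of this procedure can be sketched in PyTorch as follows; this is a simplified reading of Eqs. (2)-(5) that loops over all classes rather than only the best \(n\), and assumes a standard classifier applied to a single image tensor with a batch dimension.

```python
import torch

def deepfool_step(model, x, orig_label, num_classes):
    """One DeepFool iteration: pick the closest decision boundary and step towards it."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)[0]                                                # (num_classes,)
    grad_orig = torch.autograd.grad(logits[orig_label], x, retain_graph=True)[0]

    best = None
    for k in range(num_classes):
        if k == orig_label:
            continue
        grad_k = torch.autograd.grad(logits[k], x, retain_graph=True)[0]
        w_k = grad_k - grad_orig                                        # Eq. (2)
        f_k = (logits[k] - logits[orig_label]).item()                   # Eq. (3)
        ratio = abs(f_k) / (w_k.norm() + 1e-8)                          # Eq. (4)
        if best is None or ratio < best[0]:
            best = (ratio, w_k, f_k)
    _, w_l, f_l = best
    return (abs(f_l) / (w_l.norm() ** 2 + 1e-8)) * w_l                  # Eq. (5)
```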
### Targeted DeepFool
Gajjar _et al_. [10] proposes two algorithms, the first one is a basic iterative approach and the other is a recursive approach. The first approach iteratively goes through the image and adds perturbations until it gets misclassified as the target class or reaches a threshold. In the recursive approach, the algorithm is applied repeatedly until the adversarial sample reaches the target hyperplane. If the adversarial sample cannot reach the target hyperplane after a certain number of iterations, the algorithm is applied again with a new target hyperplane. The recursive approach is found to be more effective than the basic algorithm on experimental grounds.
### Our Approach: Enhanced Targeted DeepFool
Now, to make the original DeepFool algorithm misclassify an image into a specific target class, we propose the algorithm shown in Algorithm 2 below.
Instead of running the loop until the image gets misclassified, we run it while the current label is not equal to the target label. We also remove the for loop shown in line 6 of Algorithm 1, because we no longer calculate the gradients of the best \(n\) classes that have the highest probability of being predicted after the original class. This also decreases the time complexity by a factor of \(O(n)\). We change equations 2 and 3 to the ones shown below, where \(\mathbf{w}^{\prime}_{k}\) now calculates the difference between the gradients for the target class and the true class, and \(f^{\prime}_{k}\) calculates the perturbation needed with respect to the target class and the true class.
\[\mathbf{w}^{\prime}_{k}\leftarrow\nabla f_{t}(\mathbf{x}_{i})-\nabla f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i}) \tag{6}\]
\[f^{\prime}_{k}\leftarrow f_{t}(\mathbf{x}_{i})-f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i}) \tag{7}\]
Since we are not comparing between the best \(n\) classes anymore we change the equation 4 to the one below:
\[\hat{l}\leftarrow\frac{|f^{\prime}_{k}|}{||\mathbf{w}^{\prime}_{k}||_{2}} \tag{8}\]
Only with these small changes, we are able to successfully misclassify an image to a specific class of our choosing.
We have also added another condition to the algorithm called minimum confidence, \(\mathbf{c}_{min}\). This lets users define a minimum confidence requirement as a hyperparameter. This results in a perturbed image which not only gets misclassified as a specific target class but also retains a high confidence score.
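A minimal PyTorch sketch of the resulting loop is given below. It follows Eqs. (6)-(8) and the confidence condition described above, but details such as clipping to the valid pixel range and the exact handling of the overshoot constant are simplifications rather than a faithful reproduction of the released implementation.

```python
import torch

def et_deepfool(model, x, target, c_min=0.95, overshoot=0.02, max_iter=100):
    """Perturb x until it is classified as `target` with confidence >= c_min."""
    x0 = x.clone().detach()
    with torch.no_grad():
        orig_label = model(x0).argmax(dim=-1).item()
    r_total = torch.zeros_like(x0)
    for _ in range(max_iter):
        x_i = (x0 + (1 + overshoot) * r_total).detach().requires_grad_(True)
        logits = model(x_i)[0]
        confidence = torch.softmax(logits, dim=-1)[target]
        if logits.argmax().item() == target and confidence >= c_min:
            break
        grad_t = torch.autograd.grad(logits[target], x_i, retain_graph=True)[0]
        grad_o = torch.autograd.grad(logits[orig_label], x_i)[0]
        w = grad_t - grad_o                                    # Eq. (6)
        f = (logits[target] - logits[orig_label]).item()       # Eq. (7)
        r_total = r_total + (abs(f) / (w.norm() ** 2 + 1e-8)) * w
    return r_total, (x0 + (1 + overshoot) * r_total).detach()
```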
## 4 Experimental Evaluation and Results
Here we apply our proposed method to multiple state-of-the-art image classification models and show our findings.
### Experiments
**Dataset.** We use the validation images of the ILSVRC2012 [24] or ImageNet dataset for our experiments. It contains 50 thousand images with a thousand different classes. It is widely regarded as a benchmark dataset in the field of computer vision and has played a crucial role in advancing the development of deep learning models. The dataset is large and diverse which makes it a comprehensive representation of real-world visual data. Due to its size and diversity, the models pre-trained on this dataset can learn rich feature representation that captures a good amount of visual information.
**Models.** We use several pre-trained deep convolutional neural networks to experiment with Enhanced Targeted DeepFool, namely ResNet50 [15], AlexNet [18], EfficientNet_v2 [30], GoogLeNet [28], and Inception_v3 [29]. Additionally, we also use one of the state-of-the-art architectures, the Vision Transformer (ViT) [8] image classification model, to test our method.
**Testbed Setups.** We use two different testbed devices to experiment with our classifiers for this targeted attack. One of them includes Intel Core i7-12700K processor, RTX 3070 Ti, and 32 GB RAM. The other one consists of an Intel Core i5 13400F, RTX 3060 Ti, and 32 GB RAM. We install PyTorch 2.0 and Torchmetrics 0.11.4 libraries in these testbed systems, keeping the version of Python on 3.10.
**Setting up Hyperparameters and Test Approach.** For the tests, we use the validation images and generate a random target class that is not the true class. These images, along with the generated target classes, are fed into our function. We use several hyper-parameters, such as overshoot, which is set to the default value of 0.02 and is used as a termination criterion to prevent vanishing updates and
oscillations. We set the minimum amount of confidence needed to 95% and the maximum number of iterations to 100. This is done because in most cases the confidence score of the perturbed image is otherwise lower than expected (\(\sim\) 60%); therefore, we add another condition in the while loop to make the code run until the desired confidence is reached, although this will lead to more perturbations. The code will run until these conditions are met or until the maximum number of iterations is reached. These hyper-parameters can be tuned to one's needs.
**Metrics.** We calculate the confidence score for the target class by passing the output tensor through the softmax function. It reflects the classifier's level of confidence in its predictions for the perturbed images. The magnitude of perturbations added to the images referred to as "Perturbations", quantifies the level of changes required to deceive the classifier. We find the change in an image by calculating the L2 distance between the perturbed and the original image and dividing it by the maximum L2 distance. We also calculate the Structural Similarity Index Measure (SSIM) [32] between the perturbed and original image and the number of iterations needed to perturb an image. The "Iterations" metric indicates the mean number of iterations required to achieve a successful misclassification. The "success" metric shows the percentage of images being successfully misclassified as the randomly selected target image. Finally, the computational time needed to execute the attack against a single image is denoted as "Time".
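The sketch below shows how these metrics could be computed for a single image pair with torchmetrics; normalising the perturbation by the L2 distance between an all-zero and an all-one image is our assumption about what "maximum L2 distance" refers to.

```python
import torch
from torchmetrics.functional import structural_similarity_index_measure

def attack_metrics(model, x, x_adv, target):
    """Confidence, perturbation percentage and SSIM for one (1, 3, H, W) image in [0, 1]."""
    with torch.no_grad():
        confidence = torch.softmax(model(x_adv), dim=-1)[0, target].item()
    max_l2 = torch.ones_like(x).norm()                            # all-zero vs. all-one image
    perturbation = ((x_adv - x).norm() / max_l2 * 100).item()     # percentage of max distance
    ssim = structural_similarity_index_measure(x_adv, x, data_range=1.0).item()
    return confidence, perturbation, ssim
```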
```
1:Input: Image \(\mathbf{x}\), classifier \(f\),
2:target class \(t\), minimum confidence \(\mathbf{c}_{min}\).
3:Output: Perturbation \(\mathbf{\hat{r}}\).
4:Initialize \(\mathbf{x}_{0}\leftarrow\mathbf{x},i\leftarrow 0\).
5:while \(\hat{k}(\mathbf{x}_{i})\neq t\) or \(c<\mathbf{c}_{min}\) do
6:\(\mathbf{w}^{\prime}_{k}\leftarrow\nabla f_{t}(\mathbf{x}_{i})-\nabla f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i})\)
7:\(f^{\prime}_{k}\leftarrow f_{t}(\mathbf{x}_{i})-f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i})\)
8:\(\hat{l}\leftarrow\frac{|f^{\prime}_{k}|}{||\mathbf{w}^{\prime}_{k}||_{2}}\)
9:\(\mathbf{r}_{i}\leftarrow\frac{|f^{\prime}_{k}|}{||\mathbf{w}^{\prime}_{k}||_{2}^{2}}\mathbf{w}^{\prime}_{k}\)
10:\(\mathbf{x}_{i+1}\leftarrow\mathbf{x}_{i}+\mathbf{r}_{i}\)
11:\(i\leftarrow i+1\)
12:endwhile
13:return \(\mathbf{\hat{r}}=\sum_{i}\mathbf{r}_{i}\)
```
**Algorithm 2** Enhanced Targeted DeepFool: multi-class case
### Results
In this subsection, we describe the findings after successfully experimenting with our proposed model under different scenarios.
After running the dataset through the different classifiers with our architecture, we see some interesting results in Table 1. Our method generates perturbed images with a mean confidence score of 0.97 for almost all of the models. Moreover, some classifiers, for instance ResNet50, EfficientNet_v2, GoogLeNet, and Inception_v3, show considerable vulnerability to our approach, with perturbation rates ranging from 2.14% to 3.37%. In contrast, ET DeepFool does not perform as well against AlexNet and ViT: these architectures require the largest amount of perturbation to fool, with rates of 9.08% and 11.27%, respectively. When it comes to image integrity, ResNet50, EfficientNet_v2, GoogLeNet, and Inception_v3 consistently exhibit the highest SSIM scores, whereas AlexNet and ViT have the lowest mean SSIM scores, with 92% and 89%, respectively. The attack against EfficientNet_v2, GoogLeNet, and Inception_v3 performs consistently well, with an iteration count ranging from 33 to 38. However, compared to other architectures, ET DeepFool struggles against ViT: it requires a significantly higher number of iterations to fool it, with an average of 67 iterations per image. The attack consistently succeeds against 97% of the dataset when applied to ResNet50, EfficientNet_v2, GoogLeNet and Inception_v3. Keeping with the trend of the other metrics, the attack has the lowest success rates against AlexNet and ViT, achieving 94% and 89%, respectively.
Figure 2: Comparison between the original DeepFool and our proposed Enhanced Targeted DeepFool algorithm.
Finally, the attack against EfficientNet_v2 and Inception_v3 shows faster execution times, requiring approximately 0.31 seconds and 1.14 seconds per image respectively. On the other hand, ViT requires the highest computational overhead, with an average execution time of 2.36 seconds per image.
In Table 2, we can see that after running the method on five sample images, the successful samples require only a small amount of perturbation, while the ones that fail do so even after a large amount of perturbation is added to the image. Overall, we find our method to be effective to various degrees against most of these classifiers. The results provide insights into the comparative strengths and weaknesses of the given image classifiers under adversarial conditions, which can aid in developing improved defense mechanisms and enhancing the overall robustness of image classifiers.
## 5 Discussion
During the development of the ET DeepFool method, we observe an interesting phenomenon. While the original method exhibits the ability to misclassify an image with a minimal amount of perturbation, we notice that the confidence score associated with the perturbed image is often very low, shown in Figure 1.
To tackle this issue, we introduce confidence threshold as a specific hyperparameter that allows us to specify a minimum confidence level to be attained. We aim to enhance the overall confidence of the perturbed images while maintaining their effectiveness in inducing misclassification to
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & Confidence\(\uparrow\) & Perturbation & SSIM\(\uparrow\) & Iterations & Success\(\uparrow\) & Time\(\downarrow\) \\ \hline ResNet50* & 0.97 & 2.14\% & 0.99 & 29 & 0.97 & 0.37 s \\ AlexNet* & 0.97 & 9.08\% & 0.92 & 25 & 0.94 & 0.52 s \\ EfficientNet\_v2** & 0.97 & 3.37\% & 0.98 & 33 & 0.97 & 0.31 s \\ GoogLeNet** & 0.97 & 3.45\% & 0.97 & 33 & 0.97 & 1.48 s \\ Inception\_v3* & 0.97 & 2.35\% & 0.99 & 38 & 0.97 & 1.14 s \\ ViT* & 0.96 & 11.27\% & 0.89 & 67 & 0.89 & 2.36 s \\ \hline \hline \end{tabular}
\end{table}
Table 1: The performance of ET Deepfool on various classifiers. These are the mean values from our experiment results. Here, * means the model is run on an RTX 3070 Ti, and ** means RTX 3060 Ti is used to run the classifier. Also, time means the average time it takes to run on a single image. Moreover, \(\uparrow\) means the higher the score, the better results, and vice versa.
Figure 3: A few sample images from our experiments. The perturbed classes are as follows: Traffic light as Manhole cover, School bus as Ambulance, Acoustic guitar as Assault Rifle. Perturbations are shown in the first row, scaled 20 times for visibility.
specific classes by incorporating this hyperparameter into our Enhanced Targeted DeepFool approach, an option that is not available in the original DeepFool method or other existing works. One consequence of introducing this hyperparameter is an increase in the amount of perturbation added to the original image. However, we find that the additional perturbations are negligible in magnitude. In Table 3, we can see that our method outperforms other attacks. FGSM shows a 0.97 confidence rate, which is easily obtainable with our method by tuning the minimum confidence threshold. Moreover, only JSMA has a higher success rate, but those experiments are done on MNIST, which has only 10 classes and only contains numerical images. C&W shows a surprising 100% success rate, but they only experiment with 1000 images and 100 classes, whereas we use 50000 images and 1000 classes. Moreover, all of the existing works focus solely on the success rate, whereas we make sure that wherever the attack succeeds, it does so with the highest confidence and the lowest perturbation possible.
The Targeted DeepFool algorithm by Gajjar _et al_. [10] uses the MNIST dataset, a \(28\times 28\) pixel grayscale dataset of digits, and the CIFAR10 dataset, which only contains 10 classes. The images in these datasets are not suitable for comparison against real-life images. They obtain a 77% success rate for CIFAR10, whereas our approach obtains a 95% success rate on ImageNet with 1000 classes. Additionally, their approach results in a higher average distortion for the MNIST dataset compared to our approach with the ImageNet dataset. They also claim that the adversarial images are visually indistinguishable from the original ones; however, this claim is questionable, as the images are too small to clearly assess the level of distortion. Furthermore, their most efficient approach, the recursive algorithm, is more time-consuming.
Furthermore, in contrast to the majority of untargeted attacks in the literature, our study does not seek to elucidate the degradation in model performance when retrained with perturbed images; consequently, we have chosen not to incorporate this aspect in our analysis.
In our results, we observe that ViT performs better against our attack. Benz _et al_. [1] also support this claim, as their results show that ViT is more robust against vanilla DeepFool. One possible reason behind this is the architecture of the model: since it turns an image into multiple patches in the first place, every patch is processed individually and perturbations are added to every patch, which is clearly visible in Figure 3. This is one of the likeliest core reasons for the comparatively higher amount of mean perturbation, as seen in Table 1.
Our attack has far-ranging implications, as this can tell us what kind of image inputs are most likely to fail for a given model. Our current implementation is using only one target class, but this attack can be extended to include multiple classes with individual confidence levels, which can then expose which classes of images are more likely to be mistaken in an image recognition model.
## 6 Conclusion
In this paper, we propose an algorithm, Enhanced Targeted DeepFool, which improves on the original DeepFool algorithm and makes it not only able to misclassify an image to a specific target class but also achieve the desired amount of confidence needed to fool a classifier. We show that our algorithm is simple and requires less time complexity. We demonstrate that the algorithm performs well against various state-of-the-art image classifiers. We also provide evi
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{Perturbation Amount} \\ \cline{2-7} Image No. & ResNet50 & AlexNet & EfficientNet & GoogLeNet & Inception\_v3 & ViT \\ \hline
3.jpg & 3.31\% & 4.90\% & 2.91\% & 5.59\% & 5.59\% & 11.92\% \\
63.jpg & 0.79\% & 20.10\% & 0.79\% & 1.24\% & 1.27\% & 3.34\% \\
328.jpg & 2.59\% & 7.13\% & 5.83\% & 4.99\% & 4.00\% & 56.26\%* \\
1125.jpg & 13.68\% & 98.27\%* & 1.49\% & 2.46\% & 1.47\% & 2.87\% \\
1398.jpg & 1.50\% & 6.94\% & 3.35\% & 3.09\% & 2.74\% & 65.38\%* \\ \hline \hline \end{tabular}
\end{table}
Table 2: Percentage of perturbations needed, on five sample images from the dataset. Here, * beside the perturbation value signifies that the attack does not succeed.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Attack Name & Success & Confidence & Perturbation \\ \hline FGSM [13] & 0.75 & 0.97 & - \\ C\&W [3] & **1.00** & - & - \\ JSMA [23] & 0.97 & - & 4.01 \\ UAP [21] & 0.95 & - & - \\ One Pixel [27] & 0.35 & 0.60 & - \\ PoTrip [19] & 0.88 & - & - \\ Targeted DeepFool [10] & 0.77 & - & 2.28** \\ ET DeepFool (Ours) & 0.95 & 0.95* & **2.14** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quantitative comparison among existing attacks and our proposed attack. Here, * represents it can be a variable (from 0 to 1), according to our preference. ** represents the average perturbation for MNIST dataset.
dence on how our approach performs better than an existing one. We are convinced that by training and fine-tuning the classifiers on the images generated by the algorithm, they will become more robust to future attacks.
Our future plans include the extension of this attack for multiple classes, each with its own confidence level, which would make our method effectively capable of finding which classes of images are easier to be mistaken for another class, leading to a measurement of the robustness of an image recognition model given a specific sample of images. To the best of our knowledge, our work is the first perturbation procedure where a post-perturbation performance metric like the confidence level can be tuned to its arbitrary precision level. This allows us to have a look at how little perturbation, i.e. the minimal level of change, it requires to make any model mistake a class of images as another with strong error rate and misplaced confidence. Another area of potential future research lies in devising an approach that minimizes computational requirements. Although the proposed algorithm already demonstrates improved time complexity, there can be more investigations done to optimize its computational demands to ensure broader practical applicability. Moreover, why the attack works less well for ViT and AlexNet can be further explored, so that a perturbation attack specifically tailored for them can be constructed. We hope that these findings contribute to the advancement of adversarial machine learning and provide a foundation for further exploration and refinement of targeted attack methods.
|
2302.04774 | 3D Human Pose and Shape Estimation via HybrIK-Transformer | HybrIK relies on a combination of analytical inverse kinematics and deep
learning to produce more accurate 3D pose estimation from 2D monocular images.
HybrIK has three major components: (1) pretrained convolution backbone, (2)
deconvolution to lift 3D pose from 2D convolution features, (3) analytical
inverse kinematics pass correcting deep learning prediction using learned
distribution of plausible twist and swing angles. In this paper we propose an
enhancement of the 2D to 3D lifting module, replacing deconvolution with
Transformer, resulting in accuracy and computational efficiency improvement
relative to the original HybrIK method. We demonstrate our results on commonly
used H36M, PW3D, COCO and HP3D datasets. Our code is publicly available
https://github.com/boreshkinai/hybrik-transformer. | Boris N. Oreshkin | 2023-02-09T17:08:43Z | http://arxiv.org/abs/2302.04774v4 | # 3D Human Pose and Shape Estimation via HybrIK-Transformer
###### Abstract.
HybrIK relies on a combination of analytical inverse kinematics and deep learning to produce more accurate 3D pose estimation from 2D monocular images (Li et al., 2021). HybrIK has three major components: (1) pretrained convolution backbone, (2) deconvolution to lift 3D pose from 2D convolution features, (3) analytical inverse kinematics pass correcting deep learning prediction using learned distribution of plausible twist and swing angles. In this paper we propose an enhancement of the 2D to 3D lifting module, replacing deconvolution with Transformer, resulting in accuracy and computational efficiency improvement relative to the original HybrIK method. We demonstrate our results on commonly used H36M, PW3D, COCO and HP3D datasets. Our code is publicly available [https://github.com/boreshkinai/hybrik-transformer](https://github.com/boreshkinai/hybrik-transformer).
## 1. Method
We use the same backbone and analytical IK approach (components 1 and 3 outlined in abstract) as original HybrIK (Li et al., 2021). However, we note that the bulk of the processing happens in component 2, the 3D lifting/keypoint estimation block, inherited from (Sun et al., 2018). This consists of three deconvolution layers followed by a \(1\times 1\) convolution to generate the 3D joint heatmaps. The soft-argmax operation is used to obtain 3D pose from the heatmap in a differentiable manner.
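For reference, a generic differentiable soft-argmax over per-joint 3D heatmap volumes can be sketched as follows; the tensor layout is an assumption for illustration and is not the exact HybrIK implementation.

```python
import torch

def soft_argmax_3d(heatmaps):
    """heatmaps: (B, J, D, H, W) per-joint volumes -> (B, J, 3) expected (x, y, z)."""
    B, J, D, H, W = heatmaps.shape
    probs = torch.softmax(heatmaps.reshape(B, J, -1), dim=-1).reshape(B, J, D, H, W)
    zs = torch.arange(D, dtype=probs.dtype, device=probs.device)
    ys = torch.arange(H, dtype=probs.dtype, device=probs.device)
    xs = torch.arange(W, dtype=probs.dtype, device=probs.device)
    z = (probs.sum(dim=(3, 4)) * zs).sum(-1)   # expectation along depth
    y = (probs.sum(dim=(2, 4)) * ys).sum(-1)   # expectation along height
    x = (probs.sum(dim=(2, 3)) * xs).sum(-1)   # expectation along width
    return torch.stack([x, y, z], dim=-1)
```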
This deconvolution-based lifting module is very costly both in terms of parameter count and GPU memory consumption. We propose to replace it with Transformer blocks, conveniently described in terms of the original primitives (Vaswani et al., 2017). The attention unit is defined as:
\[\begin{split}\text{ATT}(\mathbf{Q},\mathbf{K},\mathbf{V})= \text{softmax}(\mathbf{Q}\mathbf{K}^{T}/\sqrt{d})\mathbf{V},\end{split} \tag{1}\]
where \(d\) is model-wide hidden dimension width. The multi-head attention unit is defined as:
\[\begin{split}\mathbf{h}_{i}=\text{ATT}(\mathbf{Q}\mathbf{W}_{i}^ {Q},\mathbf{K}\mathbf{W}_{i}^{K},\mathbf{V}\mathbf{W}_{i}^{V}),\\ \text{MHA}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{concat}( \mathbf{h}_{1},\dots,\mathbf{h}_{h})\mathbf{W}^{O}.\end{split}\]
Transformer also uses feed-forward network FFN, which is a simple multi-layer perceptron.
Our proposed 3D keypoint estimation procedure consists of \(L\) Transformer blocks and each block \(\ell\in[1,L]\) performs the following computation. First, encode 2d image features:
\[\begin{split}\mathbf{e}_{\ell}^{\text{2d}}=\text{MHA}_{\ell}^{\text{2d}}(\mathbf{e}_{\ell-1}^{\text{2d}},\mathbf{e}_{\ell-1}^{\text{2d}},\mathbf{e}_{\ell-1}^{\text{2d}}),\\ \mathbf{e}_{\ell}^{\text{2d}}=\text{ReLU}(\text{LayerNorm}_{\ell}^{\text{2d}}(\mathbf{e}_{\ell}^{\text{2d}}+\mathbf{e}_{\ell-1}^{\text{2d}})),\\ \mathbf{e}_{\ell}^{\text{2d}}=\text{FFN}_{\ell}^{\text{2d}}(\mathbf{e}_{\ell}^{\text{2d}}).\end{split}\]
Here \(\mathbf{e}_{0}^{\text{2d}}\) is the source input of the transformer, a tensor of flattened 2D image features derived from the output of the ResNet backbone, summed with a learnable 2D position encoding and projected to the internal transformer width \(d\).
Second, encode target 3D templates:
\[\begin{split}\mathbf{e}_{\ell}^{\text{3d-t}}=\text{MHA}_{\ell}^{\text{3d}}(\mathbf{e}_{\ell-1}^{\text{3d}},\mathbf{e}_{\ell-1}^{\text{3d}},\mathbf{e}_{\ell-1}^{\text{3d}}),\\ \mathbf{e}_{\ell}^{\text{3d-t}}=\text{ReLU}(\text{LayerNorm}_{\ell}^{\text{3d}}(\mathbf{e}_{\ell}^{\text{3d-t}}+\mathbf{e}_{\ell-1}^{\text{3d}})),\end{split}\]
Here \(\mathbf{e}_{0}^{\text{3d}}\) is the target input of the transformer, a \(48\times d\) tensor containing templates for the following desired outputs: (i) 24 3D keypoints, (ii) 23 twist angles \(\Phi\), (iii) SMPL shape parameters
\begin{table}
\begin{tabular}{l|c c c|c c c|c c} \hline \hline & \multicolumn{3}{c}{3DPW} & \multicolumn{3}{c}{Human3.6M} & \multicolumn{3}{c}{MPI-INF-3DHP} \\ \cline{2-10} Method & PA-MPJPE \(\downarrow\) & MPJPE \(\downarrow\) & PVE \(\downarrow\) & PA-MPJPE \(\downarrow\) & MPJPE \(\downarrow\) & PCK \(\uparrow\) & AUC \(\uparrow\) & MPJPE \(\downarrow\) \\ \hline HMR (Kanazawa et al., 2018) & 81.3 & 130.0 & - & 56.8 & 88.0 & 72.9 & 36.5 & 124.2 \\ CMR (Kolotouros et al., 2019) & 70.2 & - & - & 50.1 & - & - & - & - \\ Pavlakos et al. (2018) & - & - & - & 75.9 & - & - & - & - \\ Arnab et al. (2019) & 72.2 & - & - & 54.3 & 77.8 & - & - & - \\ SPIN (Kolotouros et al., 2019) & 59.2 & 96.9 & 116.4 & 41.1 & - & 76.4 & 37.1 & 105.2 \\ IZL (Moon and Lee, 2020)* & 58.6 & 93.2 & - & 41.7 & 55.7 & - & - & - \\ Mesh Graphormer (Lin et al., 2021) & 45.6 & 74.7 & 87.7 & 41.2 & 34.5 & - & - & - \\ PARE (Kocabas et al., 2021) w. 3DPW & 46.4 & 74.7 & 87.7 & - & - & - & - \\ \hline HybrIK (Li et al., 2021) & 48.8 & 80.0 & 94.5 & 34.5 & 54.4 & 86.2 & 42.2 & 91.0 \\ Ours (HybrIK-Transformer ResNet-34) & 47.6 & 75.7 & 91.4 & 34.9 & 52.5 & 88.3 & 48.4 & 88.3 \\ Ours (HybrIK-Transformer HrNet-48) & **43.4** & **73.6** & **87.3** & **29.8** & **48.8** & **89.3** & **49.0** & **85.2** \\ \hline HybrIK (Li et al., 2021) w. 3DPW & 45.0 & 74.1 & 86.5 & 33.6 & 55.4 & 87.5 & 46.9 & 93.9 \\ Ours (HybrIK-Transformer ResNet-34) w. 3DPW & 46.0 & 74.9 & 88.1 & 34.6 & 50.2 & 86.7 & 47.6 & 90.2 \\ Ours (HybrIK-Transformer HrNet-48) w. 3DPW & **42.3** & **71.6** & **83.6** & **29.5** & **47.5** & **88.6** & **48.9** & **86.2** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Benchmark of state-of-the-art models on 3DPW, Human3.6M and MPI-INF-3DHP datasets. *-* denotes the method is trained on different datasets. *-* shows the results that are not available.
\(\beta\). Each template consists of the sum of the corresponding joint embedding and the output type embedding. There are 24 joint embeddings for each SMPL joint and 3 output type embeddings: 3D keypoint, twist angle, SMPL shape.
Finally, decode 3D outputs from encoded 2D inputs and encoded 3D templates:
\[\mathbf{e}_{\ell}^{\text{3d}} =\text{MHA}_{\ell}^{\text{2d-3d}}(\mathbf{e}_{\ell}^{\text{3d-t}},\mathbf{e}_{\ell}^{\text{2d}},\mathbf{e}_{\ell}^{\text{2d}}),\] \[\mathbf{e}_{\ell}^{\text{3d}} =\text{ReLU}(\text{LayerNorm}_{\ell}^{\text{2d-3d}}(\mathbf{e}_{\ell}^{\text{3d}}+\mathbf{e}_{\ell}^{\text{3d-t}})),\] \[\mathbf{e}_{\ell}^{\text{3d}} =\text{FFN}_{\ell}^{\text{3d}}(\mathbf{e}_{\ell}^{\text{3d}}).\]
The transformer output \(\mathbf{e}_{L}^{\text{3d}}\) is a \(48\times d\) tensor. Its leading 24 elements are linearly projected into 3D joint keypoints, the next 23 positions are linearly projected into the cosine and sine of the twist angle, and the final element is linearly projected into the 10-dimensional SMPL shape \(\beta\).
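The corresponding output projections can be sketched as follows; the head names and the use of a separate linear layer per output type are illustrative choices.

```python
import torch.nn as nn

class OutputHeads(nn.Module):
    """Project the (B, 48, d) transformer output into keypoints, twists and shape."""
    def __init__(self, d=512):
        super().__init__()
        self.kpt_head = nn.Linear(d, 3)      # leading 24 tokens -> 3D joint coordinates
        self.twist_head = nn.Linear(d, 2)    # next 23 tokens -> (cos, sin) of the twist angle
        self.shape_head = nn.Linear(d, 10)   # final token -> SMPL shape betas

    def forward(self, e3d):                  # e3d: (B, 48, d)
        kpts = self.kpt_head(e3d[:, :24])            # (B, 24, 3)
        twists = self.twist_head(e3d[:, 24:47])      # (B, 23, 2)
        betas = self.shape_head(e3d[:, 47])          # (B, 10)
        return kpts, twists, betas
```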
A small note on training methodology is in order. At train time we employ augmentation of \(\mathbf{e}_{0}^{\text{2d}}\) by sampling a subset of ResNet patches to pass to the transformer input. We implement it using multinomial sampling without replacement, guided by a randomly generated integer specifying the number of patches to retain in each batch. At inference time, all ResNet patches are passed to the transformer input. We observed a small but consistent positive regularizing effect resulting from this augmentation.
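A sketch of this train-time patch subsampling is shown below, assuming flattened backbone features of shape (B, P, d) and a hypothetical lower bound on the number of retained patches.

```python
import torch

def subsample_patches(feats, min_keep=16):
    """Randomly keep n of the P flattened backbone patches, with n drawn per batch."""
    B, P, d = feats.shape
    n = torch.randint(min_keep, P + 1, (1,)).item()            # random number of patches to keep
    weights = torch.ones(B, P, device=feats.device)            # uniform sampling weights
    idx = torch.multinomial(weights, n, replacement=False)     # (B, n) indices, no replacement
    return torch.gather(feats, 1, idx.unsqueeze(-1).expand(B, n, d))
```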
## 2. Results
The following datasets are used in our empirical studies. We use the same protocols and dataset preprocessing as Li et al. (2021). **MSCOCO**(Lin et al., 2014) is a large-scale in-the-wild 2D human pose dataset used only for training. **3DPW**(von Marcard et al., 2018) is a challenging outdoor benchmark for 3D pose and shape estimation used both for training and evaluation. **MPI-INF-3DHP**(Mehta, 2017) consists of both constrained indoor and complex outdoor multi-camera 3D scenes. We use its train set for training and evaluate on its test set. **Human3.6M**(Ionescu et al., 2011) is an indoor 3D pose estimation benchmark used for training and evaluation. We use 5 subjects (S1, S5, S6, S7, S8) for training and 2 subjects (S9, S11) for evaluation.
**Key accuracy results** are presented in Table 1. Specifically, comparing our results against the original HybrIK, we see consistent improvement when trained on **MSCOCO**, **Human3.6M**, and **MPI-INF-3DHP**. The results of our algorithm when trained on these datasets plus **3DPW** are more mixed (which is also the case for the original HybrIK -- some of its metrics decline when trained with **3DPW**), but we see that the inclusion of **3DPW** improves our results on **3DPW** itself, which is intuitively plausible.
**Computational efficiency**. Based on the ResNet-34 backbone, the proposed architecture fits in 28GB of GPU memory in total on 4 NVIDIA M40 GPUs with a batch size of 256 (64 per GPU). The training speed is 63 min per epoch. The original HybrIK architecture in the same setup occupies 60GB of GPU memory in total and trains at 82 min per epoch. This is less than half the GPU memory footprint and a 23% training speed improvement. Moreover, HybrIK-Transformer fits on 2 NVIDIA P100 GPUs, occupying 24GB of GPU memory in total with a 2x128 batch size, whereas the original HybrIK does not fit this hardware setup. So, the proposed modification can be trained in some very resource-limited setups, unlike the original HybrIK architecture.
**Hyperparameter settings** used to report the results in Table 1 are as follows. The architecture is implemented in PyTorch. We train for 200 epochs using the Adam optimizer and an inverse square root learning rate schedule with a maximum learning rate of 0.0005 and a warmup period of 4000 batches. The models of the last 10 epochs are averaged and the results of this average model are reported in Table 1. Following (Li et al., 2021) we use a ResNet-34 (He et al., 2016) backbone initialized with ImageNet pre-trained weights, the analytical IK pass converting 3D keypoint predictions and twist angles into the full pose, and the root joint prediction from RootNet (Moon et al., 2019). The Transformer part uses \(L=6\) blocks, \(h=8\) heads, model width \(d=512\), dropout 0.1, and an FFN with 3 layers, all of width \(d=512\). We also trained and evaluated a variant of the architecture based on the HrNet-48 backbone (Sun et al., 2019). To integrate the HrNet backbone with the transformer, we discard HrNet's mesh generation step as well as its average pooling. Thus, we take only the \(8\times 8\times 2048\) raw features obtained from HrNet's final feature layer and feed them directly into the transformer, exactly the way we do with the output of the ResNet-34 backbone. The HrNet variant is trained with batch size \(4\times 56\) to fit into 4 GPUs.
|
2304.11949 | Geometric Relational Embeddings: A Survey | Geometric relational embeddings map relational data as geometric objects that
combine vector information suitable for machine learning and
structured/relational information for structured/relational reasoning,
typically in low dimensions. Their preservation of relational structures and
their appealing properties and interpretability have led to their uptake for
tasks such as knowledge graph completion, ontology and hierarchy reasoning,
logical query answering, and hierarchical multi-label classification. We survey
methods that underly geometric relational embeddings and categorize them based
on (i) the embedding geometries that are used to represent the data; and (ii)
the relational reasoning tasks that they aim to improve. We identify the
desired properties (i.e., inductive biases) of each kind of embedding and
discuss some potential future work. | Bo Xiong, Mojtaba Nayyeri, Ming Jin, Yunjie He, Michael Cochez, Shirui Pan, Steffen Staab | 2023-04-24T09:33:30Z | http://arxiv.org/abs/2304.11949v1 | # Geometric Relational Embeddings: A Survey+
###### Abstract
Geometric relational embeddings map relational data as geometric objects that combine vector information suitable for machine learning and structured/relational information for structured/relational reasoning, typically in low dimensions. Their preservation of relational structures and their appealing properties and interpretability have led to their uptake for tasks such as knowledge graph completion, ontology and hierarchy reasoning, logical query answering, and hierarchical multi-label classification. We survey methods that underly geometric relational embeddings and categorize them based on (i) the embedding geometries that are used to represent the data; and (ii) the relational reasoning tasks that they aim to improve. We identify the desired properties (i.e., inductive biases) of each kind of embedding and discuss some potential future work.
## 1 Introduction
Vector embeddings map objects to a low-dimensional vector space. Vector embeddings have been developed to learn representations of objects that allow for distinguishing relevant differences and ignoring irrelevant variations between objects. When vector embeddings are used to embed structural/relational data, they fail to represent key properties of relational data that cannot be easily modeled in a plain, low-dimensional vector space. For example, relational data may have been defined by applying set operators such as set inclusion and exclusion [21], logical operations such as negation [14], or they may exhibit relational patterns like the symmetry of relations [1] and structural patterns (e.g., trees and cycles) [15, 21].
Going beyond plain vector embeddings, _geometric relational embeddings_ replace the vector representations with more advanced geometric objects, such as convex regions [17, 16, 20], density functions [23, 15], elements of hyperbolic manifolds [15, 16], and their combinations [20]. Geometric relational embeddings provide a rich geometric inductive bias for modeling relational/structured data. For example, embedding objects as convex regions allows for modeling not only similarity but also set-based and logical operators such as set inclusion, set intersection [20] and logical negation [20] (cf. Fig. 1) while representing data on non-Euclidean manifolds allows for capturing complex structural patterns, such as representing hierarchies in hyperbolic space [15].
Geometric relational embeddings have been successfully applied in many relational reasoning tasks including but not limited to knowledge graph (KG) completion [1, 21], ontology/hierarchy reasoning [22, 23], hierarchical multi-label classification (HMC) [24, 25], and logical query answering [16]. Different downstream applications require varying capabilities from underlying embeddings and, hence, appropriate choice requires a sufficiently precise understanding of embeddings' characteristics and task requirements.
In this paper, we identify the desired capabilities of geometric relational embeddings on different relational reasoning tasks. We conduct a comprehensive survey on the methods of geometric relational embeddings. The methods are classified based on the underlying geometries used to represent objects as well as the targeted relational reasoning tasks. Fig. 2 shows a taxonomy of these methods. We systematically discuss and compare the inductive biases of different embeddings and discuss the applications in four relational reasoning tasks: KG
Figure 1: An illustration of region-based embeddings. (a) Ball embedding represents concepts as \(n\)-ball allowing for modeling set inclusion; (b) Box embedding represents concepts as boxes that further support intersectional closure; (c) Cone embedding represents concepts as cones that support intersectional and negation closures.
completion, ontology and hierarchy completion, hierarchical multi-label classification, and logical query answering.
## 2 Preliminaries and Problem Definition
We describe input relational data as a directed edge-labeled graph or multi-relational graph [1].
**Definition 1** (directed edge-labeled graph).: _A directed edge-labeled graph is a tuple \(G=(V,E,L)\), where \(V\subseteq\textbf{Con}\) is a set of nodes, \(L\subseteq\textbf{Con}\) is a set of edge labels, and \(E\subseteq V\times L\times V\) is a set of edges, where **Con** denote a countably infinite set of constants._
Such data structures can be used to represent a knowledge graph. For example, in a KG about proteins, nodes represent proteins and edges represent (binary) interactions between proteins. Using reserved keywords, such as those defined for RDFS or OWL [13], directed edge-labeled graphs can also be used to define ontologies. E.g., in the gene ontology, nodes represent gene functional concepts and edges represent (binary) relations between them. Ontology-based KGs comprise facts about entities and complex concept definitions.
Classical logical reasoning on structured knowledge remains unchallenged when it comes to sound and complete deduction of new facts from given, consistent relational data. Geometric relational embeddings operate on the premise that relational data is incomplete and some missing facts must be derived by induction or transduction. To this end, all geometric relational embeddings establish principles of analogy. Embeddings of relations are sought such that all the pairs of nodes in a relation are located in an analogous manner to each other in geometric space. Then, key functional capabilities of geometric relational embeddings include querying for similar nodes and the completion of triple patterns in the absence of directly matching triples. If a triple \((h,r,t)\notin E\) is plausible, then queries in the form of the triple patterns \((h,r,?t)\) or \((h,?r,t)\) should bind the query variable \(?t\) to \(t\) or the query variable \(?r\) to \(r\), respectively.
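As a toy illustration with the protein example above, the exact-match version of such triple-pattern queries over a directed edge-labeled graph can be written as follows; a geometric relational embedding would instead rank candidate bindings by plausibility so that missing but plausible triples are also completed.

```python
def answer_pattern(edges, h=None, r=None, t=None):
    """Bind the variables of a triple pattern such as (h, r, ?t); None marks a variable."""
    return {(hh, rr, tt) for (hh, rr, tt) in edges
            if (h is None or hh == h) and (r is None or rr == r) and (t is None or tt == t)}

edges = {("p53", "interacts_with", "MDM2"), ("MDM2", "regulates", "p53")}
print(answer_pattern(edges, h="p53", r="interacts_with"))   # binds ?t to MDM2
```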
Desiderata.The key functional capability of a geometric relational embedding model is based on the proper judgement of plausibility, whose quality benefits if relational properties and patterns can be represented geometrically in the embedding space as follows: 1) **similarity:** an embedding model should produce similar embeddings for similar entities/relations; 2) **uncertainty:** an embedding model should capture the _degree of truthfulness_; 3) **set theory:** an embedding model should capture set-theoretical semantics including set inclusion, set exclusion, set overlap, and set difference; 4) **logical operations:** an embedding model should capture logical connectives such as logical conjunction, negation, and union; 5) **relational patterns:** an embedding model should capture relational patterns such as symmetry, inversion, composition, and transitivity; and 6) **structural patterns:** an embedding model should capture structural/topological patterns such as trees, cycles, and their combinations. The desiderata differ w.r.t. different relational reasoning tasks. For example, logical operations only matter when dealing with ontologies and with queries in which conjunction and negation might occur. Table 1 summarizes the desiderata of each relational reasoning task.
| | properties | hierarchy | KG | ontology | HMC | query |
| --- | --- | --- | --- | --- | --- | --- |
| similarity | | ✓ | ✓ | ✓ | ✓ | ✓ |
| uncertainty | | ✓ | ✓ | ✓ | ✓ | ✓ |
| set theory | inclusion | × | × | ✓ | ✓ | ✓ |
| | exclusion | × | × | ✓ | ✓ | ✓ |
| | overlap | × | × | ✓ | ✓ | ✓ |
| | difference | × | × | ✓ | ✓ | ✓ |
| logical | intersection | × | × | ✓ | × | ✓ |
| | union | × | × | ✓ | × | ✓ |
| | negation | × | × | ✓ | × | ✓ |
| relational | symmetry | × | ✓ | ✓ | × | ✓ |
| | inversion | × | ✓ | ✓ | × | ✓ |
| | composition | × | ✓ | ✓ | × | ✓ |
| | transitivity | ✓ | ✓ | ✓ | ✓ | ✓ |
| structural | cycles | × | ✓ | × | × | ✓ |
| | trees | ✓ | ✓ | ✓ | ✓ | ✓ |
| | tree-cycle | ✓ | ✓ | ✓ | ✓ | ✓ |

Table 1: The desiderata of different relational reasoning tasks.
Figure 2: A systematic categorization of geometric relational embeddings. The methods for geometric relational embeddings can be classified into distribution-based, region-based, and manifold-based. Each two of them can be combined and form a hybrid method, e.g., the hyperbolic cone is a combination of hyperbolic (manifold-based) and cone embedding (region-based).
## 3 Geometric Relational Embeddings
As Table 2 summarizes, the methodologies can be divided into four categories based on the underlying geometries.
### Distribution-based Embeddings
Probability distributions provide a rich geometry of the latent space. Their density can be interpreted as soft regions and it allows us to model uncertainty, asymmetry, set inclusion/exclusion, entailment, and so on.
Gaussian embeddings.Word2Gauss [22] maps words to multi-dimensional Gaussian distributions over a latent embedding space such that the linguistic properties of the words are captured by the relationships between the distributions. A Gaussian \(\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\) is parameterized by a mean vector \(\mathbf{\mu}\) and a covariance matrix \(\mathbf{\Sigma}\) (usually a diagonal matrix for the sake of computing efficiency). The model can be optimized by an energy function \(-E\left(\mathcal{N}_{i},\mathcal{N}_{j}\right)\) that is equivalent to the KL-divergence \(D_{\mathrm{KL}}\left(\mathcal{N}_{j}\|\mathcal{N}_{i}\right)\) defined as
\[D_{\mathrm{KL}}\left(\mathcal{N}_{j}\|\mathcal{N}_{i}\right)=\int_{x\in\mathbb{ R}^{n}}\mathcal{N}\left(x;\mu_{i},\Sigma_{i}\right)\log\frac{\mathcal{N}\left(x; \mu_{j},\Sigma_{j}\right)}{\mathcal{N}\left(x;\mu_{i},\Sigma_{i}\right)}dx. \tag{1}\]
KG2E [11] extends this idea to KG embedding by mapping entities and relations to Gaussians. Given a fact \((h,r,t)\), the scoring function is defined as \(f(h,r,t)=\frac{1}{2}\left(\mathcal{D}_{\mathrm{KL}}\left(\mathcal{N}_{h}, \mathcal{N}_{r}\right)+\mathcal{D}_{\mathrm{KL}}\left(\mathcal{N}_{r},\mathcal{ N}_{t}\right)\right)\). The covariances of entity and relation embeddings allow us to model uncertainties in KGs, while scoring triples with the (asymmetric) KL-divergence allows us to capture asymmetry. TransG [20] generalizes KG2E to a Gaussian mixture distribution to deal with the multiple relation semantics revealed by entity pairs. For example, the relation HasPart has at least two latent semantics: composition-related, as in (_Table_, _HasPart_, _Leg_), and location-related, as in (_Atlantics_, _HasPart_, _NewYorkBay_).
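As a rough illustration of the energy described above, the sketch below computes the closed-form KL divergence between diagonal Gaussians and a KG2E-style score for a triple. The embeddings are randomly initialised stand-ins for learned parameters and the dimensionality is a toy value, so this is only a schematic rendering, not the authors' implementation.

```python
import numpy as np

d = 4  # toy embedding dimensionality
rng = np.random.default_rng(0)

def kl_diag_gauss(mu_j, var_j, mu_i, var_i):
    """KL( N_j || N_i ) for Gaussians with diagonal covariances, in closed form."""
    return 0.5 * (np.sum(var_j / var_i)
                  + np.sum((mu_i - mu_j) ** 2 / var_i)
                  - d
                  + np.sum(np.log(var_i) - np.log(var_j)))

# Randomly initialised stand-ins for learned entity/relation Gaussians.
mu_h, mu_r, mu_t = (rng.normal(size=d) for _ in range(3))
var_h, var_r, var_t = (rng.uniform(0.5, 1.5, size=d) for _ in range(3))

# KG2E-style score for (h, r, t): compare the "translation" Gaussian h - t
# (mean mu_h - mu_t, variance var_h + var_t) against the relation Gaussian r.
mu_e, var_e = mu_h - mu_t, var_h + var_t
print(kl_diag_gauss(mu_e, var_e, mu_r, var_r))
```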
Dirichlet embeddings.DiriE [13] embeds entities as Dirichlet distributions \(f(\mathbf{x}|\mathbf{\alpha})=\frac{1}{\mathrm{B}(\mathbf{\alpha})}\prod_{i=1}^{d}x_{i}^{ \alpha_{i}-1}\) where \(\mathbf{\alpha}=(\alpha_{1},\alpha_{2},\cdots,\alpha_{d})>0\)
| Category | Method | Embedding | Task | Institute | Venue | Year |
| --- | --- | --- | --- | --- | --- | --- |
| Distribution-based | KG2E [11] | Gauss | KG | CAS | CIKM | 2015 |
| | TransG [20] | Gauss | KG | Tsinghua | ACL | 2016 |
| | HCGE [21] | Gauss | HMC | Sorbonne | ECML PKDD | 2016 |
| | PERM [20] | Gauss | query | Virginia Tech | NeurIPS | 2021 |
| | BetaE [21] | Beta | query | Stanford | NeurIPS | 2020 |
| | DiriE [13] | Dirichlet | KG | BUPT | WWW | 2022 |
| | GammaE [14] | Gamma | query | OPPO | EMNLP | 2022 |
| Region-based | ELEm [22] | Ball | ontology | KAUST | IJCAI | 2019 |
| | – | Ball | hierarchy | U. Bonn | ICLR | 2019 |
| | Box Lattice [22] | Box | hierarchy | UMass | ACL | 2018 |
| | Joint Box [20] | Box | hierarchy | UMass | AKBC | 2020 |
| | BEUrRE [2] | Box | KG | UCLA | NAACL-HLT | 2021 |
| | BoxE [2] | Box | KG | Oxford | NeurIPS | 2021 |
| | BoxType [2] | Box | HMC | UT Austin | ACL | 2021 |
| | Query2Box [2] | Box | query | Stanford | ICLR | 2019 |
| | MBM [20] | Box | HMC | UMass | ICLR | 2022 |
| | BoxEL [20] | Box | ontology | USTUT | ISWC | 2022 |
| | BC-BOX [14] | Box | hierarchy | UMass | NeurIPS | 2022 |
| | OE [21] | Cone | hierarchy | U. Toronto | ICLR | 2016 |
| | Cone Semantic [21] | Cone | ontology | U. Lübeck | IJCAI | 2021 |
| | ConE [14] | Cone | query | CAS | NeurIPS | 2021 |
| | ExpressivE [20] | Polygon | KG | TU Vienna | ICLR | 2023 |
| Manifold-based | 5*E [21] | Spherical | KG | U. Bonn | AAAI | 2020 |
| | MuRS [13] | Spherical | KG | UIUC | WWW | 2021 |
| | MuRP [15] | Hyperbolic | KG | Edinburgh | NeurIPS | 2019 |
| | AttH [1] | Hyperbolic | KG | Stanford | ACL | 2020 |
| | HyperML [20] | Hyperbolic | HMC | BJTU | AAAI | 2020 |
| | HBE [21] | Hyperbolic | KG | Southeast | EMNLP | 2021 |
| | HyboNet [20] | Hyperbolic | KG | Tsinghua | ACL | 2022 |
| | HMI [20] | Hyperbolic | HMC | USTUT | NeurIPS | 2022 |
| | MuRMP [13] | Mixed | KG | UIUC | WWW | 2021 |
| | GIE [20] | Mixed | KG | CAS | EMNLP | 2022 |
| | UltraE [20] | Mixed | KG | USTUT | KDD | 2022 |
| | DGS [21] | Mixed | KG | UCLA | KDD | 2022 |
| Hybrid | Smooth Box [21] | Gauss+Box | hierarchy | UMass | ICLR | 2019 |
| | Gumbel Box [20] | Gumbel+Box | hierarchy | UMass | NeurIPS | 2020 |
| | HWN [20] | Hyperbolic+Gauss | hierarchy | U. Tokyo | ICML | 2019 |
| | RoHWN [20] | Hyperbolic+Gauss | hierarchy | POSTECH | NeurIPS | 2022 |
| | HypDisk [21] | Hyperbolic+Ball | hierarchy | LAPRA | ICML | 2019 |
| | HypE [20] | Hyperbolic+Box | query | Virginia Tech | WWW | 2021 |
| | HypCone [1] | Hyperbolic+Cone | hierarchy | ETH | ICML | 2018 |
| | ConE [1] | Hyperbolic+Cone | KG | Tsinghua | NeurIPS | 2021 |
| | [20] | Hyperbolic+Cone | HMC | ETH | CVPR | 2020 |
| | PolygonE [21] | Hyperbolic+Polygon | KG | CAS | AAAI | 2022 |

Table 2: Summary of the geometric relational embedding methods. HMC means hierarchical multi-label classification.
is the distribution parameter, \(\mathrm{B}(\mathbf{\alpha})=\frac{\prod_{i=1}^{d}\Gamma(\alpha_{i})}{\Gamma\left( \sum_{i=1}^{d}\alpha_{i}\right)}\) and \(\Gamma(\alpha)=\int_{0}^{\infty}t^{\alpha-1}e^{-t}dt\) are the multivariate beta function and Gamma function, respectively. DiriE embeds relations as multinomial distributions \(f(\mathbf{x}|\mathbf{\mu})=\frac{\Gamma\left(\sum_{i}\mu_{i}+1\right)}{\prod_{i}\Gamma \left(\mu_{i}+1\right)}\prod_{i=1}^{k}x_{i}^{\mu_{i}}\) where \(\mu_{i}\in\mathbb{R}^{+}\)is the distribution parameter. Given a triple \((h,r,t)\), since the Dirichlet distribution is a conjugate prior of the multinomial distribution and according to Bayesian inference, DiriE views the head entity as a prior distribution, the tail entity as a posterior distribution and the relation as a likelihood function that transforms the prior to the posterior distribution, i.e., \(q_{t}\propto b_{r}p_{h}\) and \(q_{h}\propto b_{r^{-1}}p_{t}\), where \(p\) and \(q\) are PDFs of Dirichlet distributions, and \(b\) is PMF of the multinomial distribution. Due to the flexibility of the multinomial distributions of relation modeling, DiriE is able to model various relational patterns, including symmetry, inversion and composition, e.g., a symmetric relation can be modeled by learning \(b_{r}\) as \(b_{r^{-1}}\).
Beta embeddings.Logical query answering includes logical connectives such as negation, which can be properly modeled by distributions. BetaE [11] embeds entities and queries as multivariate Beta distributions \(f(\mathbf{x}|\mathbf{\alpha},\mathbf{\beta})=\frac{\mathbf{x}^{\mathbf{\alpha}-1}(1-\mathbf{x})^{\mathbf{\beta}-1}}{\mathrm{B}(\mathbf{\alpha},\mathbf{\beta})}\) and models negation by taking the reciprocal of the parameters \(\alpha\) and \(\beta\). By doing so, BetaE converts regions of high density into low-density areas and vice versa, which resembles the desired property of negation. In addition, a conjunction of entity sets \(S_{1},\ldots,S_{n}\) can be modeled as a weighted product of the involved Beta distributions, i.e., \(f_{\mathrm{inter}}=\frac{1}{Z}\prod f_{S_{i}}^{w_{i}}\), where \(Z\) is a normalization constant and \(w_{1},\ldots,w_{n}\) are the attention weights summing to 1. Such a weighted product of Beta distributions follows the intuition of conjunction, namely that regions of high density in the intersection set should also have high density in all input distributions.
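The following sketch illustrates, with toy one-dimensional parameters of our own choosing, how the two BetaE operators described above can be realised: negation as reciprocal shape parameters, and conjunction as a weighted product of Beta densities (which, for weights summing to 1, is again a Beta distribution with interpolated parameters). The KL-based scoring at the end is a crude numerical approximation, included only to show how an entity embedding could be compared to a query embedding.

```python
import numpy as np
from scipy.stats import beta as beta_dist

# Stand-in (alpha, beta) parameters for two 1-dimensional query embeddings.
q1 = (2.0, 5.0)
q2 = (4.0, 2.0)

def negation(a, b):
    # BetaE negation: reciprocal shape parameters flip high- and low-density regions.
    return 1.0 / a, 1.0 / b

def intersection(params, weights):
    # BetaE conjunction: the weighted product of Beta densities (weights summing to 1)
    # is again a Beta distribution whose parameters are the weighted averages.
    a = sum(w * a_i for (a_i, _), w in zip(params, weights))
    b = sum(w * b_i for (_, b_i), w in zip(params, weights))
    return a, b

q_not1 = negation(*q1)
q_and = intersection([q1, q2], weights=[0.5, 0.5])

def kl_beta(p, q, grid=np.linspace(1e-4, 1 - 1e-4, 2000)):
    """Numerical KL(p || q) between two Beta distributions, used as a plausibility score."""
    fp, fq = beta_dist.pdf(grid, *p), beta_dist.pdf(grid, *q)
    return float(np.trapz(fp * np.log(fp / fq), grid))

print(q_not1, q_and, kl_beta(q1, q_and))
```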
Gamma embeddings.GammaE [10] improves BetaE by embedding entities and queries as Gamma distributions defined as \(f(x|\alpha,\beta)=\frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma(\alpha)}\), where \(x>0\), \(\alpha>0\) is the shape, \(\beta>0\) is the rate, and \(\Gamma(\cdot)\) is the Gamma function. GammaE models intersection as a product of Gamma distributions \(P_{S_{\mathrm{inter}}}=\frac{1}{Z}\prod_{i=1}^{k}P_{S_{i}}^{w_{i}}\) and union as a Gamma mixture model \(P_{S_{\mathrm{union}}}=\sum_{i=1}^{k}\theta_{i}P_{S_{i}}\), where \(\theta_{i}=\frac{\exp\left(P_{S_{i}}\right)}{\sum_{j}\exp\left(P_{S_{j}}\right)}\) and \(P_{S_{i}}=f\left(x|\alpha_{i},\beta_{i}\right)\). Similar to BetaE, GammaE models negation by reversing the shape of the density function, \(N_{P_{S}}=f\left(x|\frac{1}{\alpha},\beta\right)+\epsilon\), where \(P_{S}=f(x|\alpha,\beta)\) and \(\epsilon\in(0,1)\) is the elasticity.
### Region-based Embeddings
Mapping data to convex regions is inspired by the Venn diagram. Region-based embeddings nicely model set-theoretic semantics, which can be used to capture uncertainty [1], logical rules [1], transitive closure [13], logical operations [11], etc. Several convex regions have been explored, including balls, boxes, and cones. Fig. 1 shows examples of the three embeddings.
Ball embeddingsBall embedding associates each object \(w\) with an \(n\)-dimensional ball \(\mathbb{B}_{w}\left(\mathbf{c}_{w},r_{w}\right)\), where \(\mathbf{c}_{w}\) and \(r_{w}\) are the central point and its radius of the ball, respectively. A ball is defined as the set of vectors whose Euclidean distance to \(\mathbf{c}_{w}\) is less than \(r_{w}\): \(\mathbb{B}\left(\mathbf{c}_{w},r_{w}\right)=\left\{\mathbf{p}\,||\,\mathbf{c} _{w}-\mathbf{p}||<r_{w}\right\}\). In ElEm [12], each concept in \(\mathcal{EL}^{++}\) ontologies is represented as an open \(n\)-ball, and subsumption relations between concepts are modeled as ball containment. This explicit modeling of subsumption structure leads to significant improvements in predicting human protein-protein interactions. [10] represents categories with \(n\)-balls while considering tree-structured category information. By embedding subordinate relations between categories as ball containment, it shows promising results on NLP tasks compared to conventional word embeddings.
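A minimal sketch of the containment criterion that ball embeddings rely on: an axiom "sub ⊑ sup" is satisfied when the sub-ball lies entirely inside the sup-ball. The concept names and coordinates below are invented for illustration, and the hinge-style loss is only a schematic rendering of an ELEm-style objective, not the original implementation.

```python
import numpy as np

# Toy n-ball embeddings (centre, radius); values are illustrative, not learned.
balls = {
    "Protein":  (np.array([0.0, 0.0]), 2.0),
    "Enzyme":   (np.array([0.5, 0.3]), 0.8),
    "Chemical": (np.array([5.0, 5.0]), 1.0),
}

def inclusion_loss(sub, sup):
    """Hinge loss for the axiom sub ⊑ sup: zero iff the sub-ball lies entirely
    inside the sup-ball, i.e. ||c_sub - c_sup|| + r_sub <= r_sup."""
    (c1, r1), (c2, r2) = balls[sub], balls[sup]
    return max(0.0, float(np.linalg.norm(c1 - c2)) + r1 - r2)

print(inclusion_loss("Enzyme", "Protein"))    # 0.0 -> subsumption satisfied
print(inclusion_loss("Chemical", "Protein"))  # > 0 -> axiom violated
```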
Box embeddingsBox embeddings represent objects with \(d\)-dimensional rectangles, i.e., a Cartesian product of \(d\) closed intervals denoted by \(\prod_{i=1}^{d}\left[x_{i}^{\mathrm{m}},x_{i}^{\mathrm{M}}\right]\), where \(x_{i}^{\mathrm{m}}<x_{i}^{\mathrm{M}}\), and \(x_{i}^{\mathrm{m}}\) and \(x_{i}^{\mathrm{M}}\) are the lower-left and top-right coordinates of the box [13]. Box embeddings capture anticorrelation and disjoint concepts better than order embeddings. Joint Box [11] improves box embeddings' ability to express multiple hierarchical relations (e.g., hypernymy and meronymy) and proposes joint embedding of these relations in the same subspace to enhance entity characterization, resulting in improved performance. BoxE [1] proposes embedding entities and relations as points and boxes in knowledge bases, improving expressivity by modeling rich logic hierarchies in higher-arity relations. BEUrRE [1] is similar to BoxE but it differs in embedding entities and relations as boxes and affine transformations, respectively, which enables better modeling of marginal and joint entity probabilities. Query2Box [11] shares BoxE's idea, embedding queries as boxes, with answer entities inside, to support a range of querying operations, including disjunctions, in large-scale knowledge graphs. BoxEL [12], ELBE [10] and Box\({}^{2}\)EL [13] propose to capture concept-level knowledge in the description logic \(\mathcal{EL}^{++}\) by modeling concepts and/or roles as axis-aligned boxes such that concept and role inclusions can be geometrically captured.
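The sketch below illustrates the two properties that make boxes attractive: they are closed under intersection, and containment can be read off from volumes. The boxes and the soft containment score are toy constructions of our own, not taken from any specific system discussed above.

```python
import numpy as np

# Toy axis-aligned boxes given by lower-left (m) and top-right (M) corners.
box_animal = (np.array([0.0, 0.0]), np.array([4.0, 4.0]))
box_dog    = (np.array([1.0, 1.0]), np.array([2.0, 2.0]))

def volume(box):
    m, M = box
    return float(np.prod(np.clip(M - m, 0.0, None)))  # empty boxes get volume 0

def intersect(b1, b2):
    """Boxes are closed under intersection: element-wise max of lower corners,
    element-wise min of upper corners."""
    (m1, M1), (m2, M2) = b1, b2
    return np.maximum(m1, m2), np.minimum(M1, M2)

# A soft containment score: vol(dog ∩ animal) / vol(dog) = 1 iff dog-box is inside animal-box.
inter = intersect(box_dog, box_animal)
print(volume(inter) / volume(box_dog))  # -> 1.0
```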
Cone embeddingsCone embedding was first proposed in Order embedding (OE) [10] to represent a partially ordered set (poset). However, this method is restricted to axis-parallel cones and it only captures positive correlations, as any two axis-parallel cones intersect at infinity. Recent cone embeddings formulate cones with additional angle parameters. As one of the main advantages, angular cone embedding has been used to model the negation operator, since the angular cone is closed under negation. For example, negation can be modeled as the polarity of a cone, as used in ALC ontology [10], or as the complement of a cone, as used in ConE for logical queries [10].
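A minimal sketch of the axis-parallel, order-embedding-style criterion mentioned above: "specific is-a general" holds when the general concept's coordinates are coordinate-wise no larger than the specific one's, and a hinge penalty measures the violation. The vectors are illustrative values, not learned embeddings.

```python
import numpy as np

def order_violation(specific, general):
    """Order-embedding penalty for 'specific is-a general': zero iff every coordinate
    of the general concept is <= the corresponding coordinate of the specific one
    (i.e., the specific point lies in the axis-parallel cone rooted at the general one)."""
    return float(np.sum(np.maximum(0.0, general - specific) ** 2))

dog, animal, car = np.array([3.0, 2.0]), np.array([1.0, 1.0]), np.array([0.5, 4.0])
print(order_violation(dog, animal))  # 0.0 -> consistent with 'dog is-a animal'
print(order_violation(dog, car))     # > 0 -> 'dog is-a car' violated
```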
Polygon embeddingsPolygonE [12] models n-ary relational data as gyro-polygons in hyperbolic space, where the entities in a fact are represented as vertices of a gyro-polygon and relations as entity translocation operations. ExpressivE [13] embeds pairs of entities as points and relations as hyper-parallelograms in the virtual triple space \(\mathbb{R}^{2d}\). This more flexible modeling allows not only for capturing a rich set of relational patterns jointly, but also for displaying any supported relational pattern through the spatial relationship of hyper-parallelograms.
### Manifold-based Embeddings
The ability of embedding models to capture complex structural patterns is intrinsically bounded by the volume of the embedding space. This consequently leads to distortion issues when embedding relational data with complex structural patterns. A thread of work tries to mitigate this problem for certain classes of patterns, e.g., hierarchies, by embedding on a non-Euclidean manifold, i.e., a smooth geometric object \(\mathcal{M}\) (e.g., the Poincaré ball) that is locally, but not globally, Euclidean. A Riemannian manifold is a pair \((\mathcal{M},g)\), where \(g\) is a metric tensor used to define the distance between any two points on the manifold. Several methods have been proposed to embed relational data on non-Euclidean manifolds such as Poincaré balls and spheres [23] to model hierarchical and loop structures, respectively.
Hyperbolic embeddingsThe Poincaré ball has been used for modeling hierarchical structures [23]. MuRP [1] models multi-relational data by transforming entity embeddings via Möbius matrix-vector multiplication and addition. To capture both hierarchy and logical patterns, e.g., symmetry and composition, AttH [1] models relation transformations by rotations/reflections and also embeds entities on the Poincaré ball. FieldH and FieldP [20] embed entities on trajectories that lie on hyperbolic manifolds, namely the hyperboloid and the Poincaré ball, to capture heterogeneous patterns formed by a single relation (e.g., a combination of loop and path) besides hierarchy and logical patterns. [20] embeds nodes on the extended Poincaré ball and a polar coordinate system to address numerical issues that arise in previous models when embedding neighboring nodes on the boundary of the ball. HyboNet [3] proposes a fully hyperbolic method that uses Lorentz transformations to overcome the incapability of previous hyperbolic methods of fully exploiting the advantages of hyperbolic space. Poincaré GloVe [17] adapts the GloVe algorithm to learn unsupervised embeddings in a Cartesian product of hyperbolic spaces, which is theoretically connected to Gaussian word embeddings and their Fisher geometry. The application of hyperbolic geometry to entity alignment has been done in [24], where HypKA embeds two graphs on a Poincaré ball and then aligns the corresponding node pairs in the two graphs. HMI [25] targets the problem of structured multi-label classification where the labels are organized under implication and mutual exclusion constraints, and are encoded as Poincaré hyperplanes that work as linear decision boundaries. In addition to the mentioned works, the application of hyperbolic manifolds to temporal knowledge graph embedding has been done in [19], which is an extension of AttH [1] with time-dependent curvature.
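For reference, the sketch below computes the geodesic distance on the Poincaré ball (unit ball, curvature −1), the quantity on which the hyperbolic methods above build their scores; the three toy points illustrate how distances blow up near the boundary, which is what makes the space well suited to trees.

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance on the Poincaré ball (unit ball, curvature -1)."""
    uu, vv, uv = np.dot(u, u), np.dot(v, v), np.dot(u - v, u - v)
    return float(np.arccosh(1.0 + 2.0 * uv / ((1.0 - uu) * (1.0 - vv))))

root, child, leaf = np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([0.95, 0.0])
# The second pair is closer in Euclidean terms but farther in hyperbolic terms,
# because points near the boundary are exponentially far apart.
print(poincare_distance(root, child), poincare_distance(child, leaf))
```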
Spherical embeddingsA spherical manifold \(\mathcal{M}=\{x\in\mathbb{R}^{d}|\ \ \|x\|=1\}\) has been used as an embedding space in several works to model loops in graphs. MuRS [20] models relations as linear transformations on the tangent space of the spherical manifold, together with the spherical distance in the score formulation. A more sophisticated approach is FieldP [23], in which entity embeddings are trajectories on a sphere determined by the relations, allowing more complex patterns to be modeled than with MuRS. 5*E [23] embeds entities as flows on the product space of complex projective lines, which is called the Riemann sphere. The projective transformations then cover translation, rotation, homothety, reflection, and inversion, which are powerful for modeling various structures and patterns such as combinations of two loops, of a loop and a path, and of two connected paths.
Mixed manifold embeddingsRelational data exhibit heterogeneous structures and patterns, where each class of patterns and structures can be modeled efficiently by a particular manifold. Therefore, combining several manifolds for embedding is beneficial and has been addressed in the literature. MuRMP [20] extends MuRP [1] to a product space of spherical, Euclidean, and hyperbolic manifolds. GIE [1] improves on previous mixed models by computing the geometric interaction on the tangent spaces of the Euclidean, spherical and hyperbolic manifolds via an attention mechanism that emphasizes the most relevant geometry, and then projects the obtained vector back to the hyperbolic manifold for score calculation. DGS [22] targets the same problem of structural heterogeneity and utilizes hyperbolic space, spherical space, and an intersecting space in a unified framework for learning embeddings of different portions of two-view KGs (ontology and instance levels) in different geometric spaces. While the reviewed works provide separate spherical, Euclidean, and hyperbolic manifolds and act on Riemannian manifolds, UltraE [25] embeds graphs on the ultrahyperbolic manifold, which is a semi-Riemannian manifold that seamlessly interleaves Euclidean, hyperbolic and spherical manifolds. In this model, relations are modeled by pseudo-orthogonal transformations, which are decomposed into cosine-sine forms covering hyperbolic/circular rotations and reflections. This model can capture heterogeneous structures and logical patterns.
### Hybrid Embeddings
Many methods try to combine the advantages of using distributions/regions with the advantages of using manifolds. Classical box embeddings use a hard volume in the objective function, leading to gradient issues. Soft volume-based methods like the smoothed box [12] and the Gumbel box [26] alleviate this problem by modeling the box via a Gaussian process and a Gumbel process, respectively. To simultaneously model hierarchies and set-theoretic semantics, hyperbolic disk embedding [27] embeds relational objects as disks defined in hyperbolic space. HypE [1], on the other hand, simulates boxes in hyperbolic space, but these are no longer closed under intersection. A successful combination of manifolds and region-based methods is the hyperbolic cone embedding [1] that models relational objects as angular cones in
hyperbolic space. ConE[1] further extends it to logical queries on KGs. To combine the advantages of distribution-based methods and manifolds, HWN [21] and RoHWN [14] generalize Gaussian distributions to hyperbolic space, namely hyperbolic wrapped Gaussian.
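The gradient issue that motivates the smoothed and Gumbel box models can be illustrated schematically as follows: the hard volume of the intersection of two disjoint boxes is exactly zero (no training signal), whereas a softplus-smoothed volume stays positive. This is only a caricature of the idea under our own choice of smoothing; the actual models use Gaussian and Gumbel processes rather than the plain softplus used here.

```python
import numpy as np

def softplus(x, beta=10.0):
    # Smooth approximation of max(0, x); beta controls the sharpness.
    return np.log1p(np.exp(beta * x)) / beta

def hard_volume(m, M):
    return float(np.prod(np.clip(M - m, 0.0, None)))

def soft_volume(m, M, beta=10.0):
    """Softened box volume: side lengths pass through a softplus, so the volume
    (and hence the training signal) never collapses exactly to zero."""
    return float(np.prod(softplus(M - m, beta)))

# Two disjoint toy boxes: their "intersection box" has negative side lengths.
m1, M1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
m2, M2 = np.array([2.0, 2.0]), np.array([3.0, 3.0])
mi, Mi = np.maximum(m1, m2), np.minimum(M1, M2)
print(hard_volume(mi, Mi), soft_volume(mi, Mi))  # 0.0 vs. a small positive value
```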
## 4 Applications in Relational Reasoning
### Knowledge Graph Completion.
To perform KG completion, a KGE model takes a triple pattern \((h,r,?)\) as a query and replaces "?" with an entity \(e\) in the KG to generate a new triple \((h,r,e)\). Table 3 summarizes the characteristics of some prominent geometric KG embeddings.
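Schematically, KG completion with any of the embeddings above boils down to scoring and ranking candidate tails. The sketch below uses randomly initialised embeddings and a simple translational distance purely as a placeholder scorer (not one of the methods surveyed here); any of the scoring functions discussed in this section could be plugged in instead.

```python
import numpy as np

rng = np.random.default_rng(0)
entities = ["Paris", "France", "Berlin", "Germany"]
E = {e: rng.normal(size=8) for e in entities}   # stand-in entity embeddings
R = {"capital_of": rng.normal(size=8)}          # stand-in relation embedding

def score(h, r, t):
    # Placeholder plausibility score: negative translational distance.
    return -np.linalg.norm(E[h] + R[r] - E[t])

def complete(h, r):
    """Answer the triple pattern (h, r, ?): rank every entity by its score."""
    return sorted(entities, key=lambda t: score(h, r, t), reverse=True)

print(complete("Paris", "capital_of"))
```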
To capture the uncertainty in a KG, KG2E [1] embeds each entity and relation as a Gaussian distribution parameterized by a mean vector and diagonal covariance matrix. KG2E measures the plausibility of the triple by \(\mathcal{D}_{\mathcal{K}\mathcal{L}}\left(\mathcal{P}_{e},\mathcal{P}_{r}\right)\), where \(\mathcal{P}_{e}\sim\mathcal{N}\left(\boldsymbol{\mu}_{h}-\boldsymbol{\mu}_{t},\boldsymbol{\Sigma}_{h}+\boldsymbol{\Sigma}_{t}\right)\) and \(\mathcal{D}_{\mathcal{K}\mathcal{L}}\) is the KL divergence. TransG [21] generalizes the idea of KG2E to a mixture of Gaussian distribution to deal with the issue of multiple relation semantics. DiriE models entities as Dirichlet distributions and relations as multinomial distributions, which is able to model relational patterns.
BEUrRE [1] models uncertainty in KGs by using box embeddings, where entities are mapped to boxes (axis-parallel rectangles) and relations are modeled as affine transformations involving a translation of the position of the boxes and a scaling of their size. BoxE, on the other hand, embeds entities as points, and relations as a set of boxes with the number of boxes depending on the arity of the relation. Hence, BoxE is able to model n-ary relations. Besides, the relational boxes spatially characterize the basic logical properties of relations and BoxE is proven to be fully expressive. However, BoxE is not able to capture composition. To further support composition, ExpressivE [22] embeds relations as hyper-parallelograms in the virtual triple space \(\mathbb{R}^{2d}\), where the relational patterns are characterized through the spatial relationship of the hyper-parallelograms. To embed hierarchical structure in KGs, HAKE [23] maps entities into the polar coordinate system to model semantic hierarchies. On the other hand, MuRP [1] embeds KGs into hyperbolic space and significantly reduces the dimensionality. To further embed relational patterns such as symmetry, inversion and composition, AttH [1] represents relations as hyperbolic isometries, including hyperbolic rotations and reflections parameterized by Givens matrices. HyboNet [1] formulates the relational transformation matrices as a Lorentz transformation that is decomposed into a Lorentz rotation and a Lorentz boost. HBE [20] proposes a KGE model with an extended Poincaré ball where the hierarchy-specific parameters are optimized on the polar coordinate system. Recently, some works [14, 21, 22] argue that embedding KGs in a single homogeneous curvature space, be it the zero-curvature Euclidean space, the negatively curved hyperbolic space or the positively curved hyperspherical space, cannot faithfully model the complex structures of KGs. MuRMP [21] proposes to embed KGs into a mixed curvature space, namely a product of Euclidean, spherical and hyperbolic spaces. However, it does not explicitly model the possible interactions between these spaces. GIE [14] combines Euclidean, hyperbolic and hyperspherical spaces and models their interactions by learning these complex spatial structures interactively between the different spaces. DGS [15] further distinguishes the representations of the instance-view entities and the ontology-view concepts, as they have inherently quite different structures (i.e., instance-view entities are organized more cyclically while ontology-view concepts are organized more hierarchically). UltraE [21], on the other hand, considers a pseudo-Riemannian space, namely a pseudo-hyperboloid equipped with a pseudo-Riemannian metric. This is inspired by the fact that the pseudo-hyperboloid generalizes hyperbolic and spherical spaces. Besides, UltraE models relations by hyperbolic and spherical rotations, allowing the inference of complex relation patterns.
### Ontology/Hierarchy Completion.
Ontology completion aims to predict subsumption relations between classes in ontologies whose logical structure is expressed in the ontology axioms. Ontology embedding aims at embedding classes and relations such that the logical structure and the deductive closure can be preserved [15]. ELEm [15] proposes to embed each class and instance as an \(n\)-ball with a learnable radius and models each relation as a translation of the \(n\)-ball. This method is not able to represent intersectional closure, because \(n\)-balls are not closed under intersection, and the translational relation modeling suffers from a key issue, namely that it cannot model relations between classes of different sizes. BoxEL [21] and ELBE [20] both adopt boxes as class embeddings due to their advantage of modeling intersectional closure. One major difference is that ELBE still uses translations as relation embeddings while BoxEL replaces them with affine transformations similar to BEUrRE [1]. Furthermore, different from ELEm and ELBE, which do not distinguish classes and instances, BoxEL embeds entities as points and classes as boxes, which is consistent with the classical conceptual space model where points denote objects and regions denote concepts [1]. Box\({}^{2}\)EL [21] further improves BoxEL and ELBE by embedding roles as boxes, in a similar way as BoxE [1]. All
| Method | uncertainty | relational | set theory | structural |
| --- | --- | --- | --- | --- |
| KG2E | ✓ | ⋆ | ⋆ | × |
| DiriE | ✓ | ⋆ | × | × |
| BEUrRE | ✓ | ⋆ | ⋆ | × |
| BoxE | ✓ | ✓ | ✓ | × |
| ExpressivE | ✓ | ✓ | ✓ | × |
| MuRP | × | ⋆ | × | ⋆ |
| MuRMP | × | – | – | ✓ |
| GIE | × | ⋆ | × | ✓ |
| UltraE | × | ⋆ | × | ✓ |
| DGS | × | ⋆ | × | ✓ |

Table 3: Characteristics of different geometric embeddings for KGs. ⋆ means "partially", – means not available.
these methods are limited to the embedding of ontologies expressed in the lightweight description logic \(\mathcal{EL}^{++}\), where full concept negation is not expressible. For the embedding of ontologies expressed in the \(\mathcal{ALC}\) description logic, where full negation is of interest, [14] embeds each class as an angular convex cone and models negation as the polarity of the cone, which is inspired by Farkas' Lemma [11].
Hierarchy completion can be viewed as a special case of ontology completion where only one hierarchical relation (e.g., _is_a_) exists and the graph can be viewed as a hierarchy, a partially ordered set/lattice, or a directed acyclic graph (DAG). Hence, the modeling of transitivity is an essential part of hierarchy completion. Order embedding [23] represents entities as axis-parallel cones. Probabilistic box embedding [24, 25] extends it to axis-parallel boxes, while [11] further extends it to deal with directed graphs with cycles. The hyperbolic semantic cone [1] combines the advantages of hyperbolic space and cones.
### Hierarchical multi-label classification.
HMC aims to associate each instance with multiple labels that are hierarchically organized in a directed acyclic graph. HCGE, the first geometric embedding model for HMC, models the uncertainty of node representations using Gaussian embeddings and shows improvements on multi-label node classification. Recent works [12] demonstrate that enforcing the output of an HMC model to respect the hierarchy constraint can boost the performance of HMC. To this end, MBM [26] embeds instances and labels as boxes so that the logical structure can be respected, i.e., if an instance has a label then it also has all of the parent labels. Similar to MBM, Box2Type [14] considers a special instance of HMC, fine-grained entity typing, and embeds entity types as boxes. Some methods [3, 25] propose to do HMC in hyperbolic space but they do not consider regions as type embeddings and hence cannot preserve logical structures. HMI [23], on the other hand, embeds each label as a curved hyperbolic hyperplane that corresponds to an enclosing \(n\)-ball and models the relationships between labels as geometric relations between the corresponding \(n\)-balls. HMI is able to preserve logical structure including both implication and exclusion. [10] leverages hyperbolic cones as label embeddings, aiming to preserve the label hierarchy.
### Logical Query Answering.
Answering complex logical queries, for example, _"list the countries that have hosted the Olympic Games but not the World Cup"_, is an important yet challenging task due to KG incompleteness and the difficulty in properly modeling logical operations. Geometric and probabilistic query embedding methods provide a way to tractably handle logical operators in queries and offer excellent computational efficiency. Essentially, existing query embedding models represent logical queries by either geometric shapes or distributions. Logical operations, such as conjunction, disjunction, and negation, are performed as geometric or algebraic manipulations on them. Query2Box [27] is the earliest work that maps each entity to a point and each query to a box. By doing so, this method models the conjunction of two query embeddings as the intersection of two boxes and evaluates the plausibility of each candidate answer based on the distance between the query box embedding and each entity point. Query2Box is limited in modeling negation because negation is not closed in Euclidean space, i.e., the negation of a box embedding in Euclidean space is not a box. BetaE [26] remedies this problem by modeling queries as Beta distributions. It models conjunction by computing the weighted product of Beta distributions, and negation by taking the reciprocal of the shape parameters \(\alpha\) and \(\beta\). Alternatively, ConE [11] models conjunction and negation by representing queries as cones, i.e., the conjunction is the intersection of cones and the negation is the closure-complement. None of the above methods can directly handle disjunctions; they can only transform the entire query into Disjunctive Normal Form (DNF) and perform the disjunction in the last step. GammaE [11] and PERM [12] enclose all logical operations in distributional (Gamma and Gaussian) spaces and allow them to be handled independently. Specifically, PERM encodes entities as multivariate Gaussian distributions and queries as mixtures of Gaussian distributions, while GammaE represents entities and queries using Gamma distributions. Similar to BetaE, these two methods also model the conjunction of query embeddings by the product of the input distributions. However, both GammaE and PERM get rid of the DNF technique and model disjunction as weighted mixtures of Gamma and Gaussian distributions, respectively.
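As an illustration of how a box-based query embedding scores candidate answers, the following sketch implements a Query2Box-style point-to-box distance in which the portion outside the box counts fully and the portion inside is down-weighted by a hyperparameter alpha; the boxes, points, and alpha value are toy choices of our own, not the trained quantities of the original model.

```python
import numpy as np

def query2box_distance(v, box_min, box_max, alpha=0.2):
    """Distance from an entity point v to a query box: the part outside the box
    counts fully, the part inside is down-weighted by alpha."""
    centre = (box_min + box_max) / 2.0
    outside = np.maximum(v - box_max, 0.0) + np.maximum(box_min - v, 0.0)
    inside = centre - np.minimum(box_max, np.maximum(box_min, v))
    return float(np.sum(np.abs(outside)) + alpha * np.sum(np.abs(inside)))

box_min, box_max = np.array([0.0, 0.0]), np.array([2.0, 2.0])
print(query2box_distance(np.array([1.0, 1.0]), box_min, box_max))  # at the centre: 0.0
print(query2box_distance(np.array([3.0, 1.0]), box_min, box_max))  # outside along one axis
```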
## 5 Summary and Future Work
This paper conducts a comprehensive survey of geometric embedding methods for modeling relational data. Methods are systematically classified, discussed, and compared according to the proposed taxonomy. As we discussed, different geometric embeddings provide different inductive biases that are suitable for different tasks. We believe our comparison and analysis will shed light on further applications of geometric embeddings to relational reasoning tasks.
Despite its success, there are still challenges to be solved in this research field. We therefore suggest some promising directions for future research, as follows.
Heterogeneous hierarchiesMost current geometric embedding methods can only encode one hierarchy relation (e.g., _is_a_). However, a real-world KG might simultaneously contain multiple hierarchical relations (e.g., _is_a_ and _has_part_) [26]. An embedding model should be able to distinguish these different hierarchical relations, especially when they are intertwined with each other.
Deep geometric embeddingsMost current geometric relational embeddings are non-parametric models that do not require neural network parameterization. Developing deep architectures for current geometric relational embeddings is an exciting topic of future research.
Learning with symbolic knowledgeIncorporating symbolic knowledge, e.g., logical constraints over labels, into machine learning is in general a very promising direction as it enhances the robustness and interpretability of machine learning models. Geometric embeddings provide rich inductive biases for representing such logical constraints and have great potential for doing this. Besides modeling hierarchical constraints as in HMC, more kinds of logical constraints such as exclusion and intersection equivalence are worth exploring.
|
2310.04565 | Binary Quantification and Dataset Shift: An Experimental Investigation | Quantification is the supervised learning task that consists of training
predictors of the class prevalence values of sets of unlabelled data, and is of
special interest when the labelled data on which the predictor has been trained
and the unlabelled data are not IID, i.e., suffer from dataset shift. To date,
quantification methods have mostly been tested only on a special case of
dataset shift, i.e., prior probability shift; the relationship between
quantification and other types of dataset shift remains, by and large,
unexplored. In this work we carry out an experimental analysis of how current
quantification algorithms behave under different types of dataset shift, in
order to identify limitations of current approaches and hopefully pave the way
for the development of more broadly applicable methods. We do this by proposing
a fine-grained taxonomy of types of dataset shift, by establishing protocols
for the generation of datasets affected by these types of shift, and by testing
existing quantification methods on the datasets thus generated. One finding
that results from this investigation is that many existing quantification
methods that had been found robust to prior probability shift are not
necessarily robust to other types of dataset shift. A second finding is that no
existing quantification method seems to be robust enough to dealing with all
the types of dataset shift we simulate in our experiments. The code needed to
reproduce all our experiments is publicly available at
https://github.com/pglez82/quant_datasetshift. | Pablo González, Alejandro Moreo, Fabrizio Sebastiani | 2023-10-06T20:11:27Z | http://arxiv.org/abs/2310.04565v1 | # Binary Quantification and Dataset Shift:
###### Abstract
Quantification is the supervised learning task that consists of training predictors of the class prevalence values of sets of unlabelled data, and is of special interest when the labelled data on which the predictor has been trained and the unlabelled data are not IID, i.e., suffer from _dataset shift_. To date, quantification methods have mostly been tested only on a special case of dataset shift, i.e., _prior probability shift_; the relationship between quantification and other types of dataset shift remains, by and large, unexplored. In this work we carry out an experimental analysis of how current quantification algorithms behave under different types of dataset shift, in order to identify limitations of current approaches and hopefully pave the way for the development of more broadly applicable methods. We do this by proposing a fine-grained taxonomy of types of dataset shift, by establishing protocols for the generation of datasets affected by these types of shift, and by testing existing quantification methods on the datasets thus generated. One finding that results from this investigation is that many existing quantification methods that had been found robust to prior probability shift are not necessarily robust to other types of dataset shift. A second finding is that no existing quantification method seems to be robust enough to dealing with all the types of dataset shift we simulate in our experiments. The code needed to reproduce all our experiments is publicly available at [https://github.com/pglez82/quant_datasetshift](https://github.com/pglez82/quant_datasetshift).
Since quantification aims at estimating class prevalence, most experimental evaluations of quantification systems (see, e.g., (Barranquero et al., 2015; Bella et al., 2010; Esuli et al., 2018; Forman, 2008; Hassan et al., 2020; Milli et al., 2013; Moreo and Sebastiani, 2022; Perez-Gallego et al., 2019; Schumacher et al., 2021)) have focused on situations characterised by prior probability shift, while the other two types of shift mentioned above have not received comparable attention. A question then naturally arises: _How do existing quantification methods fare when confronted with types of dataset shift other than prior probability shift_?
This paper offers a systematic exploration of the performance of existing quantification methods under different types of dataset shift. To this aim we first propose a fine-grained taxonomy of dataset shift types; in particular, we pay special attention to the case of covariate shift, and identify variants of it (mostly having to do with additional changes in the priors) that we contend to be of special relevance in quantification endeavours, and that are under-studied. We then follow an empirical approach, devising specific experimental protocols for simulating all the types of dataset shift that we have identified, at various degrees of intensity and in a tightly controlled manner. Using the experimental setups generated by means of these protocols, we then test a number of existing quantification methods; here, the ultimate goal we pursue is to better understand the relative merits and limitations of existing quantification algorithms, to understand the conditions under which they tend to perform well, and to identify the situations in which they instead tend to generate unreliable predictions.
The rest of this paper is organised as follows. In Section 2, we discuss previous work on establishing protocols to recreate different types of dataset shift, with special attention to work done in the quantification arena, and the (still scarce) work aimed at drawing connections between quantification and different types of dataset shift. In Section 3, we illustrate our notation and provide definitions of relevant concepts and of the quantification methods we use in this study. Section 4 goes on by introducing formal definitions of the types of shift we investigate. Section 5 illustrates the experimental protocols we propose for simulating the above types of shift, and discusses the results we have obtained by generating datasets via these protocols and using them for testing quantification systems. Section 6 wraps up, summarising our main findings and also pointing to interesting directions for future work.
## 2 Related Work
Since quantification targets the estimation of class frequencies, it is fairly natural that prior probability shift has been, in the related literature, the dominant type of dataset shift on which the robustness of quantification methods has been tested. Indeed, when Forman (2005) first proposed (along with novel quantification methods) to consider quantification as a task in its own right (and proposed "quantification" as the name for this task), he also proposed an
experimental protocol for testing quantification systems. This protocol consisted of generating a number of test samples, to be used for evaluating a quantification method, characterised by prior probability shift. Given a dataset consisting of a set \(L\) of labelled datapoints and a set \(U\) of unlabelled datapoints (both with binary labels), the protocol consists of drawing from \(U\) a number of test samples each characterised by a prevalence value (of the "positive class") lying on a predefined grid (say, \(G=[0.00,0.05,\ldots,0.95,1.00]\)). This protocol has come to be known as the "artificial prevalence protocol" (APP), and has since been at the heart of most empirical evaluations conducted in the quantification literature; see, e.g., (Barranquero et al., 2015; Bella et al., 2010; Moreo and Sebastiani, 2022; Moreo et al., 2021; Schumacher et al., 2021).1 Actually, the protocol proposed by Forman (2005) also simulates different prevalence values in the training set, drawing from \(L\) a number of training samples characterised by prevalence values lying on grid \(G\). In such a way, by systematically varying both the training prevalence _and_ the test prevalence of the positive class across the entire grid, one could subject a quantification method to the widest possible range of scenarios characterised by prior probability shift. Some empirical evaluations conducted nowadays only extract test samples from \(U\), while others extract training samples from \(L\)_and_ test samples from \(U\).
Footnote 1: Although the protocol was originally proposed for binary quantification problems only, an extension to the multiclass regime based on so-called _Kraemer sampling_ was later proposed by Esuli et al. (2022).
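A minimal sketch of the APP is given below: for every prevalence value on a predefined grid, samples of unlabelled items are drawn with exactly that proportion of positives. The sample size, grid, and number of repetitions are illustrative choices, and the labels used here are random stand-ins for a real unlabelled pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def app_samples(y, sample_size=500, grid=np.arange(0.0, 1.01, 0.05), repeats=10):
    """Artificial Prevalence Protocol (sketch): for every prevalence p on the grid,
    repeatedly draw a sample containing round(p * sample_size) positives (label 1)
    and the rest negatives (label 0)."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    for p in grid:
        n_pos = int(round(p * sample_size))
        for _ in range(repeats):
            idx = np.concatenate([
                rng.choice(pos, n_pos, replace=True),
                rng.choice(neg, sample_size - n_pos, replace=True),
            ])
            yield p, rng.permutation(idx)

# Toy usage: random binary labels standing in for the unlabelled pool U.
y_U = rng.integers(0, 2, size=10_000)
p, sample_idx = next(app_samples(y_U))
print(p, float(np.mean(y_U[sample_idx])))  # requested and realised prevalence coincide
```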
The APP has sometimes been criticised (see e.g., (Esuli and Sebastiani, 2015; Hassan et al., 2021)) for generating training-test sample pairs exhibiting "unrealistic" or "implausible" class prevalence values and/or degrees of prior probability shift. For instance, Esuli and Sebastiani (2015) and Gonzalez et al. (2019) indeed renounce using the APP in favour of using datasets containing a large number of timestamped test datapoints, which allows splitting the test data into sizeable enough, temporally coherent chunks, in which the class prevalence values naturally fluctuate over time. However, this practice is rarely used in the literature, since it has to overcome at least three important obstacles: (i) the amount of test samples thus available is often too limited to allow statistically significant conclusions, (ii) datasets with the above characteristics are rare (and expensive to create, if not available), and (iii) the degree of shift which the quantifiers must confront is (as in (Esuli and Sebastiani, 2015)) sometimes limited.
Conversely, the other two types of shift that we have mentioned above (covariate shift and concept shift) have received essentially no attention in the quantification literature. An exception to this includes the theoretical analysis performed in (Tasche, 2022, 2023), and the work on classifier calibration of Card and Smith (2018), both of them having to do with covariate shift. More in general, we are unaware of the existence of specific evaluation protocols for quantification, or quantification methods, that explicitly address covariate shift or concept shift.
Some discussion of protocols for simulating different kinds of prior probability shift can be found in the work of Lipton et al. (2018), who propose
protocols for generating prior probability shift in multiclass datasets. They propose protocols for addressing "knock-out shift", which they define as the shift generated by subsampling a specific class out of the \(n\) classes; "tweak-one shift", that generates samples in which a specific class out of the \(n\) classes has a predefined prevalence value while the rest of the probability mass is evenly distributed across the remaining classes; and "Dirichlet shift", in which a distribution \(P(Y)\) across the classes is picked from a Dirichlet distribution with concentration parameter \(\alpha\), after which samples are drawn according to \(P(Y)\). Other works (Alexandari et al., 2020; Azizzadenesheli et al., 2019; Rabanser et al., 2019) have come to subsequently adopt these protocols. We do not explore "knock-out shift" nor "tweak-one shift" since these sample generation protocols are only meaningful in the multiclass regime, and since we here address the binary case only. The protocol we end up adopting (the APP) is similar in spirit to the "Dirichlet shift" protocol (i.e., both are designed to cover the entire spectrum of legitimate prevalence values), although the APP allows for a tighter control on the test prevalence values being generated.
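For comparison with the APP, the following sketch simulates Dirichlet shift in the binary case: a class distribution is first drawn from a symmetric Dirichlet with concentration parameter alpha, and a sample is then drawn according to it. Smaller values of alpha yield more extreme prevalence values; all concrete numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_shift_sample(y, alpha=1.0, sample_size=500):
    """Dirichlet-shift sketch (binary case): draw P(Y) from a symmetric Dirichlet
    with concentration alpha, then sample items according to it."""
    p_pos = rng.dirichlet([alpha, alpha])[1]
    n_pos = rng.binomial(sample_size, p_pos)
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    idx = np.concatenate([rng.choice(pos, n_pos, replace=True),
                          rng.choice(neg, sample_size - n_pos, replace=True)])
    return p_pos, rng.permutation(idx)

y_U = rng.integers(0, 2, size=10_000)
p_pos, sample_idx = dirichlet_shift_sample(y_U, alpha=0.5)
print(round(float(p_pos), 3), float(np.mean(y_U[sample_idx])))
```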
Using image datasets for their experiments, Rabanser et al. (2019) bring into play (and define protocols for) other types of shift having to do with covariate shift, such as "adversarial shift", in which a fraction of the unlabelled samples are adversarial samples (i.e., images that have been manipulated with the aim of confounding a neural model, by means of modifications that are imperceptible to the human eye); "image shift", in which the unlabelled images result from the application of a series of random transformations (rotation, translation, zoom-in); "Gaussian noise shift", in which Gaussian noise affects a fraction of the unlabelled images; and combinations of all these. We do not explore these types of shift since they are specific to the world of images and computer vision.
Dataset shift has been widely studied in the field of classification in order to support the development of models robust to the presence of shift. In the machine learning literature this problem is also known as _domain adaptation_. For instance, the combination of covariate shift and prior probability shift has recently been studied by Chen et al. (2022), who focus on detecting the presence of shift in the data and on predicting classifier performance on non-IID (a.k.a. "out-of-distribution") unlabelled data. This and other similar works are mostly concerned with improving the performance of a classifier on non-IID unlabelled data (a concern that goes back at least to (Saerens et al., 2002; Vucetic and Obradovic, 2001), and that has given rise to works such as (Alaiz-Rodriguez et al., 2011; Bickel et al., 2009; Chan and Ng, 2006)); in these works, estimating class prevalence in non-IID unlabelled data is merely an intermediate step for calculating the class weights needed for adapting the classifier to these data, and not a primary concern in itself.
As a final note, we should mention that, despite several efforts for unifying the terminology related to dataset shift (see (Moreno-Torres et al., 2012) for an example), this terminology is still somewhat confusing. For example, _prior probability shift_(Storkey, 2009) is sometimes called "distribution drift" (Moreo and Sebastiani, 2022), "class-distribution shift" (Beijbom et al., 2015), "class
prior change" (du Plessis and Sugiyama, 2012; Iyer et al., 2014), "global drift" Hofer and Krempl (2012), "target shift" (Nguyen et al., 2015; Zhang et al., 2013), "label shift" (Alexandari et al., 2020; Azizzadenesheli et al., 2019; Lipton et al., 2018; Rabanser et al., 2019), or "prior shift" (Sipka et al., 2022). The terms "shift" and "drift" are often used interchangeably (in this paper we will stick to the former), although some authors (e.g., Souza et al. (2020)) establish a difference between "concept shift" and "concept drift"; in Section 4.3 we will precisely define what we mean by concept shift. Note also that, until recently, most works in the quantification literature hardly even mentioned (any type of) "shift" or "drift" (despite using an experimental protocol that recreated prior probability shift), certainly due to the fact that the awareness of dataset shift and the problems it entails has become widespread only in recent years.
## 3 Preliminaries
### 3.1 Notation and Definitions
In this paper we restrict our attention to the case of binary quantification, and adopt the following notation. By \(\mathbf{x}\) we indicate a datapoint drawn from a domain \(\mathcal{X}\). By \(y\) we indicate a class drawn from a set \(\mathcal{Y}=\{0,1\}\), which we call the _classification scheme_ (or _codeframe_), and by \(\overline{y}\) we indicate the complement of \(y\) in \(\mathcal{Y}\). Without loss of generality, we assume \(0\) to represent the "negative" class and \(1\) to represent the "positive" class. By \(L\) we denote a collection of \(k\) labelled datapoints \(\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{k}\), where \(\mathbf{x}_{i}\in\mathcal{X}\) is a datapoint and \(y_{i}\in\mathcal{Y}\) is a class label, that we use for training purposes. By \(U\) we instead denote a collection \(\{(\mathbf{x}^{\prime}_{i},y^{\prime}_{i})\}_{i=1}^{k^{\prime}}\) of \(k^{\prime}\) unlabelled datapoints, i.e., datapoints \(\mathbf{x}^{\prime}_{i}\) whose label \(y^{\prime}_{i}\) is unknown, that we typically use for testing purposes. We hereafter refer to \(L\) and \(U\) as "the training set" and "the test set", respectively.
We use symbol \(\sigma\) to denote a _sample_, i.e., a non-empty set of (labelled or unlabelled) datapoints from \(\mathcal{X}\). We use \(p_{\sigma}(y)\) to denote the (true) prevalence of class \(y\) in sample \(\sigma\) (i.e., the fraction of items in \(\sigma\) that belong to \(y\)), and we use \(\hat{p}^{q}_{\sigma}(y)\) to denote the estimate of \(p_{\sigma}(y)\) as computed by a quantification method \(q\); note that \(p_{\sigma}(y)\) is just a shorthand of \(P(Y=y\mid\mathbf{x}\in\sigma)\), where \(P\) indicates probability and \(Y\) is a random variable that ranges on \(\mathcal{Y}\). Since in the binary case it holds that \(p_{\sigma}(y)=1-p_{\sigma}(\overline{y})\), binary quantification reduces to estimating the prevalence of the positive class only. Throughout this paper we will simply write \(p_{\sigma}\) instead of \(p_{\sigma}(1)\), i.e., as a shortcut for the true prevalence of the positive class in sample \(\sigma\); similarly, we will shorten \(\hat{p}_{\sigma}(1)\) as \(\hat{p}_{\sigma}\).
We define a _binary quantifier_ as a function \(q:2^{\mathcal{X}}\rightarrow[0,1]\), i.e., one that acts as a predictor of the prevalence \(p_{\sigma}\) of the positive class in sample \(\sigma\). Quantifiers are generated by means of an inductive learning algorithm trained on \(L\). We take a (binary) _hard classifier_ to be a function \(h:\mathcal{X}\rightarrow\mathcal{Y}\), i.e., a predictor of the class label of a datapoint \(\mathbf{x}\in\mathcal{X}\) which returns \(1\) if \(h\) predicts \(\mathbf{x}\) to belong to the positive class and \(0\) otherwise. Classifier \(h\) is trained by means of an inductive learning algorithm that uses a set \(L\) of labelled datapoints, and
usually returns crisp decisions by thresholding the output of an underlying real-valued decision function \(f\) whose internal parameters have been tuned to fit the training data. Likewise, we take a (binary) _soft classifier_ to be a function \(s:\mathcal{X}\rightarrow[0,1]\), i.e., a function mapping a datapoint \(\mathbf{x}\) into a _posterior probability_ \(s(\mathbf{x})\equiv P(Y=1|X=\mathbf{x})\), which represents the probability that \(s\) subjectively attributes to the fact that \(\mathbf{x}\) belongs to the positive class. Classifier \(s\) is either trained on \(L\) by a probabilistic inductive algorithm, or obtained by _calibrating_ a (possibly non-probabilistic) classifier \(s^{\prime}\) also trained on \(L\).2
Footnote 2: A binary soft classifier \(s\) is said to be _well calibrated_(Flach, 2017) for a given sample \(\sigma\) if, for every \(\alpha\in[0,1]\), it holds that
\[\frac{|\{(\mathbf{x},y)\in\sigma\mid s(\mathbf{x})=\alpha,y=1\}|}{|\{(\mathbf{x},y)\in\sigma\mid s(\mathbf{x})=\alpha\}|}=\alpha \tag{1}\]
Note that calibration is defined with respect to a sample \(\sigma\), which means that a classifier cannot, in general, be well calibrated for two different samples (e.g., for \(L\) and \(U\)) that are affected by prior probability shift.
We take an _evaluation measure_ for binary quantification to be a real-valued function \(D:[0,1]\times[0,1]\rightarrow\mathbb{R}\) which measures the amount of discrepancy between the true distribution and the predicted distribution of \(\mathcal{Y}\) in \(\sigma\); higher values of \(D\) represent higher discrepancy, and the distributions are represented (since we are in the binary case) by the prevalence values of the positive class. In the quantification literature, these measures are typically _divergences_, i.e., functions that, given two distributions \(p^{\prime}\), \(p^{\prime\prime}\), satisfy (i) \(D(p^{\prime},p^{\prime\prime})\geq 0\), and (ii) \(D(p^{\prime},p^{\prime\prime})=0\) if and only if \(p^{\prime}=p^{\prime\prime}\). By \(D(p_{\sigma},\hat{p}_{\sigma}^{q})\) we thus denote the divergence between the true class distribution in sample \(\sigma\) and the estimate of this distribution returned by binary quantifier \(q\).
### 3.2 The IID Assumption, Dataset Shift, and Quantification
One of the main reasons why we study quantification is the fact that most scenarios in which estimating class prevalence values via supervised learning is of interest, _violate the IID assumption_, i.e., the fundamental assumption (that most machine learning endeavours are based on) according to which the labelled datapoints used for training and the unlabelled datapoints we want to issue predictions for, are assumed to be drawn independently and identically from the same (unknown) distribution.3 If the IID assumption were not violated, the supervised class prevalence estimation problem would admit a trivial solution, consisting of returning, as the estimated prevalence \(\hat{p}_{\sigma}^{q}\) for _any_ sample \(\sigma\) of unlabelled datapoints, the true prevalence \(p_{L}\) that characterises
the training set, since both \(L\) and \(\sigma\) would be expected to display the same prevalence values. This "method" is called, in the quantification literature, the _maximum likelihood prevalence estimator_ (MLPE), and is considered a trivial baseline that any genuine quantification system is expected to beat in situations characterised by dataset shift.
We will thus assume the existence of two unknown joint probability distributions \(P_{L}(X,Y)\) and \(P_{U}(X,Y)\) such that \(P_{L}(X,Y)\neq P_{U}(X,Y)\) (the _dataset shift assumption_). The ways in which the training distribution and the test distribution may differ, and the effect these differences can have on the performance of quantification systems, will be the main subject of the following sections.
### 3.3 Quantification Methods
The six quantification methods that we use in the experiments of Section 5 are the following.
_Classify and Count_ (CC), already hinted at in the introduction, is the naive quantification method, and the one that is used as a baseline that all genuine quantification methods are supposed to beat. Given a hard classifier \(h\) and a sample \(\sigma\), CC is formally defined as
\[\hat{p}_{\sigma}^{\text{CC}}=\frac{1}{|\sigma|}\sum_{\mathbf{x}\in\sigma}h( \mathbf{x}) \tag{2}\]
In other words, the prevalence of the positive class is estimated by classifying all the unlabelled datapoints, counting the number of datapoints that have been assigned to the positive class, and dividing the result by the total number of datapoints in the sample.
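A minimal sketch of CC on top of a scikit-learn classifier (the function and variable names are ours) could look as follows.

```
# Classify and Count (CC): classify every datapoint in the sample and return
# the fraction assigned to the positive class (Equation 2).
import numpy as np
from sklearn.linear_model import LogisticRegression

def cc_quantify(h, X_test):
    return float(np.mean(h.predict(X_test) == 1))

# Usage sketch:
# h = LogisticRegression().fit(X_train, y_train)
# p_hat = cc_quantify(h, X_test)
```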
The _Adjusted Classify and Count_ (ACC) method (see (Forman, 2008)) attempts to correct the estimates returned by CC by relying on the law of total probability, according to which, for any \(\mathbf{x}\in\mathcal{X}\), it holds that
\[P(h(\mathbf{x})=1)=P(h(\mathbf{x})=1|Y=1)\cdot p+P(h(\mathbf{x})=1|Y=0)\cdot(1-p) \tag{3}\]
which can be more conveniently rewritten as
\[\hat{p}_{\sigma}^{\text{CC}}\ =\ \text{tpr}_{h}\cdot p_{\sigma}+\text{fpr}_{h} \cdot(1-p_{\sigma}) \tag{4}\]
where \(\text{tpr}_{h}\) and \(\text{fpr}_{h}\) are the true positive rate and the false positive rate, respectively, that \(h\) has on samples of unseen datapoints. From Equation 4 we can obtain
\[p_{\sigma}=\frac{\hat{p}_{\sigma}^{\text{CC}}-\text{fpr}_{h}}{\text{tpr}_{h}- \text{fpr}_{h}} \tag{5}\]
The values of \(\text{tpr}_{h}\) and \(\text{fpr}_{h}\) are unknown, but their estimates \(\hat{\text{tpr}}_{h}\) and \(\hat{\text{fpr}}_{h}\) can be obtained by performing \(k\)-fold cross-validation on the training set \(L\), or by
using a held-out validation set. The ACC method thus consists of estimating \(p_{\sigma}\) by plugging the estimates of tpr and fpr into Equation 5, to obtain
\[\hat{p}_{\sigma}^{\text{ACC}}=\frac{\hat{p}_{\sigma}^{\text{CC}}-\hat{\text{fpr}}_{h}}{\hat{\text{tpr}}_{h}-\hat{\text{fpr}}_{h}} \tag{6}\]
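A sketch of ACC along these lines, with \(\mathrm{tpr}\) and \(\mathrm{fpr}\) estimated via \(k\)-fold cross-validation and the estimate clipped to \([0,1]\), might look as follows; the helper names and the choice of 10 folds are illustrative.

```
# Adjusted Classify and Count (ACC): the rates tpr and fpr are estimated via
# k-fold cross-validation on the training set and plugged into Equation 6.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_predict

def estimate_tpr_fpr(classifier, X_train, y_train, k=10):
    y_pred = cross_val_predict(clone(classifier), X_train, y_train, cv=k)
    tpr = np.mean(y_pred[y_train == 1] == 1)
    fpr = np.mean(y_pred[y_train == 0] == 1)
    return tpr, fpr

def acc_quantify(h, X_test, tpr, fpr):
    p_cc = np.mean(h.predict(X_test) == 1)
    denom = tpr - fpr
    p_acc = (p_cc - fpr) / denom if denom != 0 else p_cc
    # clipping to the legal range [0, 1] is our choice here
    return float(np.clip(p_acc, 0.0, 1.0))
```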
While CC and ACC rely on the crisp counts returned by a hard classifier \(h\), it is possible to define variants of them that use instead the _expected_ counts computed from the posterior probabilities returned by a calibrated probabilistic classifier \(s\)(Bella et al., 2010). This is the core idea behind _Probabilistic Classify and Count_ (PCC) and _Probabilistic Adjusted Classify and Count_ (PACC). PCC is defined as
\[\hat{p}_{\sigma}^{\text{PCC}} =\frac{1}{|\sigma|}\sum_{\mathbf{x}\in\sigma}s(\mathbf{x}) \tag{7}\] \[=\frac{1}{|\sigma|}\sum_{\mathbf{x}\in\sigma}P(Y=1|\mathbf{x})\]
while PACC is defined as
\[\hat{p}_{\sigma}^{\text{PACC}}=\frac{\hat{p}_{\sigma}^{\text{PCC}}-\hat{\text {fpr}}_{s}}{\hat{\text{tpr}}_{s}-\hat{\text{fpr}}_{s}} \tag{8}\]
Equation 8 is identical to Equation 6, but for the fact that the estimate \(\hat{p}_{\sigma}^{\text{CC}}\) is replaced with the estimate \(\hat{p}_{\sigma}^{\text{PCC}}\), and for the fact that the true positive rate and the false positive rate of the probabilistic classifier \(s\) (i.e., the rates computed as expectations using the posterior probabilities) are used in place of their crisp counterparts.
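A sketch of PCC and PACC in the same style (assuming a classifier exposing predict_proba, as in scikit-learn) is given below; as before, the rate estimation via cross-validation and the function names are our own choices.

```
# Probabilistic Classify and Count (PCC) and Probabilistic Adjusted Classify
# and Count (PACC), using the posteriors of a calibrated soft classifier.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_predict

def pcc_quantify(s, X_test):
    posteriors = s.predict_proba(X_test)[:, 1]
    return float(np.mean(posteriors))                      # Equation 7

def estimate_soft_tpr_fpr(classifier, X_train, y_train, k=10):
    post = cross_val_predict(clone(classifier), X_train, y_train,
                             cv=k, method='predict_proba')[:, 1]
    tpr = np.mean(post[y_train == 1])   # rates computed as expectations
    fpr = np.mean(post[y_train == 0])   # over the posterior probabilities
    return tpr, fpr

def pacc_quantify(s, X_test, tpr, fpr):
    p_pcc = pcc_quantify(s, X_test)
    denom = tpr - fpr
    p_pacc = (p_pcc - fpr) / denom if denom != 0 else p_pcc  # Equation 8
    return float(np.clip(p_pacc, 0.0, 1.0))   # clipping is our choice
```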
_Distribution y-Similarity_ (DyS) (Maletzke et al., 2019) is instead a generalisation of the HDy quantification method of Gonzalez-Castro et al. (2013). HDy is a probabilistic binary quantification method that views quantification as the problem of minimising the divergence (measured in terms of the Hellinger Distance, from which the name of the method derives) between two distributions of posterior probabilities returned by a soft classifier \(s\), one coming from the unlabelled examples and the other coming from a validation set. HDy looks for the mixture parameter \(\alpha\) (since we are considering a mixture of two distributions, one of examples of the positive class and one of examples of the negative class) that best fits the validation distribution to the unlabelled distribution, and returns \(\alpha\) as the estimated prevalence of the positive class. Here, robustness to distribution shift is achieved by the analysis of the distribution of the posterior probabilities in the unlabelled set, that reveals how conditions have changed with respect to the training data. DyS generalises HDy by viewing the divergence function to be used as a parameter.
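The mixture-fitting idea can be sketched as follows for HDy (DyS is obtained by swapping the Hellinger distance for another divergence); the grid of candidate \(\alpha\) values and the 10-bin histograms below are illustrative choices, not the exact configuration of the original methods.

```
# Sketch of the mixture-fitting idea behind HDy: find the mixture parameter
# alpha that makes the validation mixture of posteriors most similar (in
# Hellinger distance) to the distribution of posteriors on the test sample.
import numpy as np

def hellinger(p, q):
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2)

def hdy_quantify(post_val_pos, post_val_neg, post_test, n_bins=10):
    """Arguments are posterior probabilities returned by a soft classifier on
    positive validation examples, negative validation examples, and the
    unlabelled (test) examples, respectively."""
    bins = np.linspace(0, 1, n_bins + 1)
    hist = lambda a: np.histogram(a, bins=bins)[0] / max(len(a), 1)
    h_pos, h_neg, h_test = hist(post_val_pos), hist(post_val_neg), hist(post_test)
    alphas = np.linspace(0, 1, 101)
    distances = [hellinger(a * h_pos + (1 - a) * h_neg, h_test) for a in alphas]
    return float(alphas[int(np.argmin(distances))])
```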
A further, very popular aggregative quantification method is the one proposed by Saerens et al. (2002) and often called SLD, from the names of its proposers. SLD was the best performer in a recent data challenge devoted to quantification (Esuli et al., 2022), and consists of training a (calibrated) soft classifier and then using expectation maximisation (Dempster et al., 1977) (i)
to tune the posterior probabilities that the classifier returns, and (ii) to re-estimate the prevalence of the positive class in the unlabelled set. Steps (i) and (ii) are carried out in an iterative, mutually recursive way, until convergence (when the estimated prior gets fairly close to the mean of the recalibrated posteriors).
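The EM loop of SLD can be sketched as follows, assuming the posterior probabilities on the unlabelled sample and the training prevalence (strictly between 0 and 1) are given; the stopping criterion is an illustrative choice.

```
# Sketch of the expectation-maximisation loop behind SLD (Saerens et al.,
# 2002): posteriors are rescaled with the current prevalence estimate, which
# is in turn re-estimated from the rescaled posteriors, until convergence.
import numpy as np

def sld_quantify(posteriors, train_prevalence, max_iter=1000, tol=1e-6):
    """`posteriors` are P(Y=1|x) on the test sample, as estimated by a
    classifier calibrated for the training distribution; assumes
    0 < train_prevalence < 1."""
    p = train_prevalence
    for _ in range(max_iter):
        # E-step: recalibrate the posteriors for the current prior estimate
        num = (p / train_prevalence) * posteriors
        den = num + ((1 - p) / (1 - train_prevalence)) * (1 - posteriors)
        new_posteriors = num / den
        # M-step: re-estimate the prevalence as the mean of the posteriors
        new_p = float(np.mean(new_posteriors))
        if abs(new_p - p) < tol:
            return new_p
        p = new_p
    return p
```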
## 4 Types of Dataset Shift
Any joint probability distribution \(P(X,Y)\) can be factorised, alternatively and equivalently, as:
* \(P(X,Y)=P(X|Y)P(Y)\), in which the marginal distribution \(P(Y)\) is the distribution of the class labels, and the conditional distribution \(P(X|Y)\) is the class-conditional distribution of the covariates. This factorization is convenient in _anti-causal learning_ (i.e., when predicting causes from effects) (Scholkopf et al., 2012), i.e., in _problems of type \(Y\to X\)_(Fawcett and Flach, 2005).
* \(P(X,Y)=P(Y|X)P(X)\), in which the marginal distribution \(P(X)\) is the distribution of the covariates and the conditional distribution \(P(Y|X)\) is the distribution of the labels conditional on the covariates. This factorization is convenient in _causal learning_ (i.e., when predicting effects from causes) (Scholkopf et al., 2012), i.e., in _problems of type \(X\to Y\)_(Fawcett and Flach, 2005).
Which of these four ingredients (i.e., \(P(X)\), \(P(Y)\), \(P(X|Y)\), \(P(Y|X)\)) change or remain the same across \(L\) and \(U\), gives rise to different types of shift, as discussed in (Moreno-Torres et al., 2012; Storkey, 2009). In this section we turn to describing the types of shift that we consider in this study. To this aim, also recalling that the related terminology is sometimes confusing in this respect (as also noticed by Moreno-Torres et al. (2012)), we clearly define each type of shift that we consider.
When training a model, using our labelled data, to issue predictions about unlabelled data, we expect some relevant general conditions to be invariant across the training distribution and the unlabelled distribution, since otherwise the problem would be unlearnable. In Table 1, we list the three main types of dataset shift that have been discussed in the literature. For each such type, we indicate which distributions are assumed (according to general consensus in the field) to vary across \(L\) and \(U\), and which others are assumed to remain constant. In the following sections, we will thoroughly discuss the relationships between these three types of shift and quantification.
It is immediate to note from Table 1 that, for any given type of shift, there are some distributions (corresponding to the blank cells in the table - e.g., \(P(X)\) for prior probability shift) for which it is not specified if they change or not across \(L\) and \(U\); indeed, concerning what happens in these cases, the literature is often silent. In the next sections, we will try to fill these gaps. We will identify practically interesting subtypes, corresponding to
different ways to fill the blank cells of Table 1, and will propose experimental protocols that recreate them in order for quantification systems to be tested under those conditions.
### 4.1 Prior Probability Shift
_Prior probability shift_ (see Figure 1 for a graphical example) describes a situation in which (a) there is a change in the distribution \(P(Y)\) of the class labels (i.e., \(P_{L}(Y)\neq P_{U}(Y)\)) while (b) the class-conditional distribution of the covariates remains constant (i.e., \(P_{L}(X|Y)=P_{U}(X|Y)\)).
In this type of shift, no further assumption is usually made as to whether the distribution \(P(X)\) of the covariates and the conditional distribution \(P(Y|X)\) change or not across \(L\) and \(U\). Notwithstanding this, it is reasonable to think that the change in \(P(Y)\) indeed causes a variation in \(P(X)\), i.e., that \(P_{L}(X)\neq P_{U}(X)\); if this were not the case, the class-conditional distributions \(P(X|Y=1)\) and \(P(X|Y=0)\) would be indistinguishable, i.e., the problem would not be learnable. We will thus assume that prior probability shift does indeed imply a change in \(P(X)\) across \(L\) and \(U\). The following is an example of this scenario.
Example 1: Assume our application is one of handwritten digit recognition. Here, the classes are all the possible types of digits and the covariates are features of the handwritten realizations of these digits. Assume our training data are (labelled) handwritten digits in the decimal system (digits from 0 to 9) while our unlabelled data are handwritten digits in the binary system (0 or 1); assume also that all other properties of the data (e.g., authors of these handwritings, etc.) are the same as in the training data. In this scenario, it is the case that \(P_{L}(Y)\neq P_{U}(Y)\) (since, e.g., the prevalence values in \(U\) of the digits from 2 to 9 are all equal to 0, unlike in \(L\)), and it is the case that \(P_{L}(X|Y)=P_{U}(X|Y)\) (since the 0's and 1's in the unlabelled data look the same as the 0's and 1's in the training data). Therefore, this is an example of prior probability shift. Note that it is also the case that \(P_{L}(X)\neq P_{U}(X)\), since in \(P_{U}(X)\) the values of the covariates are just those typical of 0's and 1's, unlike in \(P_{L}(X)\), and it is also the case that \(P_{L}(Y|X)=P_{U}(Y|X)\), since nothing in the functional relationship between \(X\) and \(Y\) has changed.
| | \(P(X)\) | \(P(Y)\) | \(P(X\mid Y)\) | \(P(Y\mid X)\) | Section |
|---|:---:|:---:|:---:|:---:|:---:|
| Prior probability shift | | \(\neq\) | \(=\) | | §4.1 |
| Covariate shift | \(\neq\) | | | \(=\) | §4.2 |
| Concept shift | | | \(\neq\) | \(\neq\) | §4.3 |

Table 1: Main types of dataset shift discussed in the literature. For the type of dataset shift on the row, symbol "\(\neq\)" indicates that the distribution on the column is assumed to change across \(L\) and \(U\), while symbol "\(=\)" indicates that the distribution is assumed to remain invariant. The last column indicates the section of the present paper where this type of shift is discussed in detail.

Concerning the issue of whether, in prior probability shift, the posterior distribution \(P(Y|X)\) is invariant or not across \(L\) and \(U\), it seems, at first glance, sensible to assume that it indeed is, i.e., \(P_{L}(Y|X)=P_{U}(Y|X)\), since there is nothing in prior probability shift that implies a change in the functional relationship between \(X\) and \(Y\) (in the binary case: in what being a member of the positive class or of the negative class actually means). However, it turns out that a change in the priors has an impact on the _a posteriori_ distribution of the response variable \(Y\), i.e., that \(P_{L}(Y|X)\neq P_{U}(Y|X)\). This is indeed the reason why the posterior probabilities issued by a probabilistic classifier \(s\) (which has been trained and calibrated for the training distribution) would need to be recalibrated for the target distribution before attempting to estimate \(P_{U}(Y)\) as \(\frac{1}{|U|}\sum_{\mathbf{x}\in U}s(\mathbf{x})\). This is exactly the rationale behind the SLD method proposed by Saerens et al. (2002). Following this assumption, prior probability shift is defined as in Row 1 of Table 2.
Prior probability shift is the type of shift which quantification methods have mostly been tested on, and the invariance assumption \(P_{L}(X|Y)=P_{U}(X|Y)\) that is made in prior probability shift indeed guarantees that a number of quantification methods work well in these scenarios. In order to show this, let us take ACC as an example. The correction implemented in Equation 6 does not attempt to counter prior probability shift, but attempts to counter classifier bias (indeed, note that this correction is meaningful even in the absence of prior probability shift). This adjustment relies on Equation 4, which depends on two quantities, the tpr and the fpr of classifier \(h\), that must be estimated on the training data \(L\). Since \(h(\mathbf{x})\) is the same for \(L\) and \(U\), the fact that \(P_{L}(X|Y)=P_{U}(X|Y)\) (which is assumed to hold under prior probability shift) implies that \(\hat{\mathrm{tpr}}_{h}=\mathrm{tpr}_{h}\) and \(\hat{\mathrm{fpr}}_{h}=\mathrm{fpr}_{h}\). In other words, under prior probability shift ACC works well, since the assumption that the class-conditional distribution \(P(X|Y)\) is invariant across \(L\) and \(U\) guarantees that our estimates of tpr and fpr are good estimates. Similar considerations apply to different quantification methods as well.

Figure 1: Example of prior probability shift generated with synthetic data using a normal distribution for each class. Scenario \(A\) (1st row): original data distribution, in which the positive class (orange) and the negative class (blue) have the same prevalence, i.e., \(p_{A}=0.5\). Scenario \(B\) (2nd row): with respect to Scenario \(A\) there is a shift in the prevalence such that \(p_{B}=0.1\). Dashed lines represent linear hypotheses learnt from the corresponding empirical distributions. Note that, although the positive class and the negative class may not have changed in meaning between \(A\) and \(B\), i.e., \(P_{A}(Y|X)=P_{B}(Y|X)\), the posteriors we would obtain by calibrating two soft classifiers trained from the two empirical distributions would likely differ. Note also that \(P_{A}(X)\neq P_{B}(X)\) (2nd column) but \(P_{A}(X|Y)=P_{B}(X|Y)\) (3rd column).
Prior probability shift has been widely studied in the quantification literature, both from a theoretical point of view (Fernandes Vaz et al., 2019; Tasche, 2017) and from an empirical point of view (Schumacher et al., 2021). Indeed, note that the artificial prevalence protocol (APP - see Section 2), on which most experimentation of quantification systems has been based, does nothing other than generate a set of samples characterised by prior probability shift with respect to the set from which they have been extracted; the APP recreates the \(P_{L}(Y)\neq P_{U}(Y)\) condition by subsampling _one_ of the two classes, and recreates the \(P_{L}(X|Y)=P_{U}(X|Y)\) condition by performing this subsampling in a random fashion.
Most of the quantification literature is concerned with ways of devising robust estimators of class prevalence values in the presence of prior probability shift. Tasche (2017) proves that, when \(P_{L}(Y)\neq P_{U}(Y)\) and \(P_{L}(X|Y)=P_{U}(X|Y)\) (i.e., when we are in the presence of prior probability shift) the method ACC is _Fisher-consistent_, i.e., the error of ACC tends to zero when the size of the sample increases. Unfortunately, in practice, the condition of an unchanging \(P(X|Y)\) is difficult to fulfil or verify.
At this point, it may be worth stressing that not every change in \(P(Y)\) can be considered an instance of prior probability shift. Indeed, in Section 4.2 we present different cases of shift in the priors that are _not_ instances of prior probability shift, and that we deem of particular interest for realistic applications of quantification.
### 4.2 Covariate Shift
_Covariate shift_ (see Figure 3 for a graphical example) describes a situation in which (a) there is a change in the distribution \(P(X)\) of the covariates (i.e., \(P_{L}(X)\neq P_{U}(X)\)), while (b) the distribution of the classes conditional on the covariates remains constant (i.e., \(P_{L}(Y|X)=P_{U}(Y|X)\)). In this type of shift, no further assumption is usually made as to whether the distribution \(P(Y)\) of the classes and the class-conditional distribution \(P(X|Y)\) change across \(L\) and \(U\).
In this paper, we are going to assume that also a change in the class-conditional distribution takes place, i.e., \(P_{L}(X|Y)\neq P_{U}(X|Y)\). The rationale of this choice is that, without this assumption, there would be a possible
overlap between the notion of prior probability shift and the notion of covariate shift. To see why, imagine a situation in which the positive and the negative examples are numerical univariate data each following a uniform distribution \(\mathbf{U}(a,b)\) and \(\mathbf{U}(c,d)\), with different parameters \(a<b<c<d\). A change in the priors (i.e., \(P_{L}(Y)\neq P_{U}(Y)\)) would not cause any modification in the class-conditional distribution (i.e., \(P_{L}(X|Y)=P_{U}(X|Y)\) would hold). Thus, by definition, this would squarely count as an example of prior probability shift, since these are the same conditions listed in Row 1 of Table 2. However, at the same time, the distribution of the covariates has also changed (i.e., \(P_{L}(X)\neq P_{U}(X)\)), since \(P(X)=\mathbf{U}(a,b)P(Y=1)+\mathbf{U}(c,d)P(Y=0)\) and since the priors have changed, with the posterior distribution \(P(Y|X)\) remaining stable across \(L\) and \(U\). Thus, this would _also_ count as an example of covariate shift; see Figure 2 for a graphical explanation. For this reason, and for the sake of clarity in the exposition, in this work we will break the ambiguity by assuming that covariate shift implies that \(P(X|Y)\) is _not_ invariant across \(L\) and \(U\). As a final observation, note that the conditions of covariate shift are incompatible with a situation in which both \(P(Y)\) and \(P(X|Y)\) remain invariant. The reason is that \(P(X)\) is assumed to change under the covariate shift assumptions, but, since \(P(X)=P(X|Y=1)P(Y=1)+P(X|Y=0)P(Y=0)\), the only way in which this condition can hold true comes down to assuming either a change in \(P(Y)\) or in \(P(X|Y)\).
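The ambiguity can be checked numerically with a short simulation of the two-uniforms example (the specific interval endpoints and sample sizes below are arbitrary):

```
# Two non-overlapping uniform class-conditional distributions are kept fixed
# while the priors change: P(X|Y) and P(Y|X) stay the same, but P(X) shifts.
import numpy as np

def sample(prior_pos, n, a=0.0, b=1.0, c=2.0, d=3.0, seed=0):
    rng = np.random.default_rng(seed)
    y = (rng.random(n) < prior_pos).astype(int)
    x = np.where(y == 1, rng.uniform(a, b, n), rng.uniform(c, d, n))
    return x, y

x_L, y_L = sample(prior_pos=0.5, n=100_000)      # "training" conditions
x_U, y_U = sample(prior_pos=0.1, n=100_000)      # "test" conditions
# P(X|Y=1) is U(a,b) in both cases, but the mass of P(X) below x=1 drops
# from roughly 0.5 to roughly 0.1:
print(np.mean(x_L < 1.0), np.mean(x_U < 1.0))
```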
We will further distinguish between two types of covariate shift, i.e., (i) _global_ covariate shift, in which the changes in the covariates occur globally, i.e., affect the entire population, and (ii) _local_ covariate shift, in which the changes in the covariates occur locally, i.e., only affect certain subregions of the entire population. These two types of covariate shift will be the subject of Sections 4.2.1 and 4.2.2, respectively.
Figure 2: Possible overlap between the notions of prior probability shift and covariate shift, unless we assume that \(P_{L}(X|Y)\neq P_{U}(X|Y)\) in covariate shift.
#### 4.2.1 Global Covariate Shift
_Global covariate shift_ occurs when there is an overall change in the representation function. We will study two variants of it that differ in terms of whether \(P(Y)\) is invariant or not across \(L\) and \(U\): _global pure covariate shift_, in which \(P_{L}(Y)=P_{U}(Y)\), and _global mixed covariate shift_, in which \(P_{L}(Y)\neq P_{U}(Y)\) (the name "mixed" of course refers to the fact that there is a change in the distribution of the covariates _and_ in the distribution of the labels). Both scenarios are interesting to test quantification methods on, but the latter is probably even more interesting, since changes in the priors are something that quantification methods are expected to be robust to.
Global pure covariate shift might occur when, for example, a sensor (in charge of generating the covariates) experiences a change (e.g., a partial damage, or a change in the lighting conditions for a camera); in this case, the prevalence values of the classes of interest do not change, but the measurements (covariates) might have been affected.4
Footnote 4: This example is what Kull and Flach (2014) called _covariate observation shift_.
Global mixed covariate shift might occur when, for example, a quantifier is trained to monitor the proportion of positive opinions on a certain politician on Twitter on a daily basis and, after the quantifier has been deployed to production, there is a change in Twitter's policy that allows for longer tweets.5 In this case, there is a variation in \(P(X)\), since longer tweets become more likely; there is variation in \(P(X|Y)\), since there will likely be longer positive tweets and longer negative tweets; \(P(Y|X)\) will remain constant, since a change in the length of tweets does not make positive comments more likely or less likely; and \(P(Y)\) can change too (because opinions on politicians do change in time), although not as a result of the change in tweet length.
Footnote 5: This actually happened in 2017, when Twitter raised the maximum allowed size of tweets from 140 to 280 characters.
By taking into account the underlying conditions of pure covariate shift, it seems pretty clear that PCC (see Section 3.3) would represent the best possible choice. The reason is that PCC computes the estimate of the class prevalence values by relying on the posterior probabilities returned by a soft classifier \(s\) (see Equation 7). Inasmuch as these posterior probabilities are reliable enough (i.e., when the soft classifier is well calibrated (Card and Smith, 2018)), the class prevalence values would be well estimated without further manipulations (i.e., there is no need to adjust for possible changes in the priors since, in the pure version, we assume \(P(Y)\) has not changed); see Figure 3, 2nd row.
However, in practice, the posterior probabilities returned by \(s\) might not align well with the underlying concept of the positive class (the soft classifier \(s\) might not be well calibrated for the unlabelled distribution). This might be due to several reasons, but a relevant possibility is due to the inability of the learning device to find good parameters for the classifier. This might happen whenever the hypothesis (i.e., the soft classifier \(s\)) learnt by means of an inductive learning method (e.g., logistic regression) comes from an empirical distribution in which certain regions of the input space were insufficiently
represented during training, and have later become more prevalent during test as a result of a change in \(P(X)\); see Figure 3, 3rd row. This situation is certainly problematic, and would lead to a deterioration in performance of most aggregative quantifiers (including PCC). An in-depth theoretical analysis of the implications of pure covariate shift is offered by Tasche (2022), in which the author concludes that in order for PCC to prove resilient to covariate shift, the test distribution should be _absolutely continuous_ with respect to the training distribution. This assumption also restricts the amount of divergence between the training and test sample distributions. However, these are theoretical considerations that are hard (if not impossible) to verify in practice.

Figure 3: Example of global pure covariate shift generated with synthetic data using a normal distribution for each cluster. Situation (_a_) (1st row): original data distribution. Each class consists of two clusters of data (for example positive or negative opinions of two different categories: Electronics and Books). Situation (_b_) (2nd row): there is a shift in the number of opinions of one category, that affects both classes. \(P(X)\) changes (see 2nd column) but \(P(Y|X)\) remains invariant. Situation (_c_) (3rd row): \(P(X)\) changes abruptly, affecting the posterior probabilities \(s(\mathbf{x})\) that a soft classifier, trained via induction on this scenario, would issue.
#### 4.2.2 Local Covariate Shift
Consider a binary problem in which the positive class is a mixture of two (differently parameterised) Gaussians \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\), i.e., that \(P(X|Y=1)=\alpha\mathcal{N}_{1}+(1-\alpha)\mathcal{N}_{2}\). Assume there are analogous Gaussians \(\mathcal{N}_{3}\) and \(\mathcal{N}_{4}\) governing the distribution of negatives; see Figure 4. Assume now that there is a change (say, an increase) in the prevalence of datapoints from \(\mathcal{N}_{1}\) leading to an overall change in the priors \(P(Y)\). Note that this also implies an overall change in \(P(X)\). There is also a change in \(P(X|Y=1)\) (therefore, in \(P(X|Y)\)) since the parameter \(\alpha\) of the mixture has changed (it is now more likely to find positive examples from \(\mathcal{N}_{1}\)). However, the change in the covariates is _asymmetric_, i.e., \(P(X|Y=0)\) has not changed.
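A synthetic generator in the spirit of this scenario (with arbitrary cluster means, variances, and sizes) might look as follows:

```
# Sketch of local covariate shift: only the size of the positive
# sub-population N1 changes across conditions, so P(Y), P(X) and P(X|Y=1)
# all change while P(X|Y=0) stays untouched. All parameters are illustrative.
import numpy as np

def sample_local_shift(n_n1, n_other=2000, seed=0):
    """N1 (positive) has `n_n1` datapoints; N2 (positive), N3 and N4
    (negative) have `n_other` datapoints each. Returns (X, y)."""
    rng = np.random.default_rng(seed)
    clusters = {'N1': ([0, 0], 1, n_n1), 'N2': ([3, 3], 1, n_other),
                'N3': ([0, 3], 0, n_other), 'N4': ([3, 0], 0, n_other)}
    X, y = [], []
    for mean, label, k in clusters.values():
        X.append(rng.normal(loc=mean, scale=0.7, size=(k, 2)))
        y.append(np.full(k, label))
    return np.vstack(X), np.concatenate(y)

# "Training"-like conditions vs "test"-like conditions in which N1 grows:
X_L, y_L = sample_local_shift(n_n1=2000)
X_U, y_U = sample_local_shift(n_n1=6000, seed=1)
```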
Situations like this naturally occur in real scenarios of interest for quantification. For example, in ecological modelling, researchers might be interested in estimating the prevalence of, e.g., different species of plankton in the sea. To do so, they analyse pictures of water samples taken by an automatic optical device, identify individual exemplars of plankton, and estimate the prevalence of the different species via a quantifier (Gonzalez et al., 2019). However, these plankton species are typically grouped, because of their high number, into coarse-grained superclasses (i.e., parent nodes from a taxonomy of classes), which means that no prevalence estimation for the subclasses is attempted. An increase in the prevalence value of one of the (super-)classes is often the consequence of an increase in the prevalence value of only one of its (hidden) subclasses. A similar example may be found in seabed cover mapping for coral reef monitoring (Beijbom et al., 2015); here, ecologists are interested in quantifying the presence of different species in images, often grouping the coral species and algae species into coarser-grained classes.
In contrast to global covariate shift, local covariate shift does not occur due to a variation in the feature representation function (e.g., an alteration of the device in charge of taking measurements, which would impact on the covariates) but due to changes in the priors of (sub-)classes that remain hidden. The most important implication for quantification concerns the fact that this shift would reduce to prior probability shift if the subclasses (the original species in our examples) were observed in place of the superclasses.6 We will only consider the case in which \(P(Y)\) changes, since it is hard to think of any realistic scenario for asymmetric covariate shift in which the class prevalence values remain unaltered. Note also that, in extreme cases, an abrupt change in \(P(Y)\) can end up compromising the condition \(P_{L}(Y|X)=P_{U}(Y|X)\), for the same reasons why \(P(Y|X)\) is altered in prior probability shift. However,
under mild conditions, we can assume \(P(Y|X)\) does not change, or does not change significantly.
### 4.3 Concept Shift
Concept shift arises when the boundaries of the classes change, i.e., when the underlying _concepts_ of interest change across the training and the testing conditions. Concept shift is characterised by a change in the class-conditional distribution \(P_{L}(X|Y)\neq P_{U}(X|Y)\), as well as a change in the posterior distribution \(P_{L}(Y|X)\neq P_{U}(Y|X)\). Another way of saying this is that there is a change in the functional relationship between the covariates and the class labels; see Figure 5.
Figure 5 depicts a situation in which each of the two classes (say, documents relevant and non relevant, respectively, to a certain user information need) subsumes two subclasses, and one of the subclasses "switches class", i.e., the documents contained in the subclass were once considered relevant to the information need and are now not relevant any more. Yet another example along these lines could be due to a change in the sensitivity of a response variable. So, for example, a change in the threshold above which the value of a continuous response variable indicates a positive example, is a change in the concept of "being positive", which implies (i) a change in \(P(Y|X)\), since some among the positive examples have now become negative, (ii) a change in \(P(X|Y)\), since the positive and negative classes are inevitably distributed differently, and (iii) even a change in \(P(Y)\), since the higher the threshold, the fewer the positive examples; however, the above does not imply any change in the marginal distribution \(P(X)\).

Figure 4: Example of _local_ covariate shift generated with synthetic data using a normal distribution for each cluster. Situation (a) (1st row): original data distribution with two positive (orange) Gaussians \(\mathcal{N}_{1}\), \(\mathcal{N}_{2}\) and two negative (blue) Gaussians \(\mathcal{N}_{3}\), \(\mathcal{N}_{4}\). Situation (b) (2nd row): the prevalence of \(\mathcal{N}_{1}\) grows.
There are other examples of concept shift which may, instead, lead to a change in \(P(X)\) as well. Take, for example, the case of epidemiology (one of the quintessential applications of quantification) in which the spread of a disease (e.g., by a viral infection) is now manifested in the population by means of different symptoms (the covariates) due to a change in the pathogenic source (e.g., a mutation). In this paper, though, we will only be considering instances of concept shift in which the marginal distribution \(P(X)\) does not change, since otherwise none of the four distributions of interest (\(P(X)\), \(P(Y)\), \(P(X|Y)\), \(P(Y|X)\)) would be invariant across \(L\) and \(U\), which would make the problem essentially unlearnable.
Needless to say, concept shift represents the hardest type of shift for any quantification system (and, more generally, for any inductive inference model), since changes in the concept being modelled are external to the learning procedure, and since there is no possibility of behaving robustly to arbitrary changes in the functional relationship between the covariates and the labels. Attempts to tackle concept shift should inevitably entail a later phase of learning (as in continuous learning) in which the model is informed, possibly by means of new labelled examples, of the changes in the functional relationship between covariates and classes. To date, we are unaware of the existence of quantification methods devised to counter concept shift.
### 4.4 Recapitulation
In light of the considerations above, in Table 2 we present the specific types of shift that we consider in this paper. Concretely, this comes down to exploring plausible ways of filling out the blank cells of Table 1, which are indicated in parentheses in Table 2.
## 5 Experiments
In this section we describe experiments that we have carried out in which we simulate the different types of dataset shift described in the previous sections.
| | \(P(X)\) | \(P(Y)\) | \(P(X\mid Y)\) | \(P(Y\mid X)\) | Definition | Experiments |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| Prior probability shift | (\(\neq\)) | \(\neq\) | \(=\) | (\(\neq\)) | §4.1 | §5.3 |
| Global pure covariate shift | \(\neq\) | (\(=\)) | (\(\neq\)) | \(=\) | §4.2.1 | §5.4 |
| Global mixed covariate shift | \(\neq\) | (\(\neq\)) | (\(\neq\)) | \(=\) | §4.2.1 | §5.4 |
| Local covariate shift | \(\neq\) | (\(\neq\)) | (\(\neq\)) | \(=^{*}\) | §4.2.2 | §5.5 |
| Concept shift | (\(=\)) | (\(\neq\)) | \(\neq\) | \(\neq\) | §4.3 | §5.6 |

Table 2: The types of shift we consider. Cells in parentheses indicate assumptions we make (and that we discuss and justify in Section 4). Symbol * indicates a condition that can get compromised in extreme situations.
For simplicity, we have simulated all these types of shift by using the same base datasets, which we describe in the following section.
### 5.1 Datasets
We extract the datasets we use for the experiments from a large crawl of 233.1M Amazon product reviews made available by McAuley et al. (2015);7 we use different datasets for simulating different types of shift. In order to extract these datasets from this crawl we first remove (a) all product reviews shorter than 200 characters and (b) all product reviews that have not been recognised as "useful" by any users. We concentrate our attention on two merchandise categories, Books and Electronics, since these are the two most populated categories in the corpus; in the next sections these two categories will sometimes be referred to as category \(A\) and category \(B\).
Footnote 7: [http://jmcauley.ucsd.edu/data/amazon/links.html](http://jmcauley.ucsd.edu/data/amazon/links.html)
Figure 5: Example of concept shift generated with synthetic data using a normal distribution for each cluster. Situation (a) (1st row): original data distribution. Situation (b) (2nd row): the concept “negative” (blue) has changed in a way that it now encompasses one of the originally “positive” (orange) clusters, thus implying a change in \(P(X|Y)\) and in \(P(Y|X)\) but not in \(P(X)\) (2nd column).

Every review comes with a (true) label, consisting of the number of stars (according to a "5-star rating", with 1 star standing for "poor" and 5 stars standing for "excellent") that the author herself has attributed to the product being reviewed. Note that the classes are ordered, and thus we can define \(\mathcal{Y}_{\star}=\{s_{1},s_{2},s_{3},s_{4},s_{5}\}\), with \(s_{i}\) meaning "\(i\) stars", and \(s_{1}<s_{2}<s_{3}<s_{4}<s_{5}\). Since we deal with binary quantification, we exploit this order to generate, at desired "cut points" (i.e., thresholds below which a review is considered negative and above which is considered positive), binary versions of the dataset. We thus define the function "binarise_dataset", that takes a dataset labelled according to \(\mathcal{Y}_{\star}\) and a cut point \(c\), and returns a new version of the dataset labelled according to a binary codeframe \(\mathcal{Y}=\{0,1\}\); here, every labelled datapoint \((\mathbf{x},s_{i})\), with \(s_{i}\in\mathcal{Y}_{\star}\), is converted into a datapoint \((\mathbf{x},y)\), with \(y\in\mathcal{Y}\), such that \(y=1\) (the positive class) if \(i>c\), or \(y=0\) (the negative class) if \(i<c\); note that we filter out datapoints for which \(i=c\). In the cases in which we want to retain all datapoints labelled with all possible numbers of stars, we simply specify \(c\) as a real value intermediate between two integers (e.g., \(c=2.5\)).
### 5.2 General Experimental Setup
In all the experiments carried out in this study we fix the size of the training set to 5,000 and the size of each test sample to 500. For a given experiment we evaluate all quantification methods with the same test samples, but different experiments may involve different samples depending on the type of shift being simulated. We run different experiments, each targeting a specific type of dataset shift; within each experiment we simulate the presence, in a systematic and controlled manner, of different degrees of shift. When testing with different degrees of a given type of shift, for every such degree we randomly generate 50 test samples. In order to account for stochastic fluctuations in the results due to the random selection of a particular training set, we repeat each experiment 10 times. We carry out all the experiments by using the QuaPy open-source quantification library (Moreo et al., 2021).8 All the code for reproducing our experiments is available from a dedicated GitHub repository.9
Footnote 8: [https://github.com/HLT-ISTI/QuaPy](https://github.com/HLT-ISTI/QuaPy)
Footnote 9: [https://github.com/pglez82/quant_datasetshift](https://github.com/pglez82/quant_datasetshift)
In order to turn raw documents into vectors, as the features we use tfidf-weighted words; we compute idf independently for each experiment by only taking into account the 5,000 training documents selected for that experiment. We only retain the words appearing at least 3 times in the training set, meaning that the number of different words (hence, the number of dimensions in the vector space) can vary across experiments.
As the evaluation measure we use absolute error (AE), since it is one of the most satisfactory (see (Sebastiani, 2020) for a discussion) and frequently used measures in quantification experiments, and since it is very easily interpretable.
| | instances | \(\star\) | \(\star\star\) | \(\star\star\star\) | \(\star\star\star\star\) | \(\star\star\star\star\star\) |
|---|---:|---:|---:|---:|---:|---:|
| Books | 7,813,813 | 0.093 | 0.071 | 0.094 | 0.160 | 0.582 |
| Electronics | 1,889,965 | 0.193 | 0.079 | 0.093 | 0.178 | 0.457 |

Table 3: Dataset information for categories Books and Electronics, along with the prevalence for each different star rating.
In the binary case, AE is defined as
\[\text{AE}(p_{\sigma},\hat{p}_{\sigma})=|p_{\sigma}-\hat{p}_{\sigma}| \tag{9}\]
For each experiment we report the mean absolute error (MAE), where the mean is computed across all the samples with the same degree of shift and all the repetitions thereof. We perform statistical significance tests at different confidence levels in order to check for the differences in performance between the best method (highlighted in boldface in all tables) and all other competing methods. All methods whose scores are _not_ statistically significantly different from the best one, according to a Wilcoxon signed-rank test on paired samples, are marked with a special symbol. We use superscript \(\dagger\) to indicate that \(0.001<p\)-value \(<0.05\) (loosely speaking, that the scores are "somewhat similar"), while superscript \(\ddagger\) indicates that \(0.05\leq p\)-value (loosely speaking, that the scores are "very similar"); the absence of any such symbol thus indicates that \(p\)-value \(\leq 0.001\) (loosely speaking, that the scores are "fairly different").
All the quantification methods considered in this study are of the aggregative type and are described in Section 3.3. In addition to these methods, we had initially also considered the Sample Mean Matching (SMM) method (Hassan et al., 2020), but we removed this method from the experiments as we found it to be equivalent to the PACC method (we give a formal proof of this equivalence in Appendix A).
For the sake of fairness, underlying all quantification methods we use the same type of classifier. (All the quantification methods we use are aggregative, so all of them use an underlying classifier.) As our classifier of choice we use logistic regression, since it is a well-known classifier which also delivers "soft" predictions and is known to deliver reasonably well-calibrated posterior probabilities (these two characteristics are required for PCC, PACC, DyS, and SLD). We optimise the hyperparameters of the quantifier following Moreo and Sebastiani (2021), i.e., minimising a quantification-oriented loss function (here: MAE) via a quantification-oriented parameter optimisation protocol; we explore the values \(C\in\{0.1,1,10,100,1000\}\) (where \(C\) is the inverse of the regularization strength), and the values class_weight \(\in\) {Balanced, None} (where class_weight indicates the relative importance of each class), via grid search. We evaluate each configuration of hyperparameters in terms of MAE over artificially generated samples using a held-out stratified validation set consisting of 40% of the training documents. This means that we optimise each classifier specifically for each quantifier, and the parameters we choose are the ones that best suit this particular quantifier. Once we have chosen the optimal values for the hyperparameters, we retrain the quantifier using the entire training set.
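For illustration, the following is a hand-rolled sketch of this model selection loop; it reuses the illustrative draw_sample_at_prevalence helper sketched earlier, the quantify argument stands for any prevalence estimator (e.g., the CC sketch given above), and the grid of validation prevalence values is an arbitrary choice.

```
# Quantification-oriented grid search: pick the hyperparameters that minimise
# MAE over validation samples generated at a range of prevalence values.
# Assumes draw_sample_at_prevalence (sketched earlier) is in scope.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def select_hyperparams(X, y, quantify, seed=0):
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.4, stratify=y, random_state=seed)
    rng = np.random.default_rng(seed)
    best, best_mae = None, np.inf
    for C, class_weight in itertools.product(
            [0.1, 1, 10, 100, 1000], ['balanced', None]):
        clf = LogisticRegression(C=C, class_weight=class_weight, max_iter=1000)
        clf.fit(X_tr, y_tr)
        errors = []
        for p in np.arange(0.0, 1.01, 0.1):      # validation prevalence grid
            X_s, y_s = draw_sample_at_prevalence(X_val, y_val, p, 500, rng)
            errors.append(abs(np.mean(y_s == 1) - quantify(clf, X_s)))
        mae = float(np.mean(errors))
        if mae < best_mae:
            best, best_mae = (C, class_weight), mae
    return best, best_mae
```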
The quantification methods used in this study do not have any additional hyperparameters, except for DyS that has two, i.e., (i) the number of bins used to build the histograms and (ii) the distance function. In this work we fix these values to (i) 10 bins and (ii) the Topsoe distance, since these are the values that gave the best results in the work that originally introduced DyS (Maletzke et al., 2019).
### 5.3 Prior Probability Shift
#### 5.3.1 Evaluation Protocol
For generating prior probability shift we consider all the reviews from categories Electronics and Books. Algorithm 1 describes the experimental setup for this type of shift. For binarising the dataset we follow the approach described in Section 5.1, using a cut point of 3. We sample 5,000 training documents from the dataset using prevalence values of the positive class with values ranging from 0 to 1, at steps of 0.1. (Since it is not possible to generate a classifier with no positive examples or no negative examples, we actually replace \(p_{L}=0\) and \(p_{L}=1\) with \(p_{L}=0.02\) and \(p_{L}=0.98\), respectively.) We draw test samples from the dataset varying, here too, the prevalence of the positive class using values in \(\{0.0,0.1,..,0.9,1.0\}\). In order to give a quantitative indication of the degree of prior probability shift in each experiment, we compute the signed difference \((p_{L}-p_{U})\) rounded to one decimal, resulting in a real value in the range \([-1,1]\); to this respect, note that negative degrees of shift do _not_ indicate an absence of shift, but indicate a presence of shift in which \(p_{U}\) is greater than \(p_{L}\) (for positive degrees, \(p_{U}\) is lower than \(p_{L}\)).
For this experiment the number of test samples used for evaluation amounts to \(11\times 11\times 50\times 10\) = 60,500 for each quantification algorithm we test.
#### 5.3.2 Results
Table 4 and Figure 6 present the results of the prior probability shift experiments; Figure 6 reports them in the form of boxplots (blue boxes), where the outliers are indicated by black dots. In this case the SLD method stands out as the best performer, closely followed by DyS and PACC. These methods perform very well when the
degree of shift is moderate,10 while their performance degrades as this degree increases. On the other hand, CC and PCC are clearly the worst performers; the reason is that, as stated previously, CC and PCC naturally inherit the bias of the underlying classifier, so when the divergence between the distribution they are biased towards (i.e., the training distribution) and the test distribution increases, their performance tends to decrease. These results are in line with previous studies in the quantification literature (e.g., (Maletzke et al., 2019; Moreo and Sebastiani, 2022; Moreo et al., 2021; Schumacher et al., 2021)), most of which has indeed focused on prior probability shift.
Footnote 10: Here and in the rest of the paper, when speaking of “high” or “low” degrees of shift we actually refer to the absolute value of this degree (e.g., a degree of shift of -1 counts as a “high” degree of shift). This will be the case not only for prior probability shift but also for other types of shift.
One interesting observation that emerges from Figure 6 has to do with the stability of the methods. ACC shows a tendency to sporadically yield anomalously high levels of error. Those levels of error correspond to cases in which the training sample is severely imbalanced (\(p_{L}=0.02\) or \(p_{L}=0.98\)). Note that the correction implemented by Equation 4 may turn unreliable when the estimation of \(\mathrm{tpr}\) itself is unreliable (this is likely to occur when the amount of positives is \(2\%\), i.e., when \(p_{L}=0.02\)) and/or when the estimation of \(\mathrm{fpr}\) is unreliable (this is likely to occur when the amount of negatives is \(2\%\), i.e., when \(p_{L}=0.98\)). Yet another cause might include the instability of the denominator (this happens when \(\mathrm{tpr}\approx\mathrm{fpr}\)), which could, in turn, require clipping the output in the range \([0,1]\). After analyzing the 100 worst cases, we verified that 36% of the cases involved clipping, and that in 46% of the cases the denominator turned out to be smaller than 0.05.

| Shift | CC | ACC | PCC | PACC | DyS | SLD |
|---:|---:|---:|---:|---:|---:|---:|
| -1.0 | .737 | **.000** | .548 | .001 | .063 | .001 |
| -0.9 | .479 | .049 | .439 | .044 | \(\ddagger\).053 | **.041** |
| -0.8 | .355 | .088 | .352 | .077 | **.045** | .049 |
| -0.7 | .271 | .099 | .278 | .069 | **.040** | \(\ddagger\).041 |
| -0.6 | .213 | .094 | .216 | .054 | **.032** | \(\dagger\).034 |
| -0.5 | .166 | .086 | .162 | .042 | **.028** | \(\dagger\).029 |
| -0.4 | .126 | .071 | .115 | .031 | \(\dagger\).024 | **.023** |
| -0.3 | .091 | .055 | .093 | .025 | .021 | **.020** |
| -0.2 | .064 | .041 | .085 | .023 | .019 | **.017** |
| -0.1 | .047 | .032 | .091 | .022 | .017 | **.015** |
| 0.0 | .035 | .026 | .111 | .017 | .016 | **.014** |
| 0.1 | .048 | .034 | .090 | .021 | .018 | **.017** |
| 0.2 | .064 | .046 | .084 | .023 | \(\dagger\).019 | **.018** |
| 0.3 | .092 | .063 | .089 | .025 | .022 | **.020** |
| 0.4 | .127 | .077 | .112 | .029 | .026 | **.022** |
| 0.5 | .167 | .089 | .160 | .036 | .030 | **.024** |
| 0.6 | .213 | .096 | .213 | .045 | .035 | **.025** |
| 0.7 | .272 | .095 | .276 | .053 | .043 | **.027** |
| 0.8 | .355 | .081 | .351 | .058 | .053 | **.030** |
| 0.9 | .478 | .052 | .440 | .039 | .062 | **.029** |
| 1.0 | .742 | .020 | .551 | **.002** | .076 | .016 |

Table 4: Results for prior probability shift experiments in terms of MAE. Each row corresponds to a given degree of shift, measured as \((p_{U}-p_{L})\) (rounded to one decimal).
Note that, if these extreme cases were to be removed, the average scores obtained by ACC would not substantially differ from those obtained by other quantification methods such as PACC or DyS.
### 5.4 Global Covariate Shift
#### 5.4.1 Evaluation Protocol
For generating global covariate shift, we modify the ratio between the documents in category \(A\) (Books) and those in category \(B\) (Electronics), across the training data and the test samples. We binarise the dataset at a cut point of 3, as described in Section 5.1. We vary the prevalence \(\alpha\) of category \(A\) (the prevalence of category \(B\) is \(1-\alpha\)), in the training data (\(\alpha^{L}\)) and in the test samples (\(\alpha^{U}\)), in the range \([0,1]\) with steps of \(0.1\), thus giving rise to \(121\) possible combinations. For the sake of a clear exposition, we present the results for different degrees of global covariate shift, measured as the signed difference between \(\alpha^{L}\) and \(\alpha^{U}\), resulting in a real value in the range \([-1,+1]\). We vary the priors of the positive class using the values \(\{0.25,0.50,0.75\}\) in both the training data and the test samples, in order to simulate cases of global pure covariate shift, where \(P_{L}(Y)=P_{U}(Y)\), and global mixed covariate shift, where \(P_{L}(Y)\neq P_{U}(Y)\). Note that even if the global pure covariate shift scenario is particularly awkward for a quantification setting (since the prevalence of the positive class in the training data coincides with the one in the test data), it is interesting because it shows how quantifiers react just to a mere change in the covariates. Algorithm 2 describes the experimental setup for this type of shift.

Figure 6: Results obtained for prior probability shift; the error measure is MAE and the degree of shift is computed as \((p_{U}-p_{L})\) (rounded to one decimal).
```
1:\(A\leftarrow\) binarise_dataset(\(A,\)cut_point = 3)
2:\(B\leftarrow\) binarise_dataset(\(B,\)cut_point = 3)
3:\(\mathcal{L}_{A},\mathcal{U}_{A}\leftarrow\)split_stratified(\(A\))
4:\(\mathcal{L}_{B},\mathcal{U}_{B}\leftarrow\)split_stratified(\(B\))
5:for\(10\) repetitions do
6:for\(p^{L}\in\{0.25,0.50,0.75\}\)do
7:for\(\alpha^{L}\in\{0.0,0.1,...,0.9,1.0\}\)do
8:/* Generate a training sample \(L\) from \(\mathcal{L}_{A}\) and \(\mathcal{L}_{B}\) with prevalence \(p^{L}\)
9: and a proportion \(\alpha^{L}\) of documents from \(\mathcal{L}_{A}\) */
10:\(L_{A}\sim\mathcal{L}_{A}\) with \(p_{L_{A}}=p^{L}\) and \(|L_{A}|=[\alpha^{L}\cdot 5000]\)
11:\(L_{B}\sim\mathcal{L}_{B}\) with \(p_{L_{B}}=p^{L}\) and \(|L_{B}|=[(1-\alpha^{L})\cdot 5000]\)
12:\(L\gets L_{A}\cup L_{B}\)
13:/* Use quantification algorithm \(Q\) to learn a quantifier \(q\) on \(L\) */
14:\(q\gets Q.fit(L)\)
15:for 50 repetitions do
16:/* Generating test samples */
17:for\(p^{U}\in\{0.25,0.50,0.75\}\)do
18:for\(\alpha^{U}\in\{0.0,0.1,...,0.9,1.0\}\)do
19:\(U_{A}\sim\mathcal{U}_{A}\) with \(p_{U_{A}}=p^{U}\) and \(|U_{A}|=[\alpha^{U}\cdot 500]\)
20:\(U_{B}\sim\mathcal{U}_{B}\) with \(p_{U_{B}}=p^{U}\) and \(|U_{B}|=[(1-\alpha^{U})\cdot 500]\)
21:\(U\gets U_{A}\cup U_{B}\)
22:\(\tilde{p}^{q}_{U}\gets q.\text{quantify}(U)\)
23:\(error\gets AE(p_{U},\tilde{p}^{q}_{U})\)
```
**Algorithm 2** Protocol for generating global covariate shift.
For this experiment the number of test samples used for evaluation amounts to \(3\times 3\times 11\times 11\times 50\times 10\) = 544,500 for each quantification algorithm we test.
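As an illustration of the sampling step that this protocol relies on, the sketch below (our own code; function names, pool layout, and the commented example are ours, and we only assume the pools of document indices are large enough for sampling without replacement) draws a sample with a prescribed positive prevalence and a prescribed mix of categories \(A\) and \(B\):

```
import numpy as np

def draw_with_prevalence(pos_pool, neg_pool, prevalence, size, rng):
    """Draw `size` documents with the given positive prevalence from one category."""
    n_pos = int(round(prevalence * size))
    pos = rng.choice(pos_pool, size=n_pos, replace=False)
    neg = rng.choice(neg_pool, size=size - n_pos, replace=False)
    return np.concatenate([pos, neg])

def draw_mixed_sample(pools_A, pools_B, prevalence, alpha, size, rng):
    """A fraction `alpha` of the documents comes from category A and (1 - alpha)
    from category B, both drawn with the same class prevalence."""
    n_A = int(round(alpha * size))
    part_A = draw_with_prevalence(*pools_A, prevalence, n_A, rng)
    part_B = draw_with_prevalence(*pools_B, prevalence, size - n_A, rng)
    return np.concatenate([part_A, part_B])

# e.g., a training sample of 5,000 documents with p_L = 0.25 and alpha_L = 0.7:
# L = draw_mixed_sample((pos_A, neg_A), (pos_B, neg_B), 0.25, 0.7, 5000,
#                       np.random.default_rng(0))
```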
#### 5.4.2 Results
We now report the results for the scenario in which the data exhibits global _pure_ covariate shift (see Tables 5, 6, 7, where global pure covariate shift is represented by the columns with a grey background, and Figures 7, 8, 9). As
can be expected, the bigger the degree of such shift, the worse the performance of the methods. Note that a degree of global pure covariate shift equal to 1 (resp., -1) means that the system was trained with documents only from category \(A\) (resp., \(B\)) while the testing samples only have documents from category \(B\) (resp., \(A\)). On the other hand, low degrees of global pure covariate shift represent the situation in which similar values of \(\alpha^{L}\) and \(\alpha^{U}\) were used. The experiments show that the method most robust to global pure covariate shift is PCC, which is consistent with the theoretical results of Tasche (2022). PCC is able to provide good results, beating the other methods consistently, even when the degree of global pure covariate shift is high. On the other hand, methods like SLD, which show excellent performance under prior probability shift, perform poorly under high values of global pure covariate shift.
The situation changes drastically when analysing the results for global _mixed_ covariate shift (which in the tables is represented by the columns with a white background), i.e., when also \(P(Y)\) changes across training data and test data. In these cases, the performance of methods like PCC or CC (methods that performed very well under the presence of global _pure_ covariate shift) degrades, due to the fact that these methods do not attempt any adjustment to the prevalence of the test data. In this case, methods designed to deal with prior probability shift, such as SLD, stand as the best performers. This is interesting, since this experiment represents a situation in which a change in the covariates happens along with a change in the priors, thus harming the calibration of the posterior probabilities upon which PCC rests.
### Local Covariate Shift
#### 5.5.1 Evaluation Protocol
For simulating _local_ covariate shift we generate a shift in the class conditional distribution of only one of the classes. In order to do so, categories \(A\) and \(B\) are treated as subclasses, or clusters, of the positive and negative classes. Figure 10 might help in understanding this protocol. The main idea is to alter the prevalence \(P(Y)\) of the test samples by just changing the prevalence of positive documents of one of the subclasses (e.g., of category \(A\)) while maintaining
Figure 7: Results for global covariate shift with \(p_{L}=0.5\). The error measure is MAE and the degree of covariate shift is computed as \((\alpha^{L}-\alpha^{U})\). Figures with a grey background represent cases of global _pure_ covariate shift, in which \(P_{L}(Y)=P_{U}(Y)\).
the rest (e.g., positives and negatives in \(B\) and the negatives of \(A\)) unchanged. Following this procedure, we let the class-conditional distribution of the positive examples \(P(X|Y=1)\) vary, while the class-conditional distribution of the negative examples \(P(X|Y=0)\) remains constant.
For this experiment, we keep the training prevalence fixed at \(p_{L}=0.5\), while we vary the test prevalence \(p_{U}\) artificially. To allow for a wider exploration of the range of the prevalence values \(p_{U}\) that can be achieved by varying only the number of positives in category \(A\), we start from a configuration in which \(\frac{2}{3}\) of the positives in the training set are from category \(A\) and the remaining \(\frac{1}{3}\) are
Figure 8: Results for global covariate shift with \(p_{L}=0.25\). Error measure is MAE and the degree of covariate shift is computed as \((\alpha^{L}-\alpha^{U})\). Figures with a grey background represent cases of global _pure_ covariate shift, in which \(P_{L}(Y)=P_{U}(Y)\).
from category \(B\). Both categories contribute to the training set with exactly the same number of documents (2,500 each, since the training set contains 5,000 documents, as before). The set of negative examples is composed of \(\frac{1}{3}\) documents from \(A\) and \(\frac{2}{3}\) documents from \(B\). In the test samples all these proportions are kept fixed except for the positive documents from category \(A\), so that a desired prevalence value is reached by removing, or adding, positives of this category. Note that this process generates test samples of varying sizes. In particular, when the test size is equal to 500, the proportions of positive and negative documents, the proportion of documents from \(A\) and \(B\), and the proportion of positive documents from \(A\) and
Figure 9: Results for global covariate shift with \(p_{L}=0.75\). Error measure is MAE and the degree of covariate shift is computed as \((\alpha^{L}-\alpha^{U})\). Figures with a grey background represent cases of global _pure_ covariate shift, in which \(P_{L}(Y)=P_{U}(Y)\).
\(B\), match the proportions used in the training set. Using this procedure we explore \(p_{U}\) in the range \([0.25,0.75]\) at steps of 0.05 (see Algorithm 3).
For this experiment the number of test samples used for evaluation amounts to \(11\times 11\times 50\times 10\) = 60,500 for each quantification algorithm we test.
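The construction of a single test sample under this protocol can be sketched as follows (our own code; the fixed reference counts encode the proportions described above for a 500-document reference sample, and the synthetic index pools are purely illustrative):

```
import numpy as np

def local_shift_sample(pos_A, pos_B, neg_A, neg_B, p_target, rng):
    """Reach prevalence `p_target` by varying only the number of positives drawn
    from category A; every other group size stays at its reference value."""
    n_pos_B, n_neg_A, n_neg_B = 83, 83, 167          # reference counts (500-doc sample)
    n_neg = n_neg_A + n_neg_B
    # solve p = (n_pos_A + n_pos_B) / (n_pos_A + n_pos_B + n_neg) for n_pos_A
    n_pos_A = max(0, int(round(p_target * n_neg / (1 - p_target) - n_pos_B)))
    draw = lambda pool, n: rng.choice(pool, size=n, replace=False)
    docs = np.concatenate([draw(pos_A, n_pos_A), draw(pos_B, n_pos_B),
                           draw(neg_A, n_neg_A), draw(neg_B, n_neg_B)])
    labels = np.concatenate([np.ones(n_pos_A + n_pos_B), np.zeros(n_neg)])
    return docs, labels

rng = np.random.default_rng(0)
pools = [np.arange(i * 10_000, (i + 1) * 10_000) for i in range(4)]  # synthetic index pools
docs, labels = local_shift_sample(*pools, p_target=0.6, rng=rng)
print(len(docs), labels.mean())   # the sample size varies with p_target; prevalence is 0.6
```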
Figure 10: Conceptual diagram illustrating our local covariate shift protocol.
#### 5.5.2 Results
The results we have obtained for local covariate shift (orange boxes) are displayed in Figure 11. For easier comparison, this plot also shows results for the cases in which the class-conditional distributions are constant across the training data and the test data (blue boxes), i.e., when the type of shift is prior probability shift.
Consistently with the results of Section 5.3.2, most quantification algorithms (except for CC and PCC) work reasonably well (see the blue boxes) when the class-conditional distributions are invariant across the training and the test data. Instead, when the class-conditional distributions change, the performance of these algorithms tends to degrade. This should come as no surprise
Figure 11: Results for _local_ covariate shift expressed in terms of MAE. Blue boxes represent the situation in which \(P_{L}(X|Y)=P_{U}(X|Y)\) while orange boxes represent the situation in which \(P_{L}(X|Y)\neq P_{U}(X|Y)\) because \(P_{L}(X|Y=1)\neq P_{U}(X|Y=1)\). The degree of shift in the priors is shown along the x-axis and is computed as \((p_{U}-p_{L})\) rounded to two decimals.
given that all the adjustments implemented in the quantification methods we consider (as well as in all other methods we are aware of) rely on the assumption that the class-conditional distributions are invariant. The exceptions to this are CC and PCC, the only methods that do not attempt to adjust the priors. What comes instead as a surprise is not only that the performance of CC and PCC does not degrade, but that this performance seems to improve (i.e., the orange boxes in the extremes are systematically below the blue boxes for CC and PCC). This apparently strange behaviour can be explained as follows. When \(p_{U}\ll p_{L}\), CC and PCC will naturally tend to overestimate the true prevalence. However, in this case, the positive examples in the test sample happen to mostly be from category \(B\). Since the underlying classifier has been trained on a dataset in which the positives from category \(A\) were more abundant (\(\frac{2}{3}\)) than the positives from category \(B\) (\(\frac{1}{3}\)), the classifier has more problems in classifying positives from \(B\) than from category \(A\). This has the consequence that the overestimation brought about by CC and PCC is partially compensated (that is, positive examples from \(B\) tend to be misclassified as negatives more often), and thus the final \(\hat{p}_{U}\) gets closer to the real value \(p_{U}\). On the other hand, when \(p_{U}\gg p_{L}\), CC and PCC will tend to underestimate the true prevalence. However, in this scenario positive examples mostly belong to category \(A\), which the classifier identifies as positives more easily (since it has been trained on a relatively higher number of positives from \(A\)), thus increasing the value of \(\hat{p}_{U}\) and making it closer to the actual value \(p_{U}\).
A fundamental conclusion of this experiment is that, when the class-conditional distributions change, the adjustment implemented by the most sophisticated quantification methods can become detrimental. This is important since, in real applications, there is no guarantee that the type of shift a system is confronted with is prior probability shift, nor is there any general way for reliably identifying the type of shift involved. This experiment also shows how the bias that CC and PCC inherit from the underlying classifier can, under some circumstances, be "serendipitously" mitigated, at least in part. (We will see a similar example when studying concept shift in Section 5.6.)
### Concept Shift
#### 5.6.1 Evaluation Protocol
In order to simulate concept shift we exploit the ordinal nature of the original 5-star ratings. Specifically, we simulate changes in the concept of "being positive" by varying, in a controlled manner, the threshold above which a review is considered positive. The protocol we propose thus comes down to varying the cut points in the training set (\(c^{L}\)) and in the test set (\(c^{U}\)) _independently_, so that the notion of what is considered positive differs between the two sets. For example, by imposing a training cut point of \(c^{L}=1.5\) we are mapping 1-star to the negative class, and 2-, 3-, 4-, and 5-stars to the positive class. In other words, everything but strongly negative reviews are considered positive in the
training set. If, at the same time, we set the test cut point at \(c^{U}=4.5\), we are generating a large shift in the concept of "being positive", since in the test set only strongly positive reviews (5 stars) will be considered positive. For 5 classes there are 4 possible cut points \(\{1.5,2.5,3.5,4.5\}\); the protocol explores all combinations systematically (see Algorithm 4).
We use the signed difference \((c^{L}-c^{U})\) as an indication of the degree of concept shift, resulting in an integer value in the range \([-3,3]\); note that \((c^{L}-c^{U})=0\) corresponds to a situation in which there is no concept shift.
It is also worth noting that this protocol _does not affect_ \(P(X)\), which remains constant across the training distribution and the test distribution. Conversely, varying the cut point has a direct effect on \(P(Y)\), which means that by establishing different cut points for the training and the test datasets we are indirectly inducing a change in the priors. In order to allow for controlled variations in the priors, we depart from a situation in which all five ratings have the same number of examples, i.e., we impose \(p(\mathcal{Y}_{\star})=(0.2,0.2,0.2,0.2,0.2)\) onto both the training set and the test set. This guarantees that a change in a cut point \(c\in\{1.5,2.5,3.5,4.5\}\) gives rise to a binary set with (positive) prevalence values in \(\{0.2,0.4,0.6,0.8\}\), which in turn implies a difference in priors \((p_{L}-p_{U})\in\{-0.6,-0.4,\ldots,0.4,0.6\}\).
For this experiment, the number of test samples used for evaluation amounts to \(4\times 4\times 50\times 10\) = 8,000 for each quantification algorithm we test.
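The binarisation step at the heart of this protocol amounts to thresholding the 5-star ratings with independent cut points for the training and the test data; a minimal sketch (ours) is:

```
import numpy as np

def binarise(stars, cut_point):
    """Map 1-5 star ratings to binary labels: positive iff rating > cut_point."""
    return (np.asarray(stars) > cut_point).astype(int)

stars = np.array([1, 2, 3, 4, 5])
print(binarise(stars, cut_point=1.5))   # [0 1 1 1 1]: only 1-star reviews are negative
print(binarise(stars, cut_point=4.5))   # [0 0 0 0 1]: only 5-star reviews are positive
```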
#### 5.6.2 Results
The results for our simulation of concept shift are shown in Figure 12. The performance of all methods decreases as the degree of concept shift increases,
i.e., when \(c^{L}<c^{U}\) (resp., \(c^{L}>c^{U}\)) all methods tend to overestimate (resp., underestimate) the true prevalence. That no method could fare well under concept shift was expected, for the simple reason that none of these methods has been designed to confront arbitrary changes in the functional relationship between covariates and classes. These results deserve no further discussion, and are here reported only for the sake of completeness (we omit the corresponding table, though).
What instead deserves some discussion is the fact that concept shift might, under certain circumstances, lead to erroneous interpretations of the relative merits of quantification methods. This confusion might arise when the _bias_ of a quantifier gets partially compensated by the variation in the prior resulting from the change in the concept. This situation is reproduced in Figure 13, where we impose \(p_{L}=0.5\) and \(p_{U}=0.75\). Consider the errors produced
Figure 12: Results for concept shift. The error measure is MAE and the degree of concept shift is computed as \(c_{tr}-c_{tst}\).
by both methods when \((c^{L}-c^{U})=0\), i.e., when \(c^{L}=c^{U}\). Note that in this case, there is no concept shift, but there is prior probability shift. (Recall that we chose \(p_{L}=0.5\) and \(p_{U}=0.75\) for this experiment). We know that PCC tends to deliver biased estimators, while SLD instead does not. This is witnessed by the fact that PCC yields an error close to MAE=0.15 (it tends to underestimate the test prevalence), while SLD obtains a very low error instead; let us call this bias the "global" bias. As we separate the cut points, we introduce a form of bias (a "local" bias) that interacts with the global one. For instance, imagine we train our classifier with 1-star and 2-stars acting as negative labels and (3, 4, 5) acting as positive ones. Assume that in test we instead have (1, 2, 3) stars acting as the negative labels and only (4, 5) as the positives. In this case, the classifier will now tend to classify as positive the test examples with 3 stars. This local overestimation will partially compensate for the global underestimation. (An analogous reasoning applies in the other direction as well.) Note that such an improvement is accidental, and attributing any merit to the quantifier for this would be misleading.
## 6 Conclusions
Since the goal of quantification is estimating class prevalence, most previous efforts in the field have focused on assessing the performance of quantification systems in situations characterised by a shift in class prevalence values, i.e., by prior probability shift; in the quantification literature other types of dataset shift have received less attention, if any. In this paper we have proposed new evaluation protocols for simulating different types of dataset shift in a controlled manner, and we have used them to test the robustness to these types of shift of several representative methods from the quantification literature. The experimental evaluation we have carried out has brought about some interesting findings.
Figure 13: Results for concept shift with forced values for \(p_{L}=0.5\) and \(p_{U}=0.75\). The error measure is MAE and the degree of concept shift is computed as \((c_{tr}-c_{tst})\).
The first such finding is that many quantification methods are robust to prior probability shift but not to other types of dataset shift. When the simplifying assumptions that characterise prior probability shift (e.g., that the class-conditional densities remain unaltered) are not satisfied, all the tested methods (including SLD, a top performer under prior probability shift) experience a marked degradation in performance.
A second observation is that, while previous theoretical studies indicate that PCC should be the best quantification method for dealing with covariate shift, our experiments reveal that its use should only be recommended when the class label proportions are expected not to change substantially (a setting that we refer to as _pure_ covariate shift).
Such a setting, though, is fairly uninteresting in real-life applications, and our experiments show that other methods (particularly: SLD and PACC) are preferable to PCC when covariate shift is accompanied by a change in the priors. However, even SLD becomes unstable under certain conditions in which both covariates and labels change. We argue that such a setting, which we have called _local_ covariate shift, shows up in many applications of interest (e.g., prevalence estimation of plankton subspecies in sea water samples (Gonzalez et al., 2019), or seabed cover mapping (Beijbom et al., 2015)), in which finer-grained unobserved classes are grouped into coarser-grained observed classes.
Finally, our results highlight the limitations that all quantification methods exhibit when coping with concept shift. This was to be expected since no method can adapt to arbitrary changes in the functional relationship between covariates and classes without the aid of external information. The same batch of experiments also shows that concept shift may induce a change in the priors that can partially compensate the bias of a quantifier; however, such an improvement is illusory and accidental, and it is difficult to envision clever ways for taking advantage of this phenomenon.
Possible directions for future work include extending the protocols we have devised to other specific types of shift that may be application-dependent (e.g., shifts due to transductive active learning (Kottke et al., 2022), to oversampling of positive training examples in imbalanced data scenarios (Moreo et al., 2016), to concept shifts in cross-lingual applications), and to types of quantification other than binary (e.g., multiclass, ordinal, multi-label). The goal of such research, as well of the research presented in this paper, is to allow a correct evaluation of the potential of different quantification methods when confronted with the different ways in which the unlabelled data we want to quantify on differs from the training data, and to stimulate research in new quantification methods capable of tackling the types of shift that current methods are insufficiently equipped for.
The work of the 1st author has been funded by MINECO (Ministerio de Economia y Competitividad) and FEDER (Fondo Europeo de Desarrollo Regional), grant PID2019-110742RB-I00 (MINECO/FEDER), and by Campus de Excelencia Internacional in collaboration with Santander Bank in the framework of the financial aid for mobility of excellence for teachers and researchers at the University of Oviedo. The work by the 2nd and 3rd authors has been supported by the SoBigData++ project, funded by the European Commission (Grant 871042) under the H2020 Programme INFRAIA-2019-1, by the AI4Media project, funded by the European Commission (Grant 951911) under the H2020 Programme ICT-48-2020, and by the SoBigData.it, FAIR, and QuaDaSh projects funded by the Italian Ministry of University and Research under the NextGenerationEU program. The authors' opinions do not necessarily reflect those of the funding bodies.
|
2302.13208 | The wave operator representation of quantum and classical dynamics | The choice of mathematical representation when describing physical systems is
of great consequence, and this choice is usually determined by the properties
of the problem at hand. Here we examine the little-known wave operator
representation of quantum dynamics, and explore its connection to standard
methods of quantum dynamics. This method takes as its central object the square
root of the density matrix, and consequently enjoys several unusual advantages
over standard representations. By combining this with purification techniques
imported from quantum information, we are able to obtain a number of results.
Not only is this formalism able to provide a natural bridge between phase and
Hilbert space representations of both quantum and classical dynamics, we also
find the wave operator representation leads to novel semiclassical
approximations of both real and imaginary time dynamics, as well as a
transparent correspondence to the classical limit. This is demonstrated via the
example of quadratic and quartic Hamiltonians, while the potential extensions
of the wave operator and its application to quantum-classical hybrids is
discussed. We argue that the wave operator provides a new perspective that
links previously unrelated representations, and is a natural candidate model
for scenarios (such as hybrids) in which positivity cannot be otherwise
guaranteed. | Gerard McCaul, Dmitry V. Zhdanov, Denys I. Bondar | 2023-02-26T02:21:31Z | http://arxiv.org/abs/2302.13208v4 | # The wave operator representation of quantum and classical dynamics
###### Abstract
The choice of mathematical representation when describing physical systems is of great consequence, and this choice is usually determined by the properties of the problem at hand. Here we examine the little-known wave operator representation of quantum dynamics, and explore its connection to standard methods of quantum dynamics, such as the Wigner phase space function. This method takes as its central object the _square root_ of the density matrix, and consequently enjoys several unusual advantages over standard representations. By combining this with purification techniques imported from quantum information, we are able to obtain a number of results. Not only is this formalism able to provide a natural bridge between phase and Hilbert space representations of both quantum and classical dynamics, we also find the wave operator representation leads to novel semiclassical approximations of both real and imaginary time dynamics, as well as a transparent correspondence to the classical limit. This is demonstrated via the example of quadratic and quartic Hamiltonians, while the potential extensions of the wave operator and its application to quantum-classical hybrids is discussed. We argue that the wave operator provides a new perspective that links previously unrelated representations, and is a natural candidate model for scenarios (such as hybrids) in which positivity cannot be otherwise guaranteed.
## I Introduction
When describing physical dynamics mathematically, there exist a number of equivalent representations that one can choose between. This plethora of potential representations is particularly pronounced in quantum dynamics. Besides the Schrodinger and Liouville equations, there also exist more esoteric formulations such as the Wigner-Weyl phase space representation [1; 2; 3; 4; 5], or the Feynman path integral [6]. Each of these carries its own strengths and weaknesses. For example, the phase space representation is commonly used in quantum chemistry and optics [7; 8], while path integrals find a natural home in the description of open system dynamics via the influence functional [9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. On a more fundamental level, the choice of representation can change the degree to which the correspondence principle is manifestly present. To draw again on the example of the path integral and Wigner function, the \(\hbar\to 0\) limit makes clear that for the former the only path of finite weight is that corresponding to the classical action [12], while the equation of motion for the Wigner function reduces to the classical Poisson bracket [8; 19].
In the realm of quantum dynamics, one's choice of representation can often lead to issues of interpretation. For instance, the measure of a path integral is only finitely additive and therefore not guaranteed to converge [20], while the Wigner function exhibits negativity. This is particularly problematic, as this potential negativity means it is uninterpretable as a density, despite being derived from one. In the case of pure states, this difficulty was resolved with the demonstration that the Wigner function should be interpreted as a phase space probability amplitude. This is in direct analogy with the Koopman-von Neumann (KvN) representation of classical dynamics [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33] which explicitly admits a wavefunction on phase space, and which the Wigner function of a pure state corresponds to in the classical limit. The extension of this interpretation to mixed states has to date been lacking however, given that such states must be described by densities and therefore lack a direct correspondence to wavefunctions.
Here we address this issue, by employing the little-known _wave operator_ formalism. Such a representation of dynamics has been deployed in a number of contexts, including open systems [34], quantum holonomy in density matrices [35], the development of phase-space wavefunctions [36], as well as nonlinear dynamical models [37; 38; 39]. In fact, the motivation for a "square root" of the density and the advantages it provides can be found even when not explicitly referenced. For example, the recently developed Ensemble Rank Truncation method (ERT) has at its heart a method for representing a Lindbladian evolution of a density in terms of a weighted sum of wavefunctions [40]. The wave operator has also been used for foundational research [41; 42; 43], but here we extend this to demonstrate that when combined with purification techniques from quantum information, it provides a natural bridge between the Hilbert space representation of quantum dynamics, the phase space Wigner representation, as well as KvN dynamics [44]. Through this, we are able to derive not only a consistent interpretation of mixed states in the Wigner representation, but also establish a connection between the commonly utilised phase space methods of quantum chemistry, and quantum information. We also find that this representation of quantum dynamics leads to novel semiclassical approximations of both real and imaginary time dynamics, as well as a transparent correspondence to the classical limit.
The remainder of this paper is outlined as follows - Sec.III borrows from the field of quantum information to express the wave operator in a purified form. This
is then exploited in Sec.IV to introduce Bopp operators into the wave operator description. Equipped with this formulation, in Sec.V it is possible to identify the phase-space representation of the wave operator with the Wigner function, while in Sec.VI we use it to demonstrate that the classical limit of the wave operator description reduces exactly to the KvN representation of classical dynamics. Sec.VII then applies the same technique to the imaginary time Bloch equation, where we are able to derive a semi-classical correction to the equilibrium state of a system, and illustrate its effect using the examples of a quadratic and quartic oscillator. The paper then concludes with a summary of key findings, as well as outlining both open questions and future research directions.
## II The wave operator
We begin our treatment by making explicit a freedom present in the Liouville equation. This describes the dynamics for the density matrix \(\hat{\rho}\) of a quantum system:
\[i\hbar\partial_{t}\hat{\rho}=[\hat{H},\hat{\rho}]\equiv\hat{H}\hat{\rho}-\hat {\rho}\hat{H}, \tag{1}\]
where \(\hat{H}\) is a self-adjoint Hamiltonian. The expectation value of an observable \(\hat{O}\) is obtained as
\[\langle O\rangle=\text{Tr}(\hat{\rho}\hat{O}). \tag{2}\]
Let us first assume that the density matrix can be decomposed into the form:
\[\hat{\rho}=\hat{\Omega}\hat{\Omega}^{\dagger}, \tag{3}\]
where in what follows we shall refer to \(\hat{\Omega}(t)\) as the _wave operator_. Following this assignation, we might ask what form the dynamics of \(\hat{\Omega}(t)\) can take while remaining consistent with both Eq. (1) and Eq. (3). We find that the most general form of evolution permitted is
\[i\hbar\partial_{t}\hat{\Omega}=[\hat{H},\hat{\Omega}]-\hat{\Omega}\hat{F}, \tag{4}\]
where \(\hat{F}\) is an arbitrary self-adjoint operator. It is easy to show from this that
\[i\hbar\partial_{t}(\hat{\Omega}\hat{\Omega}^{\dagger})=[\hat{H},\hat{\Omega} \hat{\Omega}^{\dagger}] \tag{5}\]
is satisfied at all times if it is satisfied at a single moment (e.g., \(t=0\)). Consequently, the Liouville dynamics described by Eq. (1) may instead be described via the wave operator using Eq. (4), together with a prescription for expectations:
\[\langle O\rangle=\text{Tr}(\hat{\Omega}^{\dagger}\hat{O}\hat{\Omega}). \tag{6}\]
The principal advantages of expressing a quantum system's dynamics in terms of \(\hat{\Omega}(t)\) rather than \(\hat{\rho}(t)\) are two-fold. First, any dynamics using \(\hat{\Omega}(t)\) are guaranteed to preserve positivity on the level of the density. Such a property means that we are free to choose \(\hat{F}\) in such a way that \(\hat{\Omega}(t)\) may be highly non-Hermitian. The special case of \(\hat{F}=0\) of the wave operator description has been studied in [35; 43], but the ability to arbitrarily bias an evolution lends itself to numerical development, i.e. \(\hat{F}\) can be chosen such that the dynamics of \(\hat{\Omega}(t)\) can be either Schrodinger or Liouville like. A concrete example taking advantage of this freedom may be found in [34], where \(\hat{F}\) is chosen so as to maintain a lower triangular shape for \(\hat{\Omega}\), and thus minimise the number of coefficients that must be propagated.
To understand the physical meaning of \(\hat{F}\), we can rewrite equation (4) for small \(\delta t\) as
\[\hat{\Omega}(t+\delta t)=e^{-i\delta t\hat{H}/\hbar}\hat{\Omega}(t)e^{i\delta t \hat{H}/\hbar}e^{i\delta t\hat{F}/\hbar}+O(\delta t^{2}). \tag{7}\]
If \(\hat{F}\neq 0\) and \(\hat{\Omega}(t)\) is a non-negative operator, then equation (7) is a polar decomposition of \(\hat{\Omega}(t+\delta t)\). \(\hat{F}\) may therefore be interpreted as the generator of the "phase" of the non-self-adjoint wave operator.
The second advantage of the wave operator formalism is conceptual. Specifically, we shall see that when employed in concert with the technique of canonical purification, we obtain both a direct correspondence to the Wigner phase function and a generally applicable procedure for taking the classical limit of a quantum system. It is hoped that ultimately the combination of these two properties will allow for a physically consistent model of a quantum-classical hybrid, but in the present work we restrict ourselves to the context of a closed system, where we are able to demonstrate the aforementioned classical limit.
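As a minimal numerical illustration of these points (our own sketch, not part of the original treatment): for time-independent \(\hat{H}\) and \(\hat{F}\), Eq. (4) integrates to \(\hat{\Omega}(t)=e^{-i\hat{H}t/\hbar}\,\hat{\Omega}(0)\,e^{i(\hat{H}+\hat{F})t/\hbar}\), so the induced density follows the Liouville evolution of Eq. (1) while remaining positive by construction, regardless of the choice of \(\hat{F}\):

```
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, t, hbar = 4, 1.0, 1.0
rand = lambda: rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
herm = lambda a: (a + a.conj().T) / 2

H, F = herm(rand()), herm(rand())   # F is an arbitrary self-adjoint "phase" generator
Omega0 = rand()                     # generic (non-Hermitian) initial wave operator
rho0 = Omega0 @ Omega0.conj().T

# closed-form solutions of Eq. (4) and Eq. (1) for time-independent H and F
Omega_t = expm(-1j * H * t / hbar) @ Omega0 @ expm(1j * (H + F) * t / hbar)
rho_t = expm(-1j * H * t / hbar) @ rho0 @ expm(1j * H * t / hbar)

print(np.allclose(Omega_t @ Omega_t.conj().T, rho_t))                  # same density
print(np.linalg.eigvalsh(Omega_t @ Omega_t.conj().T).min() >= -1e-12)  # positivity preserved
```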
## III Canonical purification of the wave operator
In this section we will establish a close link between the proposed wave operator description of quantum mechanics and the notion of purification in quantum information theory (see chapter 5 in [45]). Expressing the wave operator in a purified form will later allow for the introduction of Bopp operators, and the establishment of a classical limit for the formalism. To perform the purification, we first choose an arbitrary orthogonal time-independent basis \(\{|k\rangle\}\subset\mathcal{H}\) in a Hilbert space \(\mathcal{H}\). This allows us to define a mapping from an operator \(\hat{\Omega}\) acting on \(\mathcal{H}\) to a vector \(|\hat{\Omega}\rangle\in\mathcal{H}\otimes\mathcal{H}\) as
\[|\hat{\Omega}\rangle\equiv\sum_{k}\hat{\Omega}\,|k\rangle\otimes|k\rangle=( \hat{\Omega}\otimes\hat{1})\,|\omega\rangle\,, \tag{8}\]
where
\[|\omega\rangle=\sum_{k}|k\rangle\otimes|k\rangle\,. \tag{9}\]
The transformation given by Eq. (8) is closely related to the concept of canonical purification (see page 166 of
[45]), while in linear algebra, the mapping is also known as row-major vectorization. Since Eq. (8) is a purification of the density matrix \(\hat{\rho}\), the latter can be recovered as a partial trace,
\[\hat{\rho}=\text{Tr}^{\prime}\ket{\hat{\Omega}}\bra{\hat{\Omega}}\equiv\sum_{k}( \hat{1}\otimes\bra{k})\ket{\hat{\Omega}}\bra{\hat{\Omega}}(\hat{1}\otimes\ket{k }). \tag{10}\]
A number of important identities can be derived from the definition of Eq. (8)
\[\ket{\hat{A}\hat{\Omega}}=(\hat{A}\otimes\hat{1})\ket{\hat{\Omega}}, \tag{11a}\] \[\bra{\hat{A}}\!\!\hat{B}=\text{Tr}(\hat{A}^{\dagger}\hat{B}),\] (11b) \[\ket{\hat{\Omega}\hat{A}}=(\hat{1}\otimes\hat{A}^{T})\ket{\hat{ \Omega}}, \tag{11c}\]
where \(\hat{A}^{T}\) denotes the transpose of \(\hat{A}\). The latter identity Eq. (11c) is a consequence of the following "ricochet" property:
\[\hat{A}\otimes\hat{1}\ket{\omega}=\sum_{ijk}a_{ij}\ket{i}\langle j|k\rangle\otimes\ket{k}=\sum_{ijk}a_{ij}\delta_{jk}\ket{i}\otimes\ket{k}\] \[=\sum_{ik}a_{ik}\ket{i}\otimes\ket{k}=\sum_{ik}\ket{k}\otimes a_{ki}\ket{i}\] \[=\sum_{ijk}\ket{k}\otimes a_{ji}\delta_{jk}\ket{i}=\sum_{ijk}\ket{k}\otimes a_{ji}\ket{i}\langle j|k\rangle\] \[=\hat{1}\otimes\hat{A}^{T}\ket{\omega}. \tag{12}\]
When this is combined with the fact that any operators of the form \(\hat{1}\otimes\hat{A}\) and \(\hat{B}\otimes\hat{1}\) will commute, we obtain Eq. (11c).
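In the standard computational basis Eq. (8) reduces to row-major vectorization, and the identities of Eq. (11) can be verified numerically in a few lines (our own sketch):

```
import numpy as np

rng = np.random.default_rng(1)
d = 3
rand = lambda: rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A, B, Omega = rand(), rand(), rand()
I = np.eye(d)
vec = lambda M: M.reshape(-1)        # row-major vectorization, Eq. (8)

print(np.allclose(vec(A @ Omega), np.kron(A, I) @ vec(Omega)))        # Eq. (11a)
print(np.isclose(np.vdot(vec(A), vec(B)), np.trace(A.conj().T @ B)))  # Eq. (11b)
print(np.allclose(vec(Omega @ A), np.kron(I, A.T) @ vec(Omega)))      # Eq. (11c)
```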
By combining Eq. (4) with Eq. (11), it is possible to express the evolution of the wave operator state in a Schrodinger-like form
\[i\hbar\partial_{t}\ket{\Omega} =\left(\hat{H}\otimes\hat{1}-\hat{1}\otimes(\hat{H}+\hat{F})^{T} \right)\ket{\Omega}, \tag{13}\] \[\langle O\rangle =\bra{\Omega}\!\hat{O}\otimes\hat{1}\ket{\Omega}. \tag{14}\]
The free choice of \(\hat{F}\) also means that this evolution can correspond either to a Liouville-type commutator evolution when \(\hat{F}=0\), or a Schrodinger equation with an ancillary space when \(\hat{F}=-\hat{H}\). The dynamics denoted by Eq. (13) can also be arrived at from a Dirac-Frenkel variational principle [46],
\[\delta\Re\int_{t_{i}}^{t_{f}}\bra{\Omega(t)}\!i\hbar\partial_{t}-\left(\hat{H }\otimes\hat{1}-\hat{1}\otimes(\hat{H}+\hat{F})^{T}\right)\ket{\Omega(t)}dt=0, \tag{15}\]
where the choice of the "phase generator" \(\hat{F}\) in (13) does not affect the values of the observables since
\[i\hbar\partial_{t}\bra{O}=\bra{\Omega}\!\ket{\hat{O},\hat{H}}\otimes\hat{1} \ket{\Omega}. \tag{16}\]
The choice of an orthonormal basis in Eq. (8) to construct the purification of the wave operator is equivalent to fixing the "phase generator" \(\hat{F}\), and hence bears no observational consequences. If \(\ket{\Omega}\) and \(\ket{\Omega^{\prime}}\) denote two purifications of \(\hat{\Omega}\) corresponding to the different bases \(\{\ket{k}\}\) and \(\{\ket{k^{\prime}}\}\), then there exists a unitary \(\hat{U}\) such that \(\ket{\Omega}=(\hat{1}\otimes\hat{U})\ket{\Omega^{\prime}}\)[45]. Then, Eq. (13) is invariant under the "gauge" transformation
\[\ket{\Omega}\rightarrow\ket{\Omega^{\prime}},\qquad\hat{F}\rightarrow\left( \hat{U}^{\dagger}(\hat{H}+\hat{F})^{T}\hat{U}+\hat{G}\right)^{T}\!-\!\hat{H}, \tag{17}\]
where the self-adjoint \(\hat{G}\) is defined as \(i\hbar\partial_{t}\hat{U}=\hat{U}\hat{G}\) (i.e. Stone's theorem) [47].
## IV Bopp operators for purified wave operators
Having defined the wave operator and its dynamics when represented as a purified state, we now show that this formalism provides a transparent method for the introduction of Bopp operators [48]. These not only allow one to transit between Hilbert and phase space representations of a quantum system, but also enable a classical limit to be taken transparently, as we shall find in a later section. For simplicity, hereafter we will consider a system with one degree of freedom, but the extension to the multi-dimensional case is trivial.
In anticipation of later developments, we shall refer to quantum coordinate and momentum variables as \(\hat{\mathbf{x}}\) and \(\hat{\mathbf{p}}\), where the bold font is used to indicate their status as non-commuting quantum operators, rather than vectorial notation. These will obey the Heisenberg canonical commutation relation
\[[\hat{\mathbf{x}},\hat{\mathbf{p}}]=i\hbar. \tag{18}\]
We will also assume that the operator functions \(H(\hat{\mathbf{x}},\hat{\mathbf{p}})\) and \(F(\hat{\mathbf{x}},\hat{\mathbf{p}})\) are represented in a Weyl-symmetrized form. We then introduce _the Bopp operators_ as
\[\hat{x} =\frac{1}{2}\left(\hat{1}\otimes\hat{\mathbf{x}}^{T}+\hat{ \mathbf{x}}\otimes\hat{1}\right), \qquad\hat{p} =\frac{1}{2}\left(\hat{\mathbf{p}}\otimes\hat{1}+\hat{1}\otimes \hat{\mathbf{p}}^{T}\right),\] \[\hat{\theta} =\frac{1}{\hbar}\left(\hat{1}\otimes\hat{\mathbf{x}}^{T}-\hat{ \mathbf{x}}\otimes\hat{1}\right), \qquad\hat{\lambda} =\frac{1}{\hbar}\left(\hat{\mathbf{p}}\otimes\hat{1}-\hat{1} \otimes\hat{\mathbf{p}}^{T}\right). \tag{19}\]
The inverse transformations read
\[\hat{\mathbf{x}}\otimes\hat{1} =\hat{x}-\frac{\hbar}{2}\hat{\theta}, \qquad\hat{\mathbf{p}}\otimes\hat{1}=\hat{p}+\frac{\hbar}{2}\hat{\lambda},\] \[\hat{1}\otimes\hat{\mathbf{x}}^{T} =\hat{x}+\frac{\hbar}{2}\hat{\theta}, \qquad\hat{1}\otimes\hat{\mathbf{p}}^{T}=\hat{p}-\frac{\hbar}{2}\hat{\lambda}. \tag{20}\]
The commutation relations of these Bopp operators can be calculated as (for example):
\[[\hat{x},\hat{p}]=\frac{1}{4}\left([\hat{\mathbf{x}},\hat{\mathbf{p}}]\otimes\hat{1}+\hat{1}\otimes[\hat{\mathbf{x}}^{T},\hat{\mathbf{p}}^{T}]\right), \tag{21}\] \[[\hat{x},\hat{\lambda}]=\frac{1}{2\hbar}\left([\hat{\mathbf{x}},\hat{\mathbf{p}}]\otimes\hat{1}-\hat{1}\otimes[\hat{\mathbf{x}}^{T},\hat{\mathbf{p}}^{T}]\right). \tag{22}\]
Conjugating the fundamental commutation relation yields the identity \([\hat{\mathbf{x}}^{T},\hat{\mathbf{p}}^{T}]=-i\hbar\), which means the Bopp operators obey the following algebra:
\[[\hat{x},\hat{p}] =[\hat{\theta},\hat{\lambda}]=0, \tag{23a}\] \[[\hat{p},\hat{\theta}] =[\hat{x},\hat{\lambda}]=i. \tag{23b}\]
With the help of the identities \(\hat{1}\otimes H(\hat{\mathbf{x}}^{T},\hat{\mathbf{p}}^{T})=H(\hat{1}\otimes\hat{ \mathbf{x}}^{T},\hat{1}\otimes\hat{\mathbf{p}}^{T})\) and \(H(\hat{\mathbf{x}},\hat{\mathbf{p}})\otimes\hat{1}=H(\hat{\mathbf{x}}\otimes 1, \hat{\mathbf{p}}\otimes\hat{1})\) (valid for any Weyl-symmetrized \(\hat{H}\)), the equations for the state dynamics and expectations read:
\[i\hbar\partial_{t}\ket{\Omega}= \hat{G}\ket{\Omega}, \tag{24}\] \[\hat{G}= H(\hat{x}-\frac{\hbar}{2}\hat{\theta},\hat{p}+\frac{\hbar}{2} \hat{\lambda})-H(\hat{x}+\frac{\hbar}{2}\hat{\theta},\hat{p}-\frac{\hbar}{2} \hat{\lambda})\] \[-F(\hat{x}+\frac{\hbar}{2}\hat{\theta},\hat{p}-\frac{\hbar}{2} \hat{\lambda}),\] (25) \[\langle O\rangle= \langle\Omega|O(\hat{x}-\frac{\hbar}{2}\hat{\theta},\hat{p}+ \frac{\hbar}{2}\hat{\lambda})|\Omega\rangle\,. \tag{26}\]
We note that Eqs. (24)-(26) have been derived in Ref.[49], but from an entirely different perspective.
Since \(\hat{x}\) and \(\hat{p}\) commute, they share a common eigenbasis
\[\hat{x}\ket{xp}=x\ket{xp},\qquad\hat{p}\ket{xp}=p\ket{xp}, \tag{27}\] \[\hat{1}\otimes\hat{1}=\int dxdp\ket{xp}\bra{xp}. \tag{28}\]
It follows from the commutator relationship (23) that
\[\bra{xp}\hat{x}\Omega\rangle=x\bra{xp}\Omega\,,\quad\bra{xp} \hat{\lambda}\Omega\rangle=-i\partial_{x}\bra{xp}\Omega\,,\] \[\bra{xp}\hat{p}\Omega\rangle=p\bra{xp}\Omega\,,\quad\bra{xp} \hat{\theta}\Omega\rangle=-i\partial_{p}\bra{xp}\Omega\,. \tag{29}\]
Hence,
\[i\hbar\partial_{t}\bra{xp}\Omega= \Big{(}H(x_{+},p_{-})-H(x_{-},p_{+})\] \[-F(x_{-},p_{+})\Big{)}\bra{xp}\Omega\,, \tag{30}\] \[\langle O\rangle= \int dxdp\bra{\Omega}xp\rangle\,O(x_{+},p_{-})\bra{xp}\Omega\,,\] (31) \[x_{\pm}= x\pm i\frac{\hbar}{2}\partial_{p},\;\;p_{\pm}=p\pm i\frac{\hbar} {2}\partial_{x}. \tag{32}\]
When \(F=0\) Eq. (30) coincides with the equation of motion for the Wigner function (see, e.g., Eq. (2.77) in [50]). In this case however, the original wave operator is not restricted to representing a pure state, meaning that Eq. (26) in conjunction with Eq. (24) extend the interpretation of the Wigner function as a wave function [8] to include the general case of mixed states.
## V The phase-space representation of the wave operator
In this section, we will provide an alternative derivation of Eq. (30) and Eq. (31). The Wigner-Weyl transformation of equations (4) and (6) read
\[i\hbar\partial_{t}\Omega(x,p)= H(x,p)\star\Omega(x,p)-\Omega(x,p)\star H(x,p)\] \[-\Omega(x,p)\star F(x,p), \tag{33}\] \[\langle O\rangle= \int dxdp\Omega(x,p)^{*}\star O(x,p)\star\Omega(x,p), \tag{34}\]
where \(\star\) denotes the Moyal product, \(H(x,p)\), \(\Omega(x,p)\), \(F(x,p)\), and \(O(x,p)\) are the Weyl symbols for the operators \(\hat{H}\), \(\hat{\Omega}\), \(\hat{F}\), and \(\hat{O}\), respectively. Utilizing the "lone star" identity \(\int f(x,p)\star g(x,p)dxdp=\int f(x,p)g(x,p)dxdp\) (see equation (16) in [51]) and
\[f(x,p)\star g(x,p)=f\left(x_{+},p_{-}\right)g(x,p), \tag{35}\] \[g(x,p)\star f(x,p)=f\left(x_{-},p_{+}\right)g(x,p), \tag{36}\]
(see, e.g., equations (12) and (13) in [51; 52]), we obtain
\[i\hbar\partial_{t}\Omega(x,p)= \Big{(}H(x_{+},p_{-})-H(x_{-},p_{+})\] \[-F(x_{-},p_{+})\Big{)}\Omega(x,p), \tag{37}\] \[\langle O\rangle= \int dxdp\,\Omega(x,p)^{*}O\left(x_{+},p_{-}\right)\Omega(x,p). \tag{38}\]
Comparing these two equations with Eq.(30) and Eq.(31), we conclude that \(\langle xp|\Omega\rangle\equiv\Omega(x,p)\), i.e., \(\langle xp|\Omega\rangle\) is the Wigner-Weyl transform of \(\hat{\Omega}\).
We can also recover a more direct interpretation of \(\Omega(x,p)\) in the case that \(W(x,p)\) is the Wigner function for a pure quantum state \(\hat{\rho}\). Recalling that purity implies \(W(x,p)\star W(x,p)=\frac{1}{2\pi\hbar}W(x,p)\) (see, e.g., equation (25) in [51]), one shows
\[\langle O\rangle= \int dxdp\,O(x,p)W(x,p)=\int dxdp\,O(x,p)\star W(x,p)\] \[=2\pi\hbar\int dxdp\,O(x,p)\star W(x,p)\star W(x,p)\] \[=2\pi\hbar\int dxdp\,W(x,p)O(x,p)\star W(x,p)\] \[=2\pi\hbar\int dxdp\,W(x,p)O\left(x_{+},p_{-}\right)W(x,p). \tag{39}\]
Since the Wigner function is real by construction, Eq. (39) is recovered from Eq. (38) and Eq. (37) in the case \(F\)=0 if
\[\Omega(x,p)=\sqrt{2\pi\hbar}W(x,p). \tag{40}\]
Eq. (39) and Eq. (40) therefore provide an alternative and much simpler derivation of the interpretation, put forth in [8], of the Wigner function for a pure quantum system as a Koopman-von Neumann wave function. In particular Eq. (39) and Eq. (40) coincide with Eq. (19) and Eq. (8) in [8]. In the general mixed case, we are still able to identify the wavevector with the Wigner function thanks to Eq. (30) and Eq. (31).
## VI The classical limit of the wave operator description
The proposed formalism also offers a direct route to the classical limit of quantum dynamics, where the Koopman-von Neumann representation of classical dynamics is naturally recovered. Beginning from Eq. (24), we first scale our arbitrary phase \(F\rightarrow\hbar F\), purely as a matter of convenience when taking the classical limit. Having done so, we then Taylor expand the Hamiltonian
around the Bopp operators:
\[H(\hat{x}\mp\frac{\hbar}{2}\hat{\theta},\hat{p}\pm\frac{\hbar}{2} \hat{\lambda})= H(\hat{x},\hat{p})\pm\frac{\hbar}{2}\partial_{p}H(\hat{x},\hat{p}) \hat{\lambda} \tag{41}\] \[\mp\frac{\hbar}{2}\partial_{x}H(\hat{x},\hat{p})\hat{\theta}+O( \hbar^{2}).\]
Inserting this into Eq. (24) and Eq. (25) we obtain
\[\hat{G}=\partial_{p}H(\hat{x},\hat{p})\hat{\lambda}-\partial_{x}H(\hat{x}, \hat{p})\hat{\theta}+F(\hat{x}+\frac{\hbar}{2}\hat{\theta},\hat{p}-\frac{\hbar }{2}\hat{\lambda})+O(\hbar^{2}). \tag{42}\]
Taking \(\lim_{\hbar\to 0}\hat{G}\) recovers the well-known KvN propagator, describing classical dynamics:
\[i\partial_{t}\left|\Omega\right>=\left[\partial_{p}H(\hat{x},\hat{p})\hat{ \lambda}-\partial_{x}H(\hat{x},\hat{p})\hat{\theta}+F(\hat{x},\hat{p})\right] \left|\Omega\right>. \tag{43}\]
We see that the phase generator in the classical limit corresponds to the arbitrary phase-space function one obtains in standard derivations of KvN [53; 54; 19; 21], which itself relates KvN to alternative dynamical equations such as Koopman-van Hove (KvH) [56; 57; 22].
The connection between Eq. (43) and the standard Liouville equation for the density can be made explicit by expressing this equation of motion in phase space:
\[\partial_{t}\Omega(x,p)=\{H(x,p),\Omega(x,p)\}-iF(x,p)\Omega(x,p), \tag{44}\]
where \(\{\cdot,\cdot\}\) indicates the Poisson bracket. Using \(\rho(x,p)=\left|\Omega(x,p)\right|^{2}\), we immediately recover the Liouville equation for the density.
It is also interesting to note that when expanding the right hand sides of Eqs. (24) and (41) in series in \(\hbar\), all terms corresponding to even powers of \(\hbar\) will cancel. An immediate consequence is that for quadratic Hamiltonians, the \(\hbar\to 0\) limit may only affect the arbitrary phase term \(F\). Otherwise, the wave operator and Wigner function formalisms share the same property that the quantum equations of motion for quadratic systems remain unchanged in the classical limit \(\hbar\to 0\).
## VII Wave Operator Representation of Thermal States
The wave operator formalism is also instructive when considering the quantum correction to equilibrium states. Recall that the density matrix for the Gibbs state at temperature \(T\)=\(\frac{1}{k\beta}\) can be found (up to normalisation) via imaginary time propagation starting from \(\beta\)=0:
\[\partial_{\beta}\hat{\rho}=-\frac{1}{2}\left(\hat{H}\hat{\rho}+\hat{\rho} \hat{H}\right),\ \ \hat{\rho}(0)=\hat{1} \tag{45}\]
The solution to this equation is clearly \(\hat{\rho}(\beta)=\mathrm{e}^{-\beta\hat{H}}\), which selects the ground state as \(\beta\rightarrow\infty\). The matching equation for the wave operator is:
\[\partial_{\beta}\hat{\Omega}=-\frac{1}{4}\left[\left(\hat{H}+i\hat{F}\right) \hat{\Omega}+\hat{\Omega}\left(\hat{H}+i\hat{F}\right)\right],\ \ \hat{\Omega}(0)=\hat{1} \tag{46}\]
Eq. (46) can be proved directly by showing that the density matrix \(\hat{\rho}=\hat{\Omega}\hat{\Omega}^{\dagger}\) is the solution to Eq. (45) when \(\hat{\Omega}\) solves Eq. (46)[58]. Just as in the real time case, the free term \(\hat{F}\) merely adds a phase to the state, and in what follows we shall take \(\hat{F}=0\).
By vectorizing the thermal state wave operator \(\hat{\Omega}\) according to Eq. (8), Eq. (46) can be restated in terms of Bopp operators as
\[\partial_{\beta}\left|\Omega\right>=-\frac{1}{4}\left[H(\hat{x}-\frac{\hbar}{ 2}\hat{\theta},\hat{p}+\frac{\hbar}{2}\hat{\lambda})+H(\hat{x}+\frac{\hbar}{2 }\hat{\theta},\hat{p}-\frac{\hbar}{2}\hat{\lambda})\right]\left|\Omega\right>. \tag{47}\]
Series expansion of the right hand side of Eq. (47) in \(\hbar\) gives
\[\partial_{\beta}\left|\Omega\right>=-\frac{1}{2}H(\hat{x},\hat{p})\left|\Omega\right>\] \[-\frac{\hbar^{2}}{16}\left(\partial_{x}^{2}H(\hat{x},\hat{p})\hat{\theta}^{2}+\partial_{p}^{2}H(\hat{x},\hat{p})\hat{\lambda}^{2}\right)\left|\Omega\right>+O(\hbar^{4}). \tag{48}\]
Thus, the lowest order quantum correction to the ground or thermal state is of order \(\hbar^{2}\), and only the terms corresponding to even powers of \(\hbar\) survive. This means that unlike in real time, Eq. (47) retains its form in the classical limit \(\hbar\to 0\) only for _linear_ Hamiltonians. It is also interesting to note that the semiclassical correction has the form of a Fokker-Planck like diffusive term when expressed in phase space.
In order to showcase the distinction between classical and quantum worlds within the wave operator formalism, let us compare the thermal states for benchmark one-dimensional quadratic and quartic systems. These will be described by the Hamiltonians
\[\hat{H}^{(n)}=\frac{1}{2}\hat{\mathbf{p}}^{2}+\frac{1}{2}\mathbf{\hat{x}}^{n}, \tag{49}\]
where \(n\)=2 and \(n\)=4, respectively. Let us consider three levels of approximation to Eq. (47): We label \(\left|\Omega_{q}^{(n)}\right>\) as the state obtained when evolving using the fully quantum Eq. (46). A semiclassical \(\left|\Omega_{s}^{(n)}\right>\) is derived from Eq. (48)by dropping the \(O(\hbar^{4})\) terms, and finally \(\left|\Omega_{c}^{(n)}\right>\) is the evolution using the \(\hbar\to 0\) limit of Eq. (48), which additionally wipes out \(O(\hbar^{2})\) terms. Fig. 1 illustrates these three types of evolution. As expected, the quantum and semiclassical evolutions are identical for an \(n=2\) quadratic Hamiltonian, but surprisingly produce only slightly different results for the quartic \(n=4\) Hamiltonian. In all cases however, the distinction between classical and quantum evolutions is clear, where in the former case the absence of a fundamental commutation relation (and therefore a zero-point energy) is reflected in both the ground state energy and \(\Delta x\Delta p\), as shown in Fig. 1.
## VIII Discussion
Here we have presented a novel representation of Hilbert space dynamics, where positivity is automatically preserved by performing dynamics on the square root of the density operator. One advantage of the present formalism is that it is set in Hilbert space with a Schrodinger-like equation of motion. Consequently it is possible to use all of the highly efficient tools developed for these dynamics (e.g. tensor network algorithms) when performing wave operator calculations. Furthermore, incorporating purification allows one to introduce Bopp operators to this formalism. This has resulted in a phase space representation of the wave operator, which in turn allows us to identify the Wigner function as the projection of the wave operator onto phase space.
Taking the classical limit of the wave operator formalism, we find that it corresponds to the KvN representation of classical dynamics. For quadratic Hamiltonians, this correspondence to classical dynamics is exact even before taking any \(\hbar\to 0\) limit. This mirrors similar results to be found in the path integral and Wigner representations of dynamics for quadratic systems. In the former case, a saddle point approximation ensures only paths corresponding to the classical action contribute to the propagator, while the Moyal star operation evolving the quasiprobability \(W(x,p)\) reduces to the Poisson bracket in the phase space representation.
When performing an analogous procedure in imaginary time, an \(O(\hbar^{2})\) correction distinguishes quantum and classical quadratic systems, suggesting the most significant difference between the quantum and classical regime is in the ground state that systems inhabit, rather than their real time dynamics. The fact that the semiclassical expansion of the imaginary time evolution exhibits a quadratic correction to the classical Hamiltonian is strikingly similar to another context in which the equilibrium state of a system is determined by a correction to its bare Hamiltonian. Specifically, when considering a system strongly coupled to an environment, its thermal state is described by a "Hamiltonian of mean force" that accounts for the environmental interaction. In those cases where this effective Hamiltonian is known [59; 60; 61; 10; 62], the correction to the bare Hamiltonian is _also_ quadratic rather than linear. It is tempting to speculate that these two phenomena may be related to each other.
There are a number of potential extensions to this formalism. For instance, the introduction of commuting Bopp operators in previous sections relies on the canonical commutation relation in an infinite dimensional space of operators [63]. Finite dimensional Hilbert spaces are more restrictive and generally do not allow introducing the analogs of Bopp operators with the desired commutation properties. Nevertheless, one might employ (for example) a Jordan-Schwinger map [64] to represent such a finite dimensional system. This would introduce an oscillator basis obeying canonical commutation relations in the continuum limit, and open a route to calculations analogous to those presented here.
One also might extend the wave operator machinery to dissipative dynamics. For example, it has been shown in Ref. [41] that a positive trace preserving map representing a wave-operator evolution has the form:
\[i\partial_{t}\hat{\Omega}{=}\sum_{k}\hat{A}_{k}\hat{\Omega}\hat{B}_{k}. \tag{50}\]
The purification representation of Eq. (50) describes an arbitrary unitary transformation of the state vector \(|\Omega\rangle\) and hence is capable of describing an arbitrary transformation of the density matrix \(\hat{\rho}\), similar to the standard Lindblad equation. Nevertheless, the precise connection to the latter is highly non-trivial: a conventional linear Lindblad equation will in general correspond to a highly nonlinear Eq. (50), where the operators \(\hat{A}_{k}\) and \(\hat{B}_{k}\) are functions of \(\hat{\Omega}\)[34].
More generally, the hunt for novel efficient representations for interacting systems is one of the chief motivations for the development of the wave operator formalism. Specifically, the fact that positivity is automatically preserved is of vital importance when attempting to construct a hybrid formalism, where a partial classical limit is taken on one part of an interacting system. The importance of developing such formalisms should not be understated, given that all our interactions with the quantum world must be mediated through essentially classical devices, which will themselves have a quantum backreaction. The growing sophistication of quantum technology demands we be able to accurately describe such phenomena
Figure 1: Comparison of the imaginary time dynamics for quantum, semiclassical and classical systems, in the cases of quadratic and quartic Hamiltonians. In both cases, the expected energy of the system is distinguished by the zero-point energy present in the quantum and semiclassical evolutions. Finally, the uncertainty relation in position and momentum is clearly respected in the quantum and semiclassical system, while in the classical system it approaches zero, reflecting the zero energy classical ground state.
via a quantum-classical hybrid. Traditionally, however, such hybrid representations map to the quantum density operator \(\hat{\rho}\), and the hybrid equations of motion derived (e.g. the AGK equation) [65; 66; 67; 68] do not necessarily preserve the positivity of the state [22], calling into question the physicality of the dynamics. This remains an area of active research [69; 70], with quantum computational approaches to this problem already being explored [71]. It is our hope that the wave operator formalism developed here will provide a useful tool in the ongoing development of hybrid system models.
|
2301.09700 | Improving dynamic collision frequencies: impacts on dynamic structure
factors and stopping powers in warm dense matter | Simulations and diagnostics of high-energy-density plasmas and warm dense
matter rely on models of material response properties, both static and dynamic
(frequency-dependent). Here, we systematically investigate variations in
dynamic electron-ion collision frequencies $\nu(\omega)$ in warm dense matter
using data from a self-consistent-field average-atom model. We show that
including the full quantum density of states, strong collisions, and inelastic
collisions lead to significant changes in $\nu(\omega)$. These changes result
in red shifts and broadening of the plasmon peak in the dynamic structure
factor, an effect observable in x-ray Thomson scattering spectra, and modify
stopping powers around the Bragg peak. These changes improve the agreement of
computationally efficient average-atom models with first-principles
time-dependent density functional theory in warm dense aluminum, carbon, and
deuterium. | Thomas W. Hentschel, Alina Kononov, Alexandra Olmstead, Attila Cangi, Andrew D. Baczewski, Stephanie B. Hansen | 2023-01-23T20:03:21Z | http://arxiv.org/abs/2301.09700v1 | Improving dynamic collision frequencies: impacts on dynamic structure factors and stopping powers in warm dense matter
###### Abstract
Simulations and diagnostics of high-energy-density plasmas and warm dense matter rely on models of material response properties, both static and dynamic (frequency-dependent). Here, we systematically investigate variations in dynamic electron-ion collision frequencies \(\nu(\omega)\) in warm dense matter using data from a self-consistent-field average-atom model. We show that including the full quantum density of states, strong collisions, and inelastic collisions leads to significant changes in \(\nu(\omega)\). These changes result in red shifts and broadening of the plasmon peak in the dynamic structure factor, an effect observable in x-ray Thomson scattering spectra, and modify stopping powers around the Bragg peak. These changes improve the agreement of computationally efficient average-atom models with first-principles time-dependent density functional theory in warm dense aluminum, carbon, and deuterium.
## I Introduction
In high-energy-density plasmas such as those occurring in inertial confinement fusion (ICF) implosions, stellar atmospheres, and planetary interiors, atomic-scale collisions between electrons and ions have a cascading impact at longer length and time scales: collisions mediate transport properties, dampen collective plasma oscillations, and ultimately contribute to the particle and energy transport processes that govern plasma evolution.
In the static (zero-frequency) limit, collision frequencies inform properties like electrical and thermal conductivities that can impact instability development in ICF implosions [1]. Dynamic (frequency-dependent) collisions influence plasma response functions that influence measurable quantities like x-ray Thomson Scattering (XRTS) spectra [2] and stopping powers [3; 4]. Stopping powers are especially important for ICF since they mediate the velocity-dependent return of energy from fast fusion products back into the fusion fuel and play a critical role in self-heating and ignition [5; 6; 7].
While there is no direct observable associated with dynamic collision frequencies, their static limit is closely tied to measurable electrical conductivities [8; 9], and their dynamic behavior can be constrained with x-ray Thomson Scattering (XRTS) measurements [10]. However, direct, benchmark-quality measurements of these transport and observable properties are difficult because they require preparing, independently characterizing, and interrogating uniform high-energy-density samples [3; 4; 11]. Calculating these properties is also challenging, particularly in the warm dense matter (WDM) regime (materials near solid densities with temperatures near the Fermi energy) since it requires accounting for many different physical effects including partial ionization, screening, degeneracy, and ion coupling. Many models of collisional processes in use today rely on approximations drawn from plasma physics (e.g. weak scattering, ideal electron distributions, and Coulomb logarithms [12]) that are known to be inaccurate for WDM.
Fortunately, sophisticated first-principles models have been developed that can self-consistently follow the complete electronic structure and ionic motion of 10s-100s of atoms in a WDM state. These models are based on density functional theory (DFT) [13; 14] and provide reliable predictions for equations of state and static transport properties [15; 16]. Additionally, real-time time-dependent DFT (TDDFT) [17; 18; 19; 20] can model the dynamic response of materials to incident electromagnetic waves and charged particles, offering direct access to many transport and observable properties including stopping powers [21; 22; 23; 24; 25; 26] and dynamic structure factors (DSFs) [27; 28; 29].
While first-principles TDDFT models can provide accurate predictions for material properties, like conventional DFT, their computational cost grows with temperature in WDM conditions and becomes even more expensive at higher temperatures and for higher Z species. They are thus generally unsuitable for building the extensive tables required for simulations of many astrophysical and laboratory plasmas. Instead, researchers often rely on simpler models, such as DFT-based average atom (AA) models [30; 31; 32; 33], to populate wide-ranging tables of plasma properties. These simpler models are not only less accurate than multi-atom (TD)DFT, but typically also require external models like the Ziman formula [34; 35; 36] for calculations of static collision frequencies or the random phase approximation (RPA) to account for dynamic effects in scattering [2; 37; 38] and stopping powers [39; 40].
DFT-AA models can go beyond the RPA by including static collisions and local field corrections [41; 42; 43] or dynamic collision frequencies \(\nu(\omega)\) in their calculations of dynamic response functions [44; 45; 10] using the Mermin
modification to the RPA [46]. However, the accuracy of these corrections has not been rigorously assessed. Further, most such implementations rely on approximations such as the uniform electron gas (UEG) [47; 48; 49; 50; 51] or Born collision cross sections [52; 10; 44], neither of which is valid in the WDM regime: the UEG does not capture structure in the continuum electron density of states (DOS), and Born cross sections do not capture the strong collisions that dominate scattering processes near the Fermi energy in partially ionized atoms. Corrections beyond the Born approximation have been limited to combining strong collisions calculated using ladder diagrams with dynamic screening via the Gould-DeWitt approach [53; 54; 10] or normalizing Born collision frequencies to match Ziman calculations using T-matrix collision frequencies at \(\omega=0\)[52]. Finally, few DFT-AA models consider either the full quantum DOS or the impact of inelastic collisions that can lead to rapid damping of plasma excitations.
In Section II of this work, we briefly describe a DFT-AA model suitable for exploring the impact of dynamic collision frequencies \(\nu(\omega)\) on DSFs and stopping powers. In Section III, we systematically investigate the impacts of four variations on DFT-AA calculations of \(\nu(\omega)\): ideal vs. quantum DOS, uniform vs. structured ions, static (Born) vs. dynamic (Lennard-Balescu) screening, and Born (weak) vs. T-matrix (strong) cross sections. We also propose an approximation for inelastic contributions to \(\nu(\omega)\) and compare to \(\nu(\omega)\) derived from Kubo-Greenwood calculations of the dynamic conductivity [53]. We find that using the quantum DOS is critical to preserve sum rules, that there is promising agreement between the independent T-matrix and Kubo-Greenwood approaches, that inelastic collisions may have significant impact on \(\nu(\omega)\), and that the Born approximation is fundamentally unreliable for WDM. In Section IV, we show that the most complete calculations of \(\nu(\omega)\) improve agreement with direct DSF calculations from TDDFT. In Section V, we describe our derivation of stopping powers from the Mermin dielectric functions and show that the improvements in the DSFs carry over into significant improvements in stopping powers. We conclude in Section VI.
## II Average-atom model
Fully quantum AA models have been used for decades [30; 31; 32] to describe both thermodynamic and transport properties [55; 56], and have recently been extended to provide access to direct experimental observables like XRTS spectra [38; 44]. The AA model in this manuscript closely follows these previous implementations.
DFT-AA models self-consistently solve a set of Kohn-Sham-like equations. The first step involves guessing an electron-ion potential \(V_{ei}(r)\) that assumes a functional form for the exchange-correlation potential \(V_{xc}(r)\). We use a \(V_{ei}(r)\) that is forced to zero at the Wigner-Seitz radius \(R_{WS}=(3/(4\pi n_{i}))^{1/3}\) for ion density \(n_{i}\) and a local density approximation (LDA) for \(V_{xc}(r)\)[57]. Next, we solve the radial Schrödinger equation to find bound \(P_{n\ell}(r)\) and continuum \(P_{\varepsilon\ell}(r)\) radial electronic orbitals, and populate these orbitals according to Fermi-Dirac occupation factors:
\[f(\varepsilon)=\frac{1}{1+\exp((\varepsilon-\mu)/(k_{B}T))} \tag{1}\]
with \(\varepsilon\) the electron energy and \(\mu\) a chemical potential that enforces neutrality within the Wigner-Seitz sphere. The occupied orbitals provide an electron density \(n_{e}(r)\) from \(4\pi r^{2}n_{e}(r)=\sum_{a}f(\varepsilon_{a})g_{a}P_{a}^{2}(r)\) with \(g_{a}=2(2\ell_{a}+1)\) that in turn generates a new \(V_{ei}(r)\); this procedure is iterated until \(n_{e}(r)\) and \(V_{ei}(r)\) converge to self-consistency.
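As a minimal illustration of how Eq. (1) fixes the chemical potential, the Python sketch below (not part of the published workflow) solves for \(\mu\) such that the occupied DOS integrates to a target number of electrons per ion. The ideal DOS, the energy grid, and the aluminum-like numbers are illustrative assumptions only; in the DFT-AA model the same condition is imposed with the self-consistent quantum DOS and charge neutrality inside the Wigner-Seitz sphere.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq
from scipy.special import expit

HARTREE_EV = 27.2114  # Hartree atomic units assumed throughout

def fermi_dirac(eps, mu, kT):
    """Eq. (1): Fermi-Dirac occupation factor (expit avoids exp overflow)."""
    return expit(-(eps - mu) / kT)

def ideal_dos(eps, n_i):
    """Ideal free-electron DOS per ion, X^i(eps) = sqrt(2 eps)/(n_i pi^2)."""
    return np.sqrt(2.0 * eps) / (n_i * np.pi**2)

def chemical_potential(z_target, n_i, kT, eps_max=50.0, n_grid=20001):
    """Solve for mu such that the occupied DOS holds z_target electrons per ion."""
    eps = np.linspace(1e-8, eps_max, n_grid)
    def electron_count(mu):
        return trapezoid(ideal_dos(eps, n_i) * fermi_dirac(eps, mu, kT), eps) - z_target
    return brentq(electron_count, -10.0, 10.0)

# Illustrative numbers loosely inspired by solid-density aluminum at 1 eV (assumed).
n_i = 8.93e-3                      # ions per bohr^3
kT = 1.0 / HARTREE_EV
mu = chemical_potential(3.0, n_i, kT)
print(f"mu = {mu * HARTREE_EV:.1f} eV")   # same order as the mu_AA ~ E_F ~ 10 eV quoted below
```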
At convergence, the bound electrons are represented by orbitals with a fixed number of nodes and energy eigenvalues \(\varepsilon_{n\ell}\), while the continuum electrons are represented by distorted waves with phase shifts \(\delta_{\varepsilon\ell}\) determined by matching numerical orbitals to analytic plane waves at \(R_{WS}\). The continuum orbitals define a quantum density of states \(X(\varepsilon)=\sum_{\ell}g_{\ell}\int_{0}^{R_{WS}}P_{\varepsilon\ell}^{2}(r)dr\), whose accuracy is largely determined by the adequacy of the spherical approximation for \(V_{ei}(r)\), the choice of \(V_{xc}(r)\), and boundary conditions on \(V_{ei}(r)\) and \(P_{a}(r)\)[33].
Fig. 1 shows the continuum-electron density of states (DOS) \(X(\varepsilon)\) obtained from our DFT-AA model (solid red line) at a thermal energy of \(1\,\mathrm{eV}\) and densities of \(2.7\,\mathrm{g}\,\mathrm{cm}^{-3}\) for aluminum, \(10\,\mathrm{g}\,\mathrm{cm}^{-3}\) for carbon, and \(10\,\mathrm{g}\,\mathrm{cm}^{-3}\) for deuterium. The ideal electron gas DOS, \(X^{i}=(2\varepsilon)^{1/2}/(n_{i}\pi^{2})\), with \(n_{i}\) the ion density, is also shown to represent the allowed states for plane-wave (ideal) free electrons. As illustrated here, the ratio of the quantum DOS to the ideal DOS, \(\xi(\varepsilon)=X(\varepsilon)/X^{i}(\varepsilon)\), is typically larger than unity. The occupied states, \(X(\varepsilon)f(\varepsilon)\), are shown as the shaded regions; integrals over these regions represent estimates for the screened ion charge \(Z_{s}\). For the fully quantum DOS from DFT-AA, the area contained in the red shaded region gives continuum-electron values \(Z_{c}\) of 3.0, 4.0, and 1.0 for aluminum, carbon, and deuterium respectively, while integrals of the ideal DOS with \(f(\varepsilon)\) parameterized by the same chemical potential predict smaller ideal free-electron values \(Z_{i}\) of 2.0, 2.4, and 0.75 [58]. In order for quantities derived from an ideal density of states to satisfy continuum-electron sum rules, one would have to impose a new chemical potential, \(\mu_{i}\)[35], illustrated by the shaded gray regions.
We can directly compare these results against first-principles DFT-MD calculations (solid orange lines), which include non-spherical effects and can support continuum states with negative energies. Since DFT-MD does not necessarily use the same energy zero as AA and deuterium lacks a bound state to anchor the energy axis, we align the DFT-MD curves to match the DFT-AA chemical potential. For all elements, the DFT-MD DOS is closer to the DFT-AA quantum DOS than to the ideal DOS. Thermal broadening of the low-energy features would further improve the agreement. For carbon,
the DFT-MD DOS has additional structure absent in the DFT-AA model, suggesting residual non-spherical or band-structure effects.
To model ionic properties, we follow the methods of Starrett and Saumon [32; 59], solving a second set of self-consistent equations to obtain the potential and electron density associated with the external plasma. Once the external density is determined, one must assign a screened ion charge \(Z_{s}\) to determine the ion-ion potential \(V_{ii}(r)\) and generate a static ion-ion structure factor \(S_{ii}(k)\) and its Fourier transform, the radial ion distribution function \(g_{ii}(r)\). We have found consistently good agreement with _ab-initio_ multi-center DFT-MD models using \(Z_{s}=Z_{i}\). For example, Fig. 2 shows that a DFT-AA model using \(Z_{s}=Z_{i}=2\) for aluminum with a thermal energy of 1 eV gives better agreement with DFT-MD calculations of \(g_{ii}(r)\) than one that uses \(Z_{s}=Z_{c}=3\). In contrast to quantities that integrate over the DOS, like the collision frequencies, DSFs, and stopping powers described in the following sections, here the choice of \(Z_{s}\) is unconstrained by sum rules.
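The radial distribution function and static structure factor quoted above are related by the standard isotropic Fourier transform; a minimal sketch of that inversion is given below, with a smooth synthetic \(S(k)\) standing in for DFT-AA output purely as an assumption. Any tabulated \(S_{ii}(k)\) from the average-atom model could be passed in place of the toy curve.

```python
import numpy as np
from scipy.integrate import trapezoid

def g_from_S(k, S_k, r, n_i):
    """Radial distribution function from the static structure factor via
       g(r) - 1 = 1/(2 pi^2 n_i r) * int dk k sin(kr) [S(k) - 1]."""
    g = np.ones_like(r)
    for i, ri in enumerate(r):
        g[i] += trapezoid(k * np.sin(k * ri) * (S_k - 1.0), k) / (2.0 * np.pi**2 * n_i * ri)
    return g

# Illustrative check with a toy structure factor (not DFT-AA output).
n_i = 8.93e-3                        # ions per bohr^3, assumed
k = np.linspace(1e-4, 20.0, 4000)    # bohr^-1
S_k = 1.0 - np.exp(-(k / 2.0)**2)    # smooth toy S(k) that tends to 1 at large k
r = np.linspace(0.5, 15.0, 200)      # bohr
g_r = g_from_S(k, S_k, r, n_i)
```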
## III Electron-ion collision frequencies
The DFT-AA model described above provides an internally consistent picture of both electronic and ionic structure. In principle, it provides sufficient information to compute many thermodynamic and transport quantities, including static collision frequencies \(\nu(0)\) in the Ziman formulation [34]:
\[\nu^{Z}(0)=\frac{1}{3\pi Z_{0}}\int_{0}^{\infty}d\varepsilon\int_{0}^{2p}dk\,k ^{3}S_{ii}(k)\frac{\partial f(\varepsilon)}{\partial\varepsilon}\frac{ \partial\Sigma(\varepsilon,\theta)}{\partial\theta} \tag{2}\]
and dynamic collision frequencies \(\nu(\omega)\) from the generalized Boltzmann equation [60; 44; 54; 61]:
\[\nu(\omega)=\frac{-i}{6\pi Z_{0}}\int_{0}^{\infty}dk\,k^{6}S_{ii}(k)\frac{ \partial\Sigma(\varepsilon,\theta)}{\partial\theta}\frac{\epsilon^{0}(k, \omega)-\epsilon^{0}(k,0)}{\omega} \tag{3}\]
In both of the above expressions, \(\frac{\partial\Sigma}{\partial\theta}\) is a differential cross section that represents the magnitude of momentum transfer for particles with momentum \(p=(2\varepsilon)^{1/2}\) scattering at a given angle \(\theta\), with \(k^{2}=2p^{2}(1-\cos\theta)\) and \(S_{ii}(k)\) the static ion-ion structure factor. In Eq. (2), the normalization factor \(Z_{0}\) is constrained through a finite-temperature generalization of Fermi surface properties [62]. In the UEG (ideal) gas approximation implicit in Eq. (2), \(Z_{0}=Z_{i}\). In Eq. (3), \(\epsilon^{0}(k,\omega)\) is the RPA dielectric function, which is derived from a UEG approximation that also implicitly assumes an ideal electron gas DOS, for which we set \(Z_{0}=Z_{i}\).
Both of these expressions are highly integrated quantities with multiple dependencies, and both affect observable transport and response properties. The static collision frequency directly informs the electrical conductivity
Figure 2: Radial ion distribution functions for solid-density Al at 1 eV from the DFT-AA model with different choices for the screened ion charge \(Z_{s}\), compared with _ab-initio_ DFT-MD results. This distribution is directly related to the ion-ion structure factor \(S_{ii}(k)\).
Figure 1: Continuum and ideal free electron DOS for aluminum, carbon, and deuterium at a temperature of \(T=1\) eV and densities of \(2.7\) g cm\({}^{-3}\) for Al, \(10\) g cm\({}^{-3}\) for C, and \(10\) g cm\({}^{-3}\) for D. The lightly shaded areas are the product of the Fermi-Dirac distribution with the corresponding DOS, where the chemical potential is chosen in each case such that integral over energy equals 3.0, 4.0, and 1.0 for aluminum, carbon, and deuterium respectively. The energy scales of DFT-MD are shifted to align the chemical potential with that of the DFT-AA model, \(\mu_{AA}\).
\(\sigma_{DC}=Z_{s}n_{i}/\nu(0)\) and the dynamic collision frequency \(\nu(\omega)\) influences DSFs and stopping powers (see Sections IV and V). In the remainder of this section, we systematically explore the impact of variations in \(\frac{\partial\Sigma}{\partial\theta}\), \(S_{ii}(k)\), DOS, and static and dynamic screening.
### Electron-ion collision cross sections
Electron-ion collision cross sections describe the interaction of an impact electron at energy \(\varepsilon\) with an ion defined by a screened nuclear charge. A very common simplification is the assumption of weak collisions, which permits use of the Born approximation. For a given scattering potential \(V^{s}(r)\), the Born approximation is defined by the Fourier transform \(\tilde{V}^{s}(k)\)[63; 64]:
\[\frac{\partial\Sigma^{B}(\varepsilon,\theta)}{\partial\theta}=\frac{|\tilde{ V}^{s}(k)|^{2}}{4\pi^{2}} \tag{4}\]
There are many plausible choices for the scattering potential, including the self-consistent potentials of the DFT-AA model [35]. In general, these self-consistent potentials can be fit to a simple parameterized Yukawa form \(V^{s}(r)=-Z_{s}e^{-k_{s}r}/r\), which implies \(\tilde{V}^{s}(k)=-4\pi Z_{s}/(k^{2}+k_{s}^{2})\), with \(Z_{s}\) the screened (background) nuclear charge and \(k_{s}\) the inverse of the screening length \(r_{s}=(T_{\rm eff}/4\pi n_{e})^{1/2}\). Here, \(T_{\rm eff}=(T^{2}+T_{F}^{2})^{1/2}\) is an effective temperature that interpolates between classical and degenerate cases, with \(T_{F}\) the Fermi energy. Examples of Born-Yukawa cross sections for aluminum at solid density and \(T=1\,\)eV with two different choices of \(Z_{s}\) are given in Figure 3. Here, we show \(Z_{s}=2\) to represent scattering from the screened nucleus with \(Z_{i}\) free electrons and \(Z_{s}=13\) to represent scattering from the unscreened nucleus: this choice uniquely determines the high-energy behavior.
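A minimal sketch of these Born-Yukawa ingredients (Eq. (4), the Yukawa Fourier transform, and the effective-temperature screening length) is shown below; the numerical values are illustrative assumptions, not fitted DFT-AA parameters.

```python
import numpy as np

HARTREE_EV = 27.2114  # Hartree atomic units assumed

def yukawa_ks(n_e, T, T_F):
    """Inverse screening length k_s = 1/r_s with r_s = (T_eff/(4 pi n_e))^(1/2)
    and T_eff = (T^2 + T_F^2)^(1/2), as defined in the text."""
    T_eff = np.sqrt(T**2 + T_F**2)
    return np.sqrt(4.0 * np.pi * n_e / T_eff)

def born_yukawa_dcs(eps, theta, Z_s, k_s):
    """Eq. (4) for V^s(r) = -Z_s exp(-k_s r)/r, i.e. V^s(k) = -4 pi Z_s/(k^2 + k_s^2)."""
    k2 = 4.0 * eps * (1.0 - np.cos(theta))   # k^2 = 2 p^2 (1 - cos theta) with p^2 = 2 eps
    Vk = -4.0 * np.pi * Z_s / (k2 + k_s**2)
    return np.abs(Vk)**2 / (4.0 * np.pi**2)

# Illustrative evaluation loosely inspired by solid-density Al at 1 eV (assumed numbers).
n_e = 0.0268                                 # electrons per bohr^3
T, T_F = 1.0 / HARTREE_EV, 11.7 / HARTREE_EV
k_s = yukawa_ks(n_e, T, T_F)
dcs = born_yukawa_dcs(eps=10.0 / HARTREE_EV, theta=np.pi / 3, Z_s=2.0, k_s=k_s)
```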
The Born approximation is only accurate for weak scattering from a potential that minimally perturbs the free-electron waves; _i.e._ for incident electrons with energies greater than the magnitude of the scattering potential [63]. The screening constant \(k_{s}\) softens the scattering interaction for low-energy collisions, but it cannot capture the complexity of fully quantum screening in partially ionized materials.
To account for strong scattering in the self-consistent potential of the DFT-AA model, we use a transition- or T-matrix formulation that explicitly includes the effects of the distorted quantum continuum electron orbitals in the presence of the self-consistent \(V_{ei}\) through their phase shifts \(\delta_{\varepsilon\ell}\)[64; 53]:
\[\frac{\partial\Sigma^{T}(\varepsilon,\theta)}{\partial\theta}=\frac{1}{p^{2} }\left|\sum_{\ell=0}^{\infty}(2\ell+1)\sin\delta_{\varepsilon\ell}\,e^{i \delta_{\varepsilon\ell}}P_{\ell}(\cos\theta)\right|^{2}, \tag{5}\]
where \(P_{\ell}\) are Legendre polynomials. For \(S_{ii}(k)=1\), the momentum-transfer T-matrix cross section can be integrated over the solid angle to give the energy-dependent cross section:
\[\Sigma^{\rm tr}(\varepsilon)=\frac{4\pi}{p^{2}}\sum_{\ell=0}^{\infty}(\ell+ 1)\sin^{2}(\delta_{\varepsilon\ell}-\delta_{\varepsilon\ell+1}). \tag{6}\]
At high energies, this expression approaches the \(Z_{s}=13\) limit of the momentum-integrated Born-Yukawa cross section, \(\Sigma^{tr}(\varepsilon)=\pi Z_{s}^{2}/(2\varepsilon^{2})(x-\ln(x)-1)\) with \(x=k_{s}^{2}/(k_{s}^{2}+8\varepsilon)\). At lower energies, the momentum-integrated Born and T-matrix cross sections have very different behavior, including near the \(\mu_{AA}\approx E_{F}\approx 10\,\)eV electron impact energies that dominate thermal scattering. While the Born approximation with \(Z_{s}=2\) is more reasonable at these lower energies than the Born approximation with \(Z_{s}=13\), it is not generally reliable for WDM.
Figure 3 also shows the total cross section,
\[\Sigma^{\rm total}(\varepsilon)=\frac{4\pi}{p^{2}}\sum_{\ell=0}^{\infty}(2 \ell+1)\sin^{2}(\delta_{\varepsilon\ell}), \tag{7}\]
which we will use below to estimate inelastic contributions to \(\nu(\omega)\).
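The phase-shift sums in Eqs. (5)-(7) are straightforward to evaluate once the \(\delta_{\varepsilon\ell}\) are tabulated from the DFT-AA continuum orbitals; the sketch below uses made-up phase shifts purely for illustration.

```python
import numpy as np
from scipy.special import eval_legendre

def tmatrix_dcs(eps, theta, deltas):
    """Eq. (5): T-matrix differential cross section from phase shifts deltas[l]."""
    d = np.asarray(deltas, dtype=float)
    p = np.sqrt(2.0 * eps)
    ells = np.arange(len(d))
    amp = np.sum((2 * ells + 1) * np.sin(d) * np.exp(1j * d)
                 * eval_legendre(ells, np.cos(theta)))
    return np.abs(amp)**2 / p**2

def tmatrix_transport_cs(eps, deltas):
    """Eq. (6): momentum-transfer (transport) cross section."""
    d = np.asarray(deltas, dtype=float)
    p = np.sqrt(2.0 * eps)
    ells = np.arange(len(d) - 1)
    return 4.0 * np.pi / p**2 * np.sum((ells + 1) * np.sin(d[:-1] - d[1:])**2)

def tmatrix_total_cs(eps, deltas):
    """Eq. (7): total cross section."""
    d = np.asarray(deltas, dtype=float)
    p = np.sqrt(2.0 * eps)
    ells = np.arange(len(d))
    return 4.0 * np.pi / p**2 * np.sum((2 * ells + 1) * np.sin(d)**2)

# Hypothetical phase shifts for a single impact energy (would come from DFT-AA in practice).
deltas = [1.2, 0.6, 0.2, 0.05, 0.01]
eps = 10.0 / 27.2114
sigma_tr = tmatrix_transport_cs(eps, deltas)
sigma_tot = tmatrix_total_cs(eps, deltas)
sigma_inel = sigma_tot - sigma_tr   # ad-hoc inelastic cross section used below in the text
```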
### \(\nu(\omega)\) from electron-ion collision integrals
The collision frequencies in Eqs. (2) and (3) depend not only on collision cross sections but also on the ion-ion structure factors \(S_{ii}(k)\) and on the densities of states implied by the \(\partial f(\varepsilon)/\partial\varepsilon\) and \(\epsilon^{0}(k,\omega)\) terms in their respective integrals. Here, we systematically investigate changes in these terms, moving from the simplest approximation of a Born-Yukawa cross section with \(S_{ii}(k)=1\) and an ideal DOS to a T-matrix cross section with a self-consistent \(S_{ii}(k)\) and a quantum density of states. In the process, we will show that moving towards the reference
Figure 3: Scattering cross sections for solid-density aluminum at \(1\,\)eV from Eqs. (4) and (5) integrated over the solid angle. Dashed gray lines show Born-Yukawa approximations with two different choices of ion charge \(Z_{s}\). T-matrix results from the DFT-AA model are given for the momentum-transfer (dashed red, Eq. (6) with a self-consistent \(S_{ii}(k)\)) and total (solid red, Eq. (7)) cross sections.
\(S_{ii}(k)\) and DOS established above through comparison to DFT-MD improves agreement with reference values for \(\nu(0)\) and the dynamic structure factor (DSF).
Figure 4 shows the real and imaginary parts of \(\nu(\omega)\) and its impact on the DSF of solid aluminum at a temperature of \(1\,\mathrm{eV}\). In general, the real part of \(\nu(\omega)\) will broaden the DSF from its RPA limit and the imaginary part will shift it [66]. Experimental data at similar conditions [10] falls close to the orange curve from our reference TDDFT model. Thus a \(\nu(\omega)\) with a fairly large positive real value and a fairly large negative imaginary value is needed to broaden and shift the RPA DSF towards the reference spectrum. The real part of \(\nu(\omega)\) is also constrained in the static limit by reference conductivity data [65; 8]; this static collision frequency is indicated by the red star.
The simplest approach to calculating \(\nu(\omega)\) is given by the light gray dashed lines in Fig. 4. It uses the Born-Yukawa cross section with \(Z_{s}=Z_{i}\), the ideal free-electron density of states, \(S_{ii}(k)=1\), and a direct integration over \(\epsilon(k,\omega)\) (denoted "B" in the legend). Using the Born cross section simplifies the expressions for both \(\nu^{Z}(0)\) and \(\nu(\omega)\), replacing the energy integral in \(\nu^{Z}(0)\) with a factor \(f(k/2)\)[35]. Note that \(f(k/2)\) is also the static limit of \(\frac{-ik^{3}}{2\omega}[\epsilon^{0}(k,\omega)-\epsilon^{0}(k,0)]\), which ensures consistency between the two expressions. The real part of this simple \(\nu^{B}(\omega)\) has a static limit that is much larger than the reference value and a fixed functional form that follows an asymptotic decay proportional to \(Z_{s}^{2}/\omega^{3/2}\), turning near the plasma frequency \(\omega_{pl}=(4\pi Z_{s}n_{i})^{1/2}\) of \(12.9\,\mathrm{eV}\) (for \(Z_{s}=Z_{i}\)). Because of its fixed form, the imaginary part of \(\nu(\omega)\) is always positive and so it moves the RPA DSF _away_ from the reference spectrum.
The darker dashed gray lines in Fig. 4 show the impact of changing \(S_{ii}(k)\) from unity to the self-consistent \(S_{ii}(k)\) from the DFT-AA model (see Fig. 2). Here, we integrate over \(\min[S_{ii}(k),1]\) to cut out strong peaks and approximately account for incipient lattice structure in strongly coupled fluids (_c.f._ Wetta and Pain [36], Baiko _et al._[67]). This change decreases the static limit but does not change the functional form of the simplest Born approach.
Next, we modify the integration of Eq. 3, substituting \(\epsilon^{-1}(k,\omega)\) for \(\epsilon(k,\omega)\) and replacing \(-i\) with \(i\). This Lenard-Balescu (LB) integration better represents dynamic screening [53]. We find results (dotted gray lines) very similar to those of Faussurier and Blancard [61] under this change: a decrease in the magnitude of \(\nu(\omega)\) but no change to its functional form. Note that this change removes the formal equivalence between Eq. (2) and the zero-frequency limit of Eq. (3). Although \(\nu(0)\) from the two expressions are often very close, we normalize low-frequency values to the Ziman limit \(\nu^{Z}(0)\) by setting \(\nu(\omega)=[1-f_{Z}(\omega)]\nu^{LB}(\omega)+f_{Z}(\omega)\nu^{Z}(0)\), with
Figure 4: Real (a) and imaginary (b) parts of the dynamic collision frequency of solid-density aluminum at \(\mathrm{T}=1\,\mathrm{eV}\) from various approximations described in the text. The red star indicates a reference point for the static collision frequency [8; 65]. (c) Mermin dynamic structure factors at a wavenumber of \(1.55\,\mathrm{\SIUnitSymbolAngstrom}^{-1}\) corresponding to the various \(\nu(\omega)\) described in the text. The orange curve from TDDFT is the reference spectrum, similar to experimental data at similar conditions presented in Sperling _et al._[10]. In the legend, Born/TM indicates the cross section; f/q indicates the DOS; B/LB indicates Boltzmann or Lenard-Balescu integration; and S’ indicates a self-consistent structure factor.
\(f_{Z}(\omega)=1-[1+(0.1\,\omega_{pl}/\omega)^{2}]^{-1}\).
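This low-frequency normalization is a simple interpolation and can be written compactly; in the sketch below the Lenard-Balescu curve and the Ziman limit are placeholder inputs (assumptions), with only the blending function taken from the text.

```python
import numpy as np

def blend_to_ziman(omega, nu_lb, nu_z0, omega_pl):
    """nu(w) = [1 - f_Z(w)] nu_LB(w) + f_Z(w) nu_Z(0),
    with f_Z(w) = 1 - [1 + (0.1 w_pl / w)^2]^(-1), as defined in the text."""
    f_z = 1.0 - 1.0 / (1.0 + (0.1 * omega_pl / omega)**2)
    return (1.0 - f_z) * nu_lb + f_z * nu_z0

# Illustrative use with a made-up Lenard-Balescu nu(w) and Ziman limit (hartree units).
omega = np.linspace(1e-3, 3.0, 500)
nu_lb = 0.05 / (1.0 + omega**2) + 0.02j * omega
nu_blended = blend_to_ziman(omega, nu_lb, nu_z0=0.08, omega_pl=0.58)
```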
Our final calculation with the Born-Yukawa cross section replaces every \(f(\varepsilon)\) in the expressions for the RPA \(\epsilon(k,\omega)\)[38] with \(f(\varepsilon)\xi(\varepsilon)\), which replaces the implicitly ideal density of states in the RPA with the fully quantum DOS. The \(\xi(\varepsilon)\) factor is also applied to the Mermin \(\epsilon(k,\omega)\) used to obtain the DSF, and as a prefactor to the \(\partial f(\varepsilon)/\partial\varepsilon\) term in Eq. (2) [68], which necessitates a change to the normalization factor: \(Z_{0}=Z_{c}\). This procedure allows us to use the DFT-AA chemical potential while still accounting for all of the continuum electrons (see details in Appendix A).
As illustrated by the solid gray lines, using the quantum DOS further reduces the magnitude of \(\nu(\omega)\), almost recovering the RPA DSF. It also modifies the shape of \(\nu(\omega)\), introducing some functional variation and moving the turning point to higher frequencies consistent with \(\omega_{pl}=15.8\) eV for \(Z_{s}=Z_{c}\). In all Born-Yukawa cases, however, both the static limit of \(\nu(\omega)\) and the Mermin DSF are far from the reference values.
Keeping all of the above improvements, we now replace the Born-Yukawa cross section with the T-matrix momentum-transfer cross section given by Eq. (6) and illustrated in Fig. 3. This change results in the dashed red lines in Fig. 4. With strong collisions, \(\nu(\omega)\) takes on a dramatically different functional form whose real part matches the static reference value very well and whose imaginary part moves the RPA peak towards the reference DSF. But while the real part of \(\nu(\omega)\) increases from its static limit above the plasma frequency, it is too small near the peak of the DSF to give the necessary broadening.
So far, we have considered only elastic (momentum-transfer) collisions in \(\nu(\omega)\), which represent the damping of plasma excitations at the driving frequency \(\omega\). However, inelastic single-particle excitations of one continuum electron into a higher-energy state are also possible and would be followed by a very rapid recombination that could contribute to the damping of collective modes. To estimate how such inelastic collisions might affect \(\nu(\omega)\), we define an _ad-hoc_ inelastic collision cross section \(\Sigma^{\rm inel}(\varepsilon)=\Sigma^{\rm total}(\varepsilon)-\Sigma^{\rm tr}(\varepsilon)\). For each impact electron energy \(\varepsilon\), we integrate this cross section over the electron distribution \(X(\varepsilon^{\prime})f(\varepsilon^{\prime})\) to obtain excitation rates and use detailed balance to obtain relaxation rates, respecting all Pauli blocking factors and assuming that the final electrons in any three-body process carry equal energies. Since the recombination rates are much larger than the excitation rates and are purely collisional, we assign an inelastic \(\nu(\omega)\) at each \(\omega\) to be the recombination rate of the corresponding collision excitation at \(\varepsilon=\omega\). The T-matrix \(\nu(\omega)\) including these inelastic contributions is given by the solid red lines in Fig. 4: the inelastic terms tend to modestly increase the static limit and contribute significant damping at lower frequencies, bringing the DSF into much better agreement with the reference spectrum.
### \(\nu(\omega)\) from Kubo-Greenwood
Finally, we consider an independent method for calculating dynamic collision frequencies based on the work of Reinholz _et al._[53], who relate the dynamic collision frequency \(\nu^{ff}(\omega)\) to the free-free part of the dynamic conductivity \(\sigma^{ff}(\omega)\) by
\[\sigma^{ff}(\omega)=Z_{i}n_{i}/[\nu^{ff}(\omega)-i\omega]. \tag{8}\]
Using self-consistent data from the DFT-AA model, we implement a Kubo-Greenwood (KG) expression for the dynamic conductivity following Johnson, Guet, and Bertsch [69]. Figure 5 shows the real part of the dynamic conductivity in the solid-density, \(1\,\mathrm{eV}\) aluminum case. Three curves are shown: the bound-free contribution, a Drude-like free-free contribution representing the ideal free electrons \(Z_{i}\), and a quantum variation of the free-free contribution representing all continuum electrons. To regularize the low-frequency \(1/\omega^{2}\) divergence, we impose a Lorentzian broadening or Drude form on the conductivity: \(\sigma^{ff}(\omega)=\sigma^{ff}(0)/[1+(\omega/\nu^{D})^{2}]\). For ideal free electrons, this Drude form is the entire free-free contribution to the dynamic conductivity. By construction, its real part perfectly satisfies the conductivity sum rule \(\int_{0}^{\infty}\mathrm{Re}\{\sigma^{ff}(\omega)\}d\omega=(\pi/2)Z_{i}n_{i}\) for _any_ value of \(\nu^{D}\), so we typically impose the Ziman value, setting \(\nu^{D}=\nu^{Z}(0)\). The DC limit of the dynamic collision frequency in the Drude model \(\nu^{ff}(0)=n_{i}Z_{i}/\sigma^{ff}(0)\) is then identical to the regularizing frequency \(\nu^{D}\), and inversion of Eq. (8) returns a constant \(\mathrm{Re}\{\nu^{ff}(\omega)\}=\nu^{D}\).
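Inverting Eq. (8) is a one-line operation once a complex dynamic conductivity is available. The sketch below checks the inversion against a complex Drude conductivity \(\sigma(\omega)=Z_{i}n_{i}/(\nu^{D}-i\omega)\), an assumed closed form consistent with the real-part Lorentzian quoted above, for which the recovered \(\nu(\omega)\) must reduce to the constant \(\nu^{D}\).

```python
import numpy as np

def nu_from_conductivity(omega, sigma_ff, Z, n_i):
    """Invert Eq. (8), sigma_ff(w) = Z n_i / [nu(w) - i w], for the complex nu(w)."""
    return Z * n_i / sigma_ff + 1j * omega

# Consistency check with an assumed complex Drude conductivity (hartree units).
Z_i, n_i, nu_D = 2.0, 8.93e-3, 0.05
omega = np.linspace(1e-3, 2.0, 400)
sigma_drude = Z_i * n_i / (nu_D - 1j * omega)
nu = nu_from_conductivity(omega, sigma_drude, Z_i, n_i)
assert np.allclose(nu.real, nu_D) and np.allclose(nu.imag, 0.0)
```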
Using the distorted-wave continuum electrons from the DFT-AA model in the Kubo-Greenwood expression complicates this picture. The quantum \(\sigma^{cc}(\omega)\) modifies the simple \(1/\omega^{2}\) behavior, typically adding an additional feature at low frequencies that corresponds roughly to transitions among the continuum electrons, as illustrated in Fig. 5. To regularize the quantum \(\sigma^{cc}(\omega)\), we multiply it by a factor \(1/[1+(\nu^{KG}/\omega)^{2}]\) and search for the collision frequency \(\nu^{KG}\) that will satisfy the sum rule
Figure 5: Dynamic conductivities for solid-density aluminum at \(1\,\mathrm{eV}\) from the DFT-AA model based on the Kubo-Greenwood formalism, showing a departure of the quantum calculation from the ideal/Drude form.
\(\int_{0}^{\infty}\mathrm{Re}\{\sigma^{cc}(\omega)\}d\omega=(\pi/2)Z_{c}n_{i}\)[61]. This returns a value of \(\nu^{KG}\) that is independent from the Ziman approximation, and the complex transform of \(\sigma^{cc}(\omega)\) via Eq. 8 provides an independent check on \(\nu(\omega)\). Here, it returns a structured collision frequency that closely resembles our T-matrix calculations below \(\omega_{pl}\), as illustrated by the solid blue line in Fig. 4. While the KG \(\nu(\omega)\) shifts the DSF toward the reference spectrum and reaches larger values than the T-matrix approach at higher frequencies, it does not add sufficient broadening near the peak of the DSF to be in good agreement with the reference spectrum. Still, we retain it as an independent approximation in our investigations of DSFs and stopping powers.
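The sum-rule regularization of the quantum \(\sigma^{cc}(\omega)\) amounts to a one-dimensional root search; a sketch using a synthetic conductivity (an assumption standing in for the Kubo-Greenwood output) is given below.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

def nu_kg_from_sumrule(omega, re_sigma_cc, Z_c, n_i):
    """Find nu_KG such that Re{sigma_cc(w)} / [1 + (nu_KG/w)^2] integrates to
    (pi/2) Z_c n_i, as described in the text."""
    target = 0.5 * np.pi * Z_c * n_i
    def residual(nu):
        reg = re_sigma_cc / (1.0 + (nu / omega)**2)
        return trapezoid(reg, omega) - target
    return brentq(residual, 1e-6, 10.0)

# Toy conductivity whose unregularized integral deliberately exceeds the sum rule,
# so that a finite nu_KG is required (illustrative only, hartree units).
Z_c, n_i, gamma = 3.0, 8.93e-3, 0.1
omega = np.linspace(1e-4, 20.0, 20000)
re_sigma_cc = 2.0 * np.pi * Z_c * n_i * gamma / (np.pi * (omega**2 + gamma**2))
nu_kg = nu_kg_from_sumrule(omega, re_sigma_cc, Z_c, n_i)
```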
### \(\nu(\omega)\) dependence on temperature and element
Figure 6 illustrates the frequency dependence of the real (solid) and imaginary (dashed) parts of \(\nu(\omega)\) for a wider range of elements and conditions. Here, and in the following sections, only three approaches to \(\nu(\omega)\) are considered. "Born" denotes the simplest form of \(\nu(\omega)\), which uses the Born-Yukawa cross section (with \(Z_{s}=Z_{c}\)), the ideal DOS, \(S_{ii}(k)=1\), and a direct integration over \(\epsilon(k,\omega)\). "T+" is our most consistent and complete calculation of \(\nu(\omega)\), which uses the T-matrix cross section, the quantum DOS, a self-consistent \(S_{ii}(k)\) (with peaks removed), the inverse (Lenard-Balescu) integration over \(\epsilon^{-1}(k,\omega)\), and includes inelastic contributions to \(\nu(\omega)\). "KG" denotes the \(\nu(\omega)\) derived from the Kubo-Greenwood optical conductivities. At all temperatures and for all elements, \(\nu^{\mathrm{Born}}(\omega)\) maintain a consistent functional form, while \(\nu^{T+}(\omega)\) and, to a lesser extent, \(\nu^{KG}(\omega)\) exhibit significant variations. The static limits of the three approaches, which inform the DC conductivity, also vary significantly. In the following sections, we explore the impact of these variations in \(\nu(\omega)\) on DSFs and stopping powers.
## IV Dynamic structure factors
X-ray Thomson scattering (XRTS) experiments have proven to be a valuable platform for benchmarking theories for WDM against experimental data [70; 2; 10]. In XRTS experiments, hard coherent x-rays scatter from electrons in a target material, transferring some momentum \(k\) and energy \(\omega\). The intensity of the scattered photons is proportional to the DSF of the material, which is often expressed as a sum of three different contributions to the scattering process using the Chihara decomposition [71]:
\[S(k,\omega)=|g(k)|^{2}S_{ii}(k,\omega)+Z_{0}S_{ee}(k,\omega)+S_{ bf}(k,\omega). \tag{9}\]
Here, the first term in the sum represents elastic scattering from tightly bound electrons that closely follow the ion motion and depends on the Fourier transform of the total electron density \(g(k)\). The second term describes
Figure 6: Real (solid) and imaginary (dashed) dynamic collision frequencies from three approaches for three materials: aluminum at a density of \(2.7\,\mathrm{g}\,\mathrm{cm}^{-3}\); carbon at \(10.0\,\mathrm{g}\,\mathrm{cm}^{-3}\); and deuterium at \(10.0\,\mathrm{g}\,\mathrm{cm}^{-3}\) as a function of the driving frequency \(\omega\) at temperatures of \(1\,\mathrm{eV}\) (top row), \(10\,\mathrm{eV}\) (middle), and \(100\,\mathrm{eV}\) (bottom).
inelastic scattering from the \(Z_{0}\) free electrons, and the third describes inelastic scattering from bound electrons that are photoionized in the scattering process.
In this work, we focus only on the \(Z_{0}S_{ee}\) term, which is the dominant contribution to the overall scattering signal in the collective regime corresponding to smaller scattering angles or momentum transfers [2]. It is related to the dielectric response through the fluctuation-dissipation theorem:
\[S_{ee}(k,\omega)=-\frac{1}{1-\exp(-\omega/T)}\frac{k^{2}}{4\pi Z_{0}n_{i}}\text{ Im}\left[\frac{-1}{\epsilon(k,\omega)}\right]. \tag{10}\]
Here, \(Z_{0}n_{i}\) is the free electron density. The dielectric function \(\varepsilon(k,\omega)\) describes the linear screening response to an external driving electric field, and it can be thought of as a generalization of the dielectric constant for dynamic fields. The factor \(\text{Im}\left[-1/\varepsilon\right]\) is related to the energy dissipated by the system and is called the electron loss function (ELF).
In the WDM community, the free-electron dielectric response is often approximated using an RPA dielectric function [72; 73; 38; 44] where the electrons are assumed to act as a uniform electron gas (UEG). The RPA dielectric function is equivalent to the finite-temperature extension of the Lindhard dielectric function [74], and is accurate for non-collisional plasmas. As shown by recent XRTS experiments on warm, dense aluminum, however, the RPA does not generally provide an adequate description of the collective plasmon response in the WDM regime [10; 70].
One approach to improving the RPA dielectric is to include electron correlations beyond the RPA using a local field correction (LFC). The LFC factor is introduced to account for the existence of exchange-correlation holes around the electrons, with earlier approximations for low-temperatures made by Hubbard and others (Refs. [75] and [76] as cited in Ref. [72]). Recently, Dornheim and co-workers have developed a neural network representation for the LFC of the warm, dense UEG, trained using _ab initio_ path integral Monte Carlo data [77; 78; 79; 80; 81; 82].
While the current progress on LFC for the UEG is promising, it does not capture distortions in the electron density that accompany the complex screening of partially ionized material in the WDM regime. The distortions lead to collisions that modify the electron response, and can be included using the Mermin ansatz [46]
\[\epsilon^{\text{M}}(k,\omega)=1+\frac{(\omega+i\nu)[\epsilon^{0}(k,\omega+i \nu)-1]}{\omega+i\nu\frac{\epsilon^{0}(k,\omega+i\nu)-1}{\epsilon^{0}(k,0)-1} }\, \tag{11}\]
which modifies the RPA dielectric function \(\epsilon^{0}\) according to the collision frequencies \(\nu(\omega)\). While it is possible to combine both the effects of the LFC for correlations among the electrons and the Mermin ansatz for electron-ion collisions in an extended Mermin approach [83; 66], here we focus only on the impact of \(\nu(\omega)\). We ensure that \(S_{ee}\) satisfies the sum rule by using the ideal DOS and modified chemical potential for RPA and Born-Mermin DSFs, and by using \(\mu_{AA}\) and including the factor \(\xi(\varepsilon)\) for KG and T+ Mermin DSFs (see Section II and Appendix A).
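Eq. (11) is simple to apply once an RPA-like dielectric function that accepts complex frequencies is available; the sketch below wires the ansatz together with a plasmon-pole stand-in for \(\epsilon^{0}\) (an assumption made only to keep the example self-contained) and a constant complex \(\nu\).

```python
import numpy as np

def mermin_dielectric(eps0, k, omega, nu):
    """Mermin ansatz, Eq. (11): eps_M(k, w) built from an RPA-like dielectric
    eps0(k, z) evaluated at the complex frequency z = w + i nu."""
    z = omega + 1j * nu
    e_z = eps0(k, z)
    e_0 = eps0(k, 0.0)
    return 1.0 + (z * (e_z - 1.0)) / (omega + 1j * nu * (e_z - 1.0) / (e_0 - 1.0))

# Plasmon-pole toy standing in for the RPA dielectric (assumed form, hartree units).
omega_pl = 0.58   # ~15.8 eV
def eps0_toy(k, z):
    return 1.0 - omega_pl**2 / (z**2 - (0.5 * k**2)**2 + 1e-12)

k = 0.8
omega = np.linspace(0.05, 2.0, 400)
nu = 0.05 - 0.02j                        # constant complex collision frequency (assumed)
eps_m = mermin_dielectric(eps0_toy, k, omega, nu)
elf = np.imag(-1.0 / eps_m)              # electron loss function entering the DSF
```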
Figures 7 and 8 show DSFs in the collective scattering regime for aluminum at \(2.7\,\mathrm{g}\,\mathrm{cm}^{-3}\), carbon at \(10.0\,\mathrm{g}\,\mathrm{cm}^{-3}\), and deuterium at \(10.0\,\mathrm{g}\,\mathrm{cm}^{-3}\) at thermal energies of \(1\,\mathrm{eV}\) and \(10\,\mathrm{eV}\). Each subplot contains DSFs constructed using the RPA dielectric (purple dash-dotted lines) and the Mermin dielectric using the Born (gray dashed lines), T+ (red solid lines), and KG (blue solid lines) collision frequencies specified in Section III.4.
We find that including dynamic collision frequencies in the Mermin ansatz broadens and shifts the plasmon peaks away from the RPA DSFs [66]. The negative imaginary values of the Born \(\nu(\omega)\) shift the plasmon peaks to higher energies, while the KG and T+ collisions tend to red-shift the peaks to lower energies.
The orange lines in Figures 7 and 8 show DSFs calculated using TDDFT following the approach of Refs. [27] and [29]
Figure 7: Dynamic structure factors for aluminum at a density of \(2.7\,\mathrm{g}\,\mathrm{cm}^{-3}\), carbon at \(10.0\,\mathrm{g}\,\mathrm{cm}^{-3}\), and deuterium at \(10.0\,\mathrm{g}\,\mathrm{cm}^{-3}\). The temperature for all subplots is \(1\,\mathrm{eV}\) and the wavenumber for each element is specified. The inclusion of collision frequencies in the dielectric function tends to broaden and shift the DSF when compared to the RPA DSF. At these low momentum transfers, the plasmon peak for the RPA DSF for carbon and deuterium is very close to a \(\delta\)-function and was artificially broadened by including a small real, constant collision frequency on the order of \(1\,\mathrm{eV}\).
to directly model the electronic response to an x-ray probe. For aluminum and carbon, the free-electron contributions to the DSF are isolated by pseudizing all but the outermost valence electrons, explicitly treating only 3 and 4 electrons per atom, respectively. Deuterium is fully ionized at this density, so all electrons are explicitly treated in this case. Additional details of the TDDFT calculations are described in Appendix C. Here, TDDFT serves as an independent and highly accurate benchmark for the approximate Mermin DSFs informed by collisions from the DFT-AA model.
Aluminum is a nearly-free electron metal with an unstructured DOS, and is thus a good candidate for the Mermin model, which adds density corrections to the UEG picture assumed by the RPA dielectric through the relaxation-time approximation. For aluminum, both the T+ and KG collisions tend to red-shift the DSF towards the TDDFT peak, while Born collisions blue-shift the plasmon away from the TDDFT peak. The inclusion of inelastic collisions in \(\nu^{T+}(\omega)\) significantly broadens the Mermin DSF, leading to good agreement with TDDFT, while the real part of \(\nu^{KG}(\omega)\) does not provide significant broadening near \(\omega_{pl}\) and has a much narrower plasmon peak.
For carbon at \(k_{B}T=1\,\)eV, the RPA DSF predicts a very narrow plasmon feature, which is expected for highly collective scattering as \(k/k_{c}\to 0\)[72], with
\[k_{c}=\frac{n_{e}}{k_{B}T}\frac{F_{-1/2}(\frac{\mu}{k_{B}T})}{F_{1/2}(\frac{\mu}{k_{B}T})}\]
the inverse screening length from Ref. [2] and \(F_{j}(x)\) the complete Fermi-Dirac integral of index \(j\). Here, \(k/k_{c}\) is about twice as large for aluminum as it is for carbon or deuterium in Figure 7. The carbon in this case is also very strongly coupled (\(\Gamma_{ii}=209\)) and its DOS is much more structured than that of the DFT-AA model (see Figure 1). In contrast with the aluminum case, none of the approximate DSFs agree well with the TDDFT results for carbon, although all of the Mermin DSFs offer improvements over the RPA and both KG and T+ collisions offer slight improvements over Mermin with Born collisions.
For fully-ionized deuterium at \(k_{B}T=1\,\)eV, the RPA DSF agrees fairly well with the location of the plasmon peak predicted by TDDFT. This success can be attributed to the nearly-free electron DOS, similar to aluminum (see Figure 1). The inclusion of T+ and KG collisions results in a slightly shifted and broadened peak that is still in reasonable agreement with the TDDFT results.
For all elements, the observed trends at \(k_{B}T=1\,\)eV are echoed at the higher temperature of \(k_{B}T=10\,\)eV, as shown in Figure 8. However, thermal broadening, increasingly ideal densities of state, and weaker coupling mitigate some of the more profound differences among models.
## V Stopping powers
We now consider the stopping power, a critical quantity for energy balance in inertial fusion. The stopping power is the rate of kinetic energy lost by a projectile with some initial velocity \(v\) per unit length traveled through a target material and is denoted by \(-dE(v)/dx\)[84].
It is common to separate the stopping into contributions from collisions with the target nuclei and interactions with the target electrons. At low projectile velocities, the nuclear stopping dominates and is typically described with a classical treatment of the interactions between the projectile and target nuclei using an effective interatomic potential [85]. At large projectile velocities, electronic stopping dominates and must be treated in a quantum mechanical picture [84; 22]. Since we are interested in how electron-ion collisions affect the free electron dynamics, we ignore the nuclear stopping contribution.
Within linear response theory, the stopping power is expressed as a double integral over momentum and energy space of the electron loss function (see Appendix B for a discussion of our numerical approach to computing these integrals), which is proportional to the DSF
Figure 8: DSFs for aluminum, carbon, and deuterium at the same densities and wavenumbers as in Figure 7 at a temperature of \(10\,\)eV. TDDFT calculations for deuterium are not included.
discussed in the previous section:
\[-\frac{dE(v)}{dx}=\frac{2Z_{p}^{2}}{\pi v^{2}}\int_{0}^{\infty}\frac{dk}{k}\int_{0 }^{kv}d\omega\omega\,\mathrm{Im}\left[\frac{-1}{\epsilon(k,\omega)}\right]. \tag{12}\]
In Equation 12, \(Z_{p}\) and \(v\) are the projectile's charge and velocity. The applicability of this formula is limited to cases where the projectile interactions with the target electrons are weak, and it has been shown to overestimate the stopping power for highly charged ions [86].
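To make the structure of Eq. (12) concrete, the following sketch evaluates the double integral on simple trapezoidal grids given any callable loss function \(\mathrm{Im}[-1/\epsilon(k,\omega)]\); the toy loss function and the grid limits are assumptions of the example, and Appendix B describes the more careful peak handling used for the production results.

```python
import numpy as np
from scipy.integrate import trapezoid

def stopping_power(elf, v, Z_p=1.0, k_max=10.0, nk=400, nw=400):
    """Eq. (12): dE/dx = (2 Z_p^2)/(pi v^2) int dk/k int_0^{kv} dw w Im[-1/eps(k,w)],
    with `elf(k, w)` a callable returning the loss function. Hartree atomic units."""
    k_grid = np.linspace(1e-3, k_max, nk)
    inner = np.empty_like(k_grid)
    for i, k in enumerate(k_grid):
        w = np.linspace(1e-6, k * v, nw)
        inner[i] = trapezoid(w * elf(k, w), w)
    return 2.0 * Z_p**2 / (np.pi * v**2) * trapezoid(inner / k_grid, k_grid)

# Toy loss function: a broadened plasmon-pole form chosen (as an assumption) so that
# the w-integral approximately respects the f-sum rule (pi/2) omega_pl^2.
omega_pl, width = 0.58, 0.08
def elf_toy(k, w):
    w_k = np.sqrt(omega_pl**2 + (0.5 * k**2)**2)
    return (np.pi / 2.0) * omega_pl**2 / w_k * (width / np.pi) / ((w - w_k)**2 + width**2)

dEdx = stopping_power(elf_toy, v=2.0)
```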
Within TDDFT calculations, the stopping power is computed from the force exerted on the projectile by the target electrons as their response to the moving ion is simulated in real time. The free-electron contributions to stopping powers were isolated through appropriate pseudization in the same manner as was done for the DSF calculations. The approach closely follows previous work [24; 87], and additional details are included in Appendix C.
We evaluate the stopping power for a proton (\(Z_{p}=1\)) projectile within aluminum, carbon, and deuterium as a function of velocity in Figures 9, 10, and 11 respectively, using the RPA and Mermin dielectric functions with the Born, T+, and KG collision frequencies discussed in Section III modifying the electron loss function of Eq. (12). These are compared to TDDFT free-electron stopping power calculations at the same conditions, which we again treat as a benchmark for the approximate model based on RPA or Mermin dielectric functions and collisions from the DFT-AA model.
For the stopping in aluminum at \(k_{B}T=1\,\mathrm{eV}\) in Figure 9, the RPA stopping power agrees with the magnitude and location of the Bragg peak predicted by TDDFT but underestimates the stopping at lower velocities. The inclusion of T+ and KG collisions, which red-shifted the DSFs, significantly improves agreement with the low-velocity portion of the curve. Shifting the plasmon peak to lower energies effectively enables collective excitations by lower velocity, or lower energy, projectiles. By contrast, the simple Born electron-ion collision frequency in the Mermin dielectric stopping power noticeably lowers the height of the Bragg peak and does not improve the low-velocity stopping. At the higher temperature of \(10\,\mathrm{eV}\), all of the approximate stopping powers match TDDFT quite well.
In Figure 10, using T+ and KG collision frequencies in the Mermin stopping power again improves agreement with TDDFT in the low velocity regime in warm, dense carbon. At high projectile velocities, the RPA and Mermin stopping powers are larger than the TDDFT predictions; this disagreement should not be too surprising considering the disagreement found in the DSFs in Section IV. However, the high-velocity tails for the approximate dielectric-based theories follow the Bethe stopping power formula [88] (as cited in Ref. [21]), which is proportional to \(v^{-2}\log(v^{2})\) as \(v\to\infty\). TDDFT seems to follow the same trend with a different proportionality constant, but with only a few high-velocity data points it is difficult to determine the exact behavior. Finite-size effects and/or shortcomings of the pseudopotential approximation likely cause the TDDFT data to underestimate stopping powers at high velocities. At \(10\,\mathrm{eV}\), the disagreement at higher velocities decreases.
Figure 11 shows the stopping power for deuterium with \(k_{B}T=1\,\mathrm{eV}\). The agreement between the TDDFT stopping power and the Mermin stopping power using T+ and KG collisions is excellent over the whole velocity range. As discussed in the previous section, this can be attributed to the nearly-free electronic behavior in deuterium. Here, the broadening from the T+ and KG collisions seems to be necessary to improve the stopping power for low to intermediate projectile velocities, since the RPA plasmon energy agreed with TDDFT.
## VI Discussion
We have presented several approaches for calculating dynamic collision frequencies using self-consistent quantities determined within a DFT-based average-atom model and have investigated their impact on transport and scattering properties of WDM. We find that collision cross sections based on the Born approximation cannot capture
Figure 9: Electronic stopping powers for a proton traveling through aluminum at \(2.7\,\mathrm{g}\,\mathrm{cm}^{-3}\), for temperatures of 1 and 10 eV. The lines correspond to stopping powers evaluated within the dielectric formalism using the RPA dielectric function (dot-dashed), and the Mermin dielectric function with Born, T-matrix, and KG collisions. The orange dots are stopping powers from TDDFT calculations with a 3-electron pseudo-potential.
the details of strong scattering in partially ionized plasmas. Strong, fully quantum collisions have not yet been rigorously incorporated within the formalism of the generalized Boltzmann equation [53], so we have pursued two simple, independent approaches: first, a substitution of fully quantum T-matrix cross sections in the standard integrals for dynamic collision frequencies, and second, a transform of the complex dynamic conductivity from Kubo-Greenwood calculations. Both approaches explicitly incorporate the fully quantum density of states and account for all continuum electrons. These approaches agree well with each other in the static limit for weakly coupled plasmas, and both show radical departures from the functional form of collision frequencies based on Born cross sections. We have also explored the effects of inelastic processes on the dynamic collision frequencies.
Changes in dynamic collision frequencies \(\nu(\omega)\) carry over into modifications of observable and transport properties. In particular, we use the Mermin ansatz to obtain \(\nu(\omega)\)-dependent dielectric functions that go beyond the RPA and directly provide DSFs and integrands for stopping powers. Using _ab-initio_ TDDFT as a reference model, we show that the new approaches to dynamic collision frequencies offer significant improvement over previous approaches: we find that collision frequencies computed using both Kubo-Greenwood calculations and T-matrix cross sections tend to shift plasmon features towards the TDDFT results, and that these shifts in the DSFs result in much better agreement with TDDFT stopping powers at velocities around and below the Bragg peak. This demonstrates that adequate treatments of collisions can overcome some of the shortcomings of linear-response stopping power models based on dielectric functions.
While we have not found an approach to computing \(\nu(\omega)\) that gives perfect agreement with TDDFT in every case, the present results offer a computationally efficient and internally consistent approach to predicting observable and transport quantities. These improvements will increase the reliability of XRTS diagnostics and help close gaps between the computationally expensive first-principles models suitable for benchmark calculations and the computationally efficient average-atom models suitable for wide-ranging tabulation of material properties.
###### Acknowledgements.
We are grateful for conversations with Nathaniel Shaffer, Charles Starrett, Charles Seyler, and Andre Schleife. We thank Joel Stevenson for technical support. SBH and TWH were partially supported by the US Department of Energy, Office of Science Early Career Research Program, Office of Fusion Energy Sciences under Grant No. FWP-14-017426. TWH was also supported by the NNSA Stewardship Science Academic Programs under DOE Cooperative Agreement DE-NA0003764. AK, AO, and ADB were supported by the US Department of Energy Science Campaign 1. AC was partially supported by the Center for Advanced Systems Understanding (CASUS) which is financed by Germany's Federal Ministry of Education and Research (BMBF) and by the Saxon state government out of the State budget approved by the Saxon State Parliament. SBH, AK, AO, AC, and ADB were partially supported by Sandia National Laboratories' Laboratory Directed Research and Development
Figure 11: Electronic stopping powers for a proton traveling through deuterium with a density of \(10.0\,\mathrm{g}\,\mathrm{cm}^{-3}\) and a temperature of 1 eV.
Figure 10: Electronic stopping powers for a proton traveling through carbon at \(10.0\,\mathrm{g}\,\mathrm{cm}^{-3}\), for temperatures of 1 and \(10\,\mathrm{eV}\). The TDDFT calculations use a 4-electron pseudo-potential. The dashed black lines correspond to curves proportional to the Bethe stopping formula.
Program.
Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for DOE's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
## Author declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
**Thomas W. Hentschel:** Conceptualization (equal); data curation (lead); formal analysis (equal); writing - original draft (lead); writing - review & editing (equal); software (equal). **Alina Kononov:** Conceptualization (equal); formal analysis (equal); writing - original draft (supporting); writing - review & editing (equal). **Alexandra Olmstead:** formal analysis (equal); writing - review & editing (equal). **Attila Cangi:** formal analysis (supporting); writing - review & editing (equal). **Andrew D. Baczewski:** Conceptualization (equal); formal analysis (equal); writing - original draft (supporting); writing - review & editing (equal). **Stephanie B. Hansen:** Conceptualization (equal); formal analysis (equal); writing - original draft (supporting); writing - review & editing (equal); software (equal).
## Appendix A Using the quantum DOS in the RPA dielectric
The RPA dielectric function \(\epsilon^{0}\) is written in terms of the density response for the non-interacting electron gas \(\chi^{0}\)[72]:
\[\epsilon^{0}(\mathbf{k},\omega)=1-\frac{4\pi}{k^{2}}\chi^{0}(\mathbf{k}, \omega),\]
where
\[\chi^{0}(\mathbf{k},\omega)=-2\int\frac{d\mathbf{p}}{(2\pi)^{3}}\frac{f( \varepsilon(\mathbf{p}+\mathbf{k}))-f(\varepsilon(\mathbf{p}))}{\varepsilon (\mathbf{p}+\mathbf{k})-\varepsilon(\mathbf{p})-(\omega+i\delta)}\]
and where \(\delta\to 0^{+}\), \(f(\varepsilon)\) is the Fermi-Dirac occupation factor (Eq. (1)), and \(\varepsilon(\mathbf{p})=p^{2}/2\). By using spherical coordinates for \(\mathbf{p}\), the density response can be written as
\[\chi^{0}(\mathbf{k},\omega) =\int_{0}^{\infty}\frac{p^{2}dp}{\pi^{2}}F(\varepsilon(p);k,\omega)\] \[=\int_{0}^{\infty}d\varepsilon X^{i}(\varepsilon)F(\varepsilon; k,\omega)\]
with
\[F(\varepsilon;k,\omega)=-\frac{1}{2k\sqrt{2\varepsilon}}f(\varepsilon)\times\\ \left[\log\left(\frac{k^{2}/2+k\sqrt{2\varepsilon}-(\omega+i\delta)}{k^{2}/2-k\sqrt{2\varepsilon}-(\omega+i\delta)}\right)+\right.\\ \left.\log\left(\frac{k^{2}/2+k\sqrt{2\varepsilon}+(\omega+i\delta)}{k^{2}/2-k\sqrt{2\varepsilon}+(\omega+i\delta)}\right)\right]\]
and \(X^{i}(\varepsilon)\) the ideal DOS.
We propose swapping \(X^{i}(\varepsilon)\) with the quantum DOS obtained from the DFT-AA model (or, equivalently, multiplying the integrand by the ratio of the quantum DOS to the ideal DOS \(\xi(\varepsilon)=X(\varepsilon)/X^{i}(\varepsilon)\)). This modification replaces the quadratic energy spectrum of ideal continuum states embedded in \(F(\varepsilon;k,\omega)\) with the quantum behavior predicted by the DFT-AA model, allowing us to use the DFT-AA chemical potential while still fulfilling sum rules and accounting for all of the continuum electrons.
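A sketch of this DOS substitution is given below: \(\chi^{0}(k,\omega)\) is assembled by integrating \(F(\varepsilon;k,\omega)\) against an arbitrary tabulated DOS, so the ideal and quantum cases differ only in the \(X(\varepsilon)\) that is passed in. For the sketch we use the per-unit-volume ideal DOS \(\sqrt{2\varepsilon}/\pi^{2}\) (i.e. \(n_{i}\) times the per-ion DOS of Sec. II) so that \(\epsilon^{0}=1-(4\pi/k^{2})\chi^{0}\) reduces to the familiar finite-temperature RPA form; that normalization choice, the grids, and the small \(i\delta\) are assumptions of the example.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import expit

def chi0(k, omega, eps_grid, dos, mu, kT, delta=1e-4):
    """chi^0(k, w) = int deps X(eps) F(eps; k, w), with F as written above.
    `dos` is the tabulated density of states on `eps_grid` (ideal or quantum)."""
    f = expit(-(eps_grid - mu) / kT)       # Fermi-Dirac occupations
    kp = k * np.sqrt(2.0 * eps_grid)       # k * sqrt(2 eps)
    wc = omega + 1j * delta
    logs = (np.log((k**2 / 2 + kp - wc) / (k**2 / 2 - kp - wc))
            + np.log((k**2 / 2 + kp + wc) / (k**2 / 2 - kp + wc)))
    F = -f / (2.0 * kp) * logs
    return trapezoid(dos * F, eps_grid)

def eps0(k, omega, eps_grid, dos, mu, kT):
    """Dielectric function built from chi^0 via eps = 1 - (4 pi / k^2) chi^0."""
    return 1.0 - 4.0 * np.pi / k**2 * chi0(k, omega, eps_grid, dos, mu, kT)

# Illustrative evaluation with the ideal (per-volume) DOS; the quantum DOS from the
# DFT-AA model would simply be interpolated onto the same energy grid instead.
mu, kT = 0.43, 1.0 / 27.2114                      # hartree (assumed values)
eps_grid = np.linspace(1e-6, 6.0, 60000)
dos_ideal = np.sqrt(2.0 * eps_grid) / np.pi**2    # per unit volume
val = eps0(k=0.8, omega=0.6, eps_grid=eps_grid, dos=dos_ideal, mu=mu, kT=kT)
```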
A similar procedure can be carried out to incorporate \(\xi(\varepsilon)\) in the Ziman collision frequency (Eq. (2)) [68].
## Appendix B Numerically calculating the dielectric stopping power integrals
For finite temperatures, the double integral in the stopping power formula (Eq. (12)) must be computed numerically, where the dielectric function is also determined from a numerically evaluated integral. To efficiently compute these integrals, we examine the integrands themselves. Figure 12 illustrates the electron loss function (ELF) Im\((-1/\epsilon(k,\omega))\) of aluminum with a density of \(2.7\,\mathrm{g}\,\mathrm{cm}^{-3}\) at a temperature of \(1\,\mathrm{eV}\) as a function of \(\omega\) for various values of \(k\). As \(k\) is decreased, the ELF using the RPA dielectric function (lighter-colored lines) develops a sharp feature resembling a \(\delta\) function at the plasmon excitation energy predicted for small \(k\)[72]. By contrast, the ELF with the Mermin dielectric (darker-colored lines) converges to a broadened peak as \(k\to 0\), due to the electron-ion collisions (here, we use the KG+ collision frequencies discussed in the main text for illustration). This semi-log plot also reveals qualitatively different behavior for energies greater than the plasmon peak: while the RPA ELF abruptly falls to zero, the Mermin ELF decays much more slowly.
Integrating over the Mermin ELF is fairly straightforward numerically, but the RPA ELF is more difficult because of the sharp feature at small \(k\). To resolve this
difficulty, we first find the location of the peak and use this information to perform the inner, \(\omega\)-integral. Although approximate dispersion relations for the plasmon peak exist, these are typically only accurate for certain regions of phase space (i.e. weakly degenerate electron gas) [60]. Instead, we find the plasmon peak numerically, first using a bisection method applied directly to the ELF and resorting to a secant method for finding the zeros of the real part of the dielectric function (which corresponds to the ELF peak for small \(k\)[72]). When the width of the peak falls below our minimum integration grid spacing as \(k\to 0\), we replace the ELF peak with an exact \(\delta\)-function at the peak position, weighted to ensure that the \(f\)-sum rule is satisfied (Eq. (10)).
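For the zero-of-\(\mathrm{Re}\,\epsilon\) step, SciPy's derivative-free `newton` routine provides the secant iteration mentioned above; the sketch below uses a Bohm-Gross-like toy dielectric (an assumption) simply to show the call pattern.

```python
import numpy as np
from scipy.optimize import newton

def plasmon_peak(re_eps, k, w_guess):
    """Locate the collective mode by finding the zero of Re eps(k, w), which
    coincides with the ELF peak at small k; `newton` without a derivative
    falls back to the secant method."""
    return newton(lambda w: re_eps(k, w), w_guess)

# Toy dielectric with a Bohm-Gross-like dispersion (illustrative only, hartree units).
omega_pl, vth2 = 0.58, 0.02
def re_eps_toy(k, w):
    return 1.0 - omega_pl**2 / (w**2 - 3.0 * vth2 * k**2)

w_peak = plasmon_peak(re_eps_toy, k=0.3, w_guess=omega_pl)
# Expected root: sqrt(omega_pl^2 + 3 vth2 k^2)
```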
Our numerically calculated dispersion relations for aluminum are shown in Figure 13. We compare our results to the improved dispersion relation (IDR) from work by Thiele and others that is accurate for a larger range of wavenumbers and degeneracies than the typical Gross-Bohm dispersion relation (see Equations 21 & 22 in Ref. [60]).
To certify the accuracy of the \(\omega\)-integral for the RPA and Mermin ELF, we compare our results against the \(f\)-sum rule
\[\int_{0}^{kv}d\omega\omega\,\mathrm{Im}\left[\frac{-1}{\epsilon( k,\omega)}\right]+\int_{kv}^{\infty}d\omega\omega\,\mathrm{Im}\left[\frac{-1}{ \epsilon(k,\omega)}\right]\\ =\int_{0}^{\infty}d\omega\omega\,\mathrm{Im}\left[\frac{-1}{ \epsilon(k,\omega)}\right]=\frac{\pi}{2}\omega_{p}^{2} \tag{11}\]
where \(\omega_{p}\) is the plasma frequency. The first term is the integral used in the stopping power calculations, which must return the constant \(\pi\omega_{p}^{2}/2\) when added to the second term (which we also compute). If this equality is satisfied, we trust the results of our quadrature scheme since the integrand is always non-negative. In this work, we accept the quadrature results if the sum rule is satisfied to within \(5\%\). If it is not satisfied, we can increase the accuracy of our integrator.
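In code, this acceptance test could look roughly like the sketch below, with `elf` a placeholder for Im\((-1/\epsilon(k,\omega))\) evaluated on a dense \(\omega\) grid, `w_max` a finite cutoff standing in for the upper integration limit, and the \(5\%\) tolerance taken from the text.

```python
import numpy as np

def check_fsum(elf, k, v, w_p, w_max, n=20000, tol=0.05):
    """Return (sum rule satisfied?, partial integral entering the stopping power)."""
    w1 = np.linspace(1e-8, k * v, n)
    w2 = np.linspace(k * v, w_max, n)
    part1 = np.trapz(w1 * elf(k, w1), w1)     # integral from 0 to k*v (used in Eq. (12))
    part2 = np.trapz(w2 * elf(k, w2), w2)     # remainder up to the cutoff
    target = np.pi * w_p**2 / 2.0
    return abs(part1 + part2 - target) / target < tol, part1
```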
The numerical results for the \(\omega\)-integral (\(\int_{0}^{kv}d\omega\omega\,\mathrm{Im}(-1/\epsilon)\)) are shown in Figure 14 as a function of \(k\) for various values of the velocity \(v\). For a given element, temperature, and density, the overall structure of these functions depends on the velocity: for small values of \(k\), the upper limit of the \(\omega\)-integral \(kv\) exceeds the dispersion of the plasmon peak. If \(v\) is large enough, after some value of \(k\) the majority of the plasmon peak is contained in the integration range \([0,kv]\), and the curve for the \(\omega\)-integral flattens off at the sum rule value. As \(k\) increases, the dispersion of the plasmon surpasses \(kv\). Eventually, the integration range will contain essentially none of the plasmon peak, and the integral will be \(0\).
Notice that for the RPA ELF, the rise up to the sum rule is abrupt since we approximate the RPA ELF as a \(\delta\)-function once the width of the peak falls below our minimum integration grid spacing, which occurs for small \(k\). Finally, these curves are multiplied by \(2Z_{1}^{2}/(\pi v^{2})\times 1/k\) and integrated over \(k\) to obtain the stopping power as a function of \(v\). Since the functional form is relatively well behaved, simple quadrature approaches can be used.
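The final assembly step can then be sketched as follows, where `omega_integral(k, v)` is a placeholder for the precomputed inner integral \(\int_{0}^{kv}d\omega\,\omega\,\mathrm{Im}(-1/\epsilon)\) and the prefactor follows the expression quoted above.

```python
import numpy as np

def stopping_power(omega_integral, k_grid, v, Z1=1.0):
    """S(v) = 2*Z1^2/(pi*v^2) * int dk (1/k) * int_0^{k v} dw w Im(-1/eps)."""
    integrand = np.array([omega_integral(k, v) / k for k in k_grid])
    return 2.0 * Z1**2 / (np.pi * v**2) * np.trapz(integrand, k_grid)
```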
## Appendix C Details of the real-time time-dependent density functional theory calculations
DSF and stopping power calculations were performed using an implementation of real-time TDDFT first reported in Refs. [24; 27], respectively. This implementation is an extension of the Vienna _ab initio_ simulation package [89; 90; 91], which relies on the projector augmented-wave (PAW) formalism [92] to accurately and efficiently
Figure 12: Electron loss function (ELF) as a function of \(\omega\) for various wavenumbers. The lighter-colored lines correspond to the RPA dielectric, while the darker-colored lines correspond to the Mermin dielectric using KG+ electron-ion collision frequencies. As \(k\to 0\), the ELF using the RPA dielectric develops a sharp feature that makes numerical integration difficult.
Figure 13: ELF dispersion as a function of the wavenumber. The dispersion curves for the RPA ELF and Mermin ELF using T+ collisions are calculated numerically, while the IDR curve is an approximate analytical function from [60]. In this example, the IDR curve agrees fairly well with the RPA curve for \(k<0.4\) a.u.
represent the electron-ion interaction. In both types of calculations, the central equations of motion are the time-dependent Kohn-Sham (TDKS) equations, with initial conditions determined by the self-consistent orbitals from a Mermin-Kohn-Sham DFT calculation [93; 14]. These orbitals are computed for an atomic configuration either corresponding to a known crystal structure or sampled from a Born-Oppenheimer molecular dynamics trajectory [94]. To represent the effects of electronic temperature, the Mermin-Kohn-Sham orbitals are occupied according to the Fermi-Dirac distribution at a fixed temperature and self-consistently determined chemical potential. These occupations are held constant over the course of the integration of the TDKS equations. All calculations use the adiabatic local density approximation [95; 96], in which the time-dependent exchange-correlation potential is approximated to have only local spatial and temporal dependence on the electronic charge density.
The TDKS equations are numerically integrated using the Crank-Nicolson method [97]. The discrete update equations are solved using the generalized minimal residual (GMRES) [98] iterative algorithm with a default residual of \(10^{-12}\). Due to the ionic motion in the stopping power calculations, it is necessary to add a gauge correction to the TDKS equations to properly account for the time dependence of the projectors and preserve unitarity of the integration (and thus probability and charge conservation). To this end, we have followed the approach in Ref. [99] in our implementation.
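As a toy illustration of this propagation scheme (a generic one-dimensional sketch, not the VASP/PAW implementation, and without the PAW gauge correction), a single Crank-Nicolson step solved with GMRES might look as follows.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

def crank_nicolson_step(apply_H, psi, dt):
    """Solve (I + i*dt/2*H) psi_{n+1} = (I - i*dt/2*H) psi_n iteratively with GMRES."""
    n = psi.size
    rhs = psi - 0.5j * dt * apply_H(psi)
    lhs = LinearOperator((n, n), matvec=lambda x: x + 0.5j * dt * apply_H(x), dtype=complex)
    psi_new, info = gmres(lhs, rhs)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return psi_new

# Toy example: free particle on a periodic grid with a finite-difference Laplacian.
N, dx = 64, 0.5
lap = lambda f: (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2
apply_H = lambda f: -0.5 * lap(f)
psi0 = np.exp(-((np.arange(N) - N / 2) * dx) ** 2).astype(complex)
psi1 = crank_nicolson_step(apply_H, psi0, dt=0.01)
```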
The DSF calculations rely on a real-time implementation of a linear response calculation, similar to the one that was first reported in Ref. [100] and later refined and adapted to WDM in Ref. [27]. Aluminum calculations at a temperature of 1 eV (10 eV) contained 64 (32) atoms and 5 (64) electronic bands per atom with a 3-electron pseudopotential, plane-wave cutoff energy of 500 eV, and reciprocal space integration using a \(4\times 4\times 4\) \(\Gamma\)-centered quadrature. Carbon calculations contained 128 atoms and 3 (12) electronic bands per atom for a temperature of 1 eV (10 eV) with a 4-electron pseudopotential, plane-wave cutoff energy of 1000 eV, and reciprocal space integration using the Baldereschi mean-value point [101]. Deuterium calculations contained 256 atoms and 1 electronic band per atom for a temperature of 1 eV with an all-electron pseudopotential, plane-wave cutoff energy of 2800 eV, and reciprocal space integration using a \(7\times 7\times 7\) \(\Gamma\)-centered quadrature. All the DSF calculations used a time step of 1 as.
Stopping powers were computed from time-dependent Hellmann-Feynman forces acting on a proton moving through the supercell. Details regarding the choice of proton trajectory and stopping power extraction are documented in Ref. [87]. Aluminum calculations contained 256 atoms with a plane-wave cutoff of 750 eV. Deuterium calculations contained 1728 atoms with a plane-wave cutoff energy of 2000 eV. All the stopping power calculations sampled reciprocal space using the \(\Gamma\) point only. For fast protons traversing aluminum or carbon with \(v\geq 1.5\) a.u., the time step scaled inversely with proton velocity to maintain a fixed displacement of about 0.02 Å within each step. Slower protons required shorter time steps of 0.3 - 0.4 as to achieve convergence. Similar time steps of 0.2 - 0.4 as were used for deuterium calculations. All other parameters remained the same as listed for the DSF calculations above.
|
2302.10701 | Scalable Infomin Learning | The task of infomin learning aims to learn a representation with high utility
while being uninformative about a specified target, with the latter achieved by
minimising the mutual information between the representation and the target. It
has broad applications, ranging from training fair prediction models against
protected attributes, to unsupervised learning with disentangled
representations. Recent works on infomin learning mainly use adversarial
training, which involves training a neural network to estimate mutual
information or its proxy and thus is slow and difficult to optimise. Drawing on
recent advances in slicing techniques, we propose a new infomin learning
approach, which uses a novel proxy metric to mutual information. We further
derive an accurate and analytically computable approximation to this proxy
metric, thereby removing the need of constructing neural network-based mutual
information estimators. Experiments on algorithmic fairness, disentangled
representation learning and domain adaptation verify that our method can
effectively remove unwanted information with limited time budget. | Yanzhi Chen, Weihao Sun, Yingzhen Li, Adrian Weller | 2023-02-21T14:40:25Z | http://arxiv.org/abs/2302.10701v1 | # Scalable Infomin Learning
###### Abstract
The task of infomin learning aims to learn a representation with high utility while being uninformative about a specified target, with the latter achieved by minimising the mutual information between the representation and the target. It has broad applications, ranging from training fair prediction models against protected attributes, to unsupervised learning with disentangled representations. Recent works on infomin learning mainly use adversarial training, which involves training a neural network to estimate mutual information or its proxy and thus is slow and difficult to optimise. Drawing on recent advances in slicing techniques, we propose a new infomin learning approach, which uses a novel proxy metric to mutual information. We further derive an accurate and analytically computable approximation to this proxy metric, thereby removing the need of constructing neural network-based mutual information estimators. Experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify that our method can effectively remove unwanted information with limited time budget.
## 1 Introduction
Learning representations that are uninformative about some target but still useful for downstream applications is an important task in machine learning with many applications in areas including algorithmic fairness [1; 2; 3; 4], disentangled representation learning [5; 6; 7; 8], information bottleneck [9; 10], and invariant representation learning [11; 12; 13; 14].
A popular method for the above task is adversarial training [1; 2; 3; 4; 7; 11; 15], where two neural networks, namely the encoder and the adversary, are trained jointly to compete with each other. The encoder's goal is to learn a representation with high utility but contains no information about the target. The adversary, on the contrary, tries to recover the information about the target from the learned representation as much as possible. This leads to a minimax game similar to that in generative adversarial networks [16]. Adversarial training is effective with a strong adversary, however, it is often challenging to train the adversary thoroughly in practice, due to time constraints and/or optimisation difficulties [17; 18]. In fact, recent studies have revealed that adversarial approaches may not faithfully produce an infomin representation in some cases [18; 19; 20; 21; 22]. This motivates us to seek a good, adversarial training-free alternative for scalable infomin learning.
In this work, we propose a new method for infomin learning which is almost as powerful as using a strong adversary but is highly scalable. Our method is inspired by recent advances in information theory which proposes to estimate mutual information in the sliced space [23]. We highlight the following contributions:
* We show that for infomin learning, an accurate estimate of mutual information (or its bound) is unnecessary: testing and optimising statistical independence in some sliced spaces is sufficient;
* We develop an analytical approximation to such sliced independence test, along with a scalable algorithm for infomin learning based on this approximation. No adversarial training is required.
Importantly, the proposed method can be applied to a wide range of infomin learning tasks without any constraint on the form of variables or any assumption about the distributions. This contrasts our method to other adversarial training-free methods which are either tailored for discrete or univariate variables [22; 24; 25; 25; 26] or rely on variational approximation to distributions [10; 12; 19; 27].
## 2 Background
**Infomin representation learning**. Let \(X\in\mathbb{R}^{\,D}\) be the data, \(Y\in\mathbb{R}^{\,D^{\prime}}\) be the target we want to predict from \(X\). The task we consider here is to learn some representation \(Z=f(X)\) that is useful for predicting \(Y\) but is uninformative about some target \(T\in\mathbb{R}^{d}\). Formally, this can be written as
\[\min_{f}\mathcal{L}(f(X);Y)+\beta\cdot I(f(X);T) \tag{1}\]
where \(f\) is an encoder, \(\mathcal{L}\) is some loss function quantifying the utility of \(Z\) for predicting \(Y\), and \(I(f(X);T)\) quantifies the amount of information left in \(Z\) about \(T\). \(\beta\) controls the trade-off between utility and uninformativeness. Many tasks in machine learning can be seen as special cases of this objective. For example, by setting \(T\) to be (a set of) sensitive attributes e.g. race, gender or age, we arrive at fair representation learning [1; 2; 21; 28]. When using a stochastic encoder, by setting \(Y\) to be \(X\) and \(T\) to be some generative factors e.g., a class label, we arrive at disentangled representation learning [5; 6; 7; 8]. Similarly, the information bottleneck method [9; 10] corresponds to setting \(T=X\), which learns representations expressive for predicting \(Y\) while being compressive about \(X\).
**Adversarial training for infomin learning**. A key ingredient in objective (1) is to quantify \(I(f(X);T)\) as the informativeness between \(f(X)\) and \(T\). One solution is to train a predictor for \(T\) from \(f(X)\) and use the prediction error as a measure of \(I(f(X);T)\)[1; 3; 4; 29]. Another approach is to first train a classifier to distinguish between samples from \(p(Z,T)\) vs. \(p(Z)p(T)\)[7; 15] or to distinguish samples \(Z\sim p(Z|T)\) with different \(T\)[2; 11], then use the classification error to quantify \(I(f(X);T)\). All these methods involve the training of a neural network \(t\) to provide a lower-bound estimate of \(I(f(X);T)\), yielding a minimax optimisation problem
\[\min_{f}\max_{t}\mathcal{L}(f(X);Y)+\beta\cdot\hat{I}_{t}(f(X);T) \tag{2}\]
where \(\hat{I}_{t}(f(X);T)\) is an estimator constructed using \(t\) that lower-bounds \(I(f(X);T)\). The time complexity of optimising (2) is \(O(L_{1}L_{2})\) where \(L_{1}\) and \(L_{2}\) are the number of gradient steps for the min and the max step respectively. The strength of \(t\) is crucial for the quality of the learned representation [18; 19; 20; 21; 22]. For a strong adversary, a large \(L_{2}\) is possibly needed, but this means a long training time. Conversely, a weak adversary may not produce a truly infomin representation.
## 3 Methodology
We propose an alternative to adversarial training for optimising (1). Our idea is to learn representation by the following objective, which replaces \(I(f(X);T)\) in objective (1) with its'sliced' version:
\[\min_{f}\mathcal{L}(f(X);Y)+\beta\cdot SI(f(X);T), \tag{3}\]
where \(SI\) denotes the sliced mutual information, which was also considered in [23]. Informally, \(SI\) is a 'facet' of mutual information that is much easier to estimate (ideally has closed form) but can still to some extent reflect the dependence between \(Z\) and \(T\). Optimising (3) is then equivalent to testing and minimising the dependence between \(Z\) and \(T\) from one facet. Importantly, while testing dependence through only a single facet may be insufficient, by testing and minimising dependence through various facets across a large number of mini-batches we eventually see \(I(Z;T)\to 0\).
We show one instance for realising \(SI\) whose empirical approximation \(\hat{SI}\) has an analytic expression. The core of our method is Theorem 1, which is inspired by [4; 23].
**Theorem 1**.: Let \(Z\in\mathbb{R}^{\,D}\) and \(T\in\mathbb{R}^{\,d}\) be two random variables that have moments. \(Z\) and \(T\) are statistically independent if and only if \(SI(Z,T)=0\) where \(SI(Z,T)\) is defined as follows
\[SI(Z;T)=\sup_{h,g,\theta,\phi}\rho(h(\theta^{\top}Z),g(\phi^{\top}T)), \tag{4}\]
where \(\rho\) is the Pearson correlation, \(h,g:\mathbb{R}\rightarrow\mathbb{R}\) are Borel-measurable non-constant functions, and \(\theta\in\mathbb{S}^{D-1}\), \(\phi\in\mathbb{S}^{d-1}\) are vectors on the surfaces on \(D\)-dimensional and \(d\)-dimensional hyperspheres.
Proof.: See the Appendix.
We call \(\theta\) and \(\phi\) the slices for \(Z\) and \(T\) respectively, and \(\theta^{\top}Z\), \(\phi^{\top}T\) the sliced \(Z\) and \(T\) respectively.
We sketch here how this result relates to [4, 23]. [23] considers \(\overline{SI}(Z;T)\), defined as the expected mutual information \(\mathbb{E}[I(\theta^{\top}Z,\phi^{\top}T)]\) of the sliced \(Z\) and \(T\), where the expectation is taken over the respective Haar measures on \(\theta\in\mathbb{S}^{D-1}\), \(\phi\in\mathbb{S}^{d-1}\). Instead of considering the mutual information \(I(\theta^{\top}Z,\phi^{\top}T)\) averaged over slices, we take the supremum of the Pearson correlation over the slices and over the functions \(h,g\) defined above, which links to Rényi's maximal correlation [4, 30, 31, 32] and has some interesting properties suitable for infomin representation learning.
Intuitively, Theorem 1 says that in order to achieve \(I(Z;T)\to 0\), we need not estimate \(I(Z;T)\) in the original space; rather we can test (and maximise) independence in the sliced space as realised by (4). Other realisations of the sliced mutual information \(SI\) may also be used. The major merit of the realisation (4) is that it admits an analytic expression for its empirical approximation, as shown below.
**Analytic approximation to \(\boldsymbol{SI}\)**. An empirical approximation to (4) is
\[SI(Z;T)\approx\sup_{i,j}\sup_{h_{i},g_{j}}\rho(h_{i}(\theta_{i}^{\top}Z),g_{j} (\phi_{j}^{\top}T)),\]
\[\text{where}\quad\theta_{i}\sim\mathcal{U}(\mathbb{S}^{D-1}),\ \ i=1,...,S,\qquad\phi_{j}\sim \mathcal{U}(\mathbb{S}^{d-1}),\ \ j=1,...,S. \tag{5}\]
i.e., we approximate (4) by randomly sampling a number of slices \(\theta,\phi\) uniformly from the surface of two hyperspheres \(\mathbb{S}^{D-1}\) and \(\mathbb{S}^{d-1}\) and pick those slices where the sliced \(Z\) and the sliced \(T\) are maximally associated. With a large number of slices, it is expected that (5) will approximate (4) well. We refer to [23] for a theoretical analysis on the number of required slices in estimator-agnostic settings. In Appendix B we also investigate empirically how this number will affect performance.
For each slicing direction, we further assume that the supreme functions \(h_{i},g_{j}:\mathbb{R}\rightarrow\mathbb{R}\) for that direction can be well approximated by \(K\)-order polynomials given sufficiently large \(K\), i.e.
\[h_{i}(a)\approx\hat{h}_{i}(a)=\sum_{k=0}^{K}w_{ik}\sigma(a)^{k},\qquad g_{j}(a )\approx\hat{g}_{j}(a)=\sum_{k=0}^{K}v_{jk}\sigma(a)^{k},\]
where \(\sigma(\cdot)\) is a monotonic function which maps the input to the range of \([-1,1]\). Its role is to ensure that \(\sigma(a)\) always has finite moments, so that the polynomial approximation is well-behaved. Note that no information will be lost by applying \(\sigma(\cdot)\). Here we take \(\sigma(\cdot)\) as the tanh function. In Appendix A we also investigate theoretically the approximation error of this polynomial approximation scheme. Other approximation schemes such as random feature model [33] can also be used to approximate \(h_{i}\) and \(g_{j}\), which will be explored in the future. In this work, we simply set \(K=3\).
With this polynomial approximation, solving for each pair of functions \(h_{i},g_{j}\) in (5) reduces to finding their weights \(w_{i},v_{j}\):
\[\sup_{h_{i},g_{j}}\rho(h_{i}(\theta_{i}^{\top}Z),g_{j}(\phi_{j}^{\top}T)) \approx\sup_{w_{i},v_{j}}\rho(w_{i}^{\top}Z_{i}^{\prime},v_{j}^{\top}T_{j}^{\prime}),\]
\[Z_{i}^{\prime}=[1,\sigma(\theta_{i}^{\top}Z),...,\sigma(\theta_{i}^{\top}Z)^ {K}],\qquad T_{j}^{\prime}=[1,\sigma(\phi_{j}^{\top}T),...,\sigma(\phi_{j}^{ \top}T)^{K}]\]
This is known as canonical correlation analysis (CCA) [34] and can be solved analytically by eigen-decomposition. Hence we can find the weights for all pairs of \(h_{i},g_{j}\) by \(S^{2}\) eigendecompositions.
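As an illustration of this per-pair computation, a minimal NumPy sketch is given below; the small ridge term `reg` and the random data are our own additions for numerical stability and demonstration, not part of the paper's specification.

```python
import numpy as np

def poly_features(x_1d, K=3):
    """[1, s, s^2, ..., s^K] with s = tanh(x); x_1d has shape (n,)."""
    s = np.tanh(x_1d)
    return np.stack([s**k for k in range(K + 1)], axis=1)        # (n, K+1)

def cca_top_correlation(A, B, reg=1e-6):
    """Largest canonical correlation between feature matrices A and B,
    from the eigenvalues of Saa^{-1} Sab Sbb^{-1} Sba."""
    A, B = A - A.mean(0), B - B.mean(0)
    n = A.shape[0]
    Saa = A.T @ A / n + reg * np.eye(A.shape[1])
    Sbb = B.T @ B / n + reg * np.eye(B.shape[1])
    Sab = A.T @ B / n
    M = np.linalg.solve(Saa, Sab) @ np.linalg.solve(Sbb, Sab.T)
    eigvals = np.linalg.eigvals(M)
    return float(np.sqrt(np.max(eigvals.real.clip(min=0.0))))

# One sampled slice pair (theta, phi) for Z in R^D, T in R^d:
rng = np.random.default_rng(0)
Z, T = rng.normal(size=(500, 8)), rng.normal(size=(500, 2))
theta = rng.normal(size=8); theta /= np.linalg.norm(theta)
phi = rng.normal(size=2);  phi /= np.linalg.norm(phi)
print(cca_top_correlation(poly_features(Z @ theta), poly_features(T @ phi)))
```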
In fact, the functions \(h_{i},g_{j}\) for all \(i,j\) can be solved simultaneously by performing a larger eigendecomposition only once. We do this by finding \(w,v\) that maximise the following quantity:
\[\hat{S}I_{\Theta,\Phi}(Z;T)=\sup_{w,v}\rho(w^{\top}Z^{\prime},v^{\top}T^{ \prime}), \tag{6}\]
where
\[Z^{\prime}=[Z_{1}^{\prime},...,Z_{i}^{\prime}],\qquad T^{\prime}=[T_{1}^{ \prime},...,T_{j}^{\prime}]\]
\[Z_{i}^{\prime}=[1,\sigma(\theta_{i}^{\top}Z),...,\sigma(\theta_{i}^{\top}Z)^ {K}],\qquad T_{j}^{\prime}=[1,\sigma(\phi_{j}^{\top}T),...,\sigma(\phi_{j}^{ \top}T)^{K}]\]
\[\theta_{i}\sim\mathcal{U}(\mathbb{S}^{D-1}),\ \ i=1,...,S,\qquad\phi_{j}\sim \mathcal{U}(\mathbb{S}^{d-1}),\ \ j=1,...,S.\]
That is, we first concatenate all \(Z_{1}^{\prime},...,Z_{S}^{\prime}\) and \(T_{1}^{\prime},...,T_{S}^{\prime}\) into two 'long' vectors \(Z^{\prime}\in\mathbb{R}^{(K+1)S}\) and \(T^{\prime}\in\mathbb{R}^{(K+1)S}\) respectively, then solve a CCA problem corresponding to \(Z^{\prime}\) and \(T^{\prime}\). We then use (6) to replace (5). The theoretical basis for doing so is Theorem 2, which states that the solution of (6) yields an upper bound on (5), provided that the polynomial approximation is accurate.
**Theorem 2**.: Provided that each \(h_{i},g_{j}\) in (5) are \(K\)-order polynomials, given the sampled \(\Theta=\{\theta_{i}\}_{i=1}^{S},\Phi=\{\phi_{j}\}_{j=1}^{S}\), we have \(\hat{SI}_{\Theta,\Phi}(Z;T)\leq\epsilon\Rightarrow\sup_{i,j}\sup_{h_{i},g_{j}} \rho(h_{i}(\theta_{i}^{\top}Z),g_{j}(\phi_{j}^{\top}T))\leq\epsilon\).
Proof.: See Appendix A.
The intuition behind the proof of Theorem 2 is that if all \(Z_{i}^{\prime}\) and \(T_{j}^{\prime}\) as a whole cannot achieve a high correlation, each of them alone can not either.
The benefits of solving \(h_{i},g_{j}\) for all slices jointly are two-fold. (a) The first benefit is better computational efficiency, as it avoids invoking a for loop and only uses matrix operations. This has better affinity to modern deep learning infrastructure and libraries (e.g. Tensorflow [35] and PyTorch [36]), which are optimised for matrix-based operations. In addition to computational efficiency, another benefit of solving \(h_{i},g_{j}\) jointly is (b) the stronger power in independence testing. More specifically, while some sliced directions may individually be weak for detecting dependence, together as a whole they can compensate for each other, yielding a more powerful test. This also echoes [37].
```
Input: data \(\mathcal{D}=\{X^{(n)},Y^{(n)},T^{(n)}\}_{n=1}^{N}\)
Output: \(Z=f(X)\) that optimises (1)
Hyperparams: \(\beta\), \(N^{\prime}\), \(L_{1}\), \(L_{2}\)
Parameters: encoder \(f\), MI estimator \(t\)
for \(l_{1}\) in \(1\) to \(L_{1}\) do
    sample mini-batch \(\mathcal{B}\) from \(\mathcal{D}\)
    sample \(\mathcal{D}^{\prime}\) from \(\mathcal{D}\) whose size \(N^{\prime}<N\)
    \(\rhd\) Max-step
    for \(l_{2}\) in \(1\) to \(L_{2}\) do
        \(t\gets t+\eta\nabla_{t}\hat{I}_{t}(f(X);T)\) with data in \(\mathcal{D}^{\prime}\)
    endfor
    \(\rhd\) Min-step
    \(f\gets f-\eta\nabla_{f}[\mathcal{L}(f(X);Y)+\beta\hat{I}_{t}(f(X);T)]\) with data in \(\mathcal{B}\)
endfor
return \(Z=f(X)\)
```
**Algorithm 1** Adversarial Infomin Learning
**Mini-batch learning algorithm**. Given the above approximation (6) to \(SI\), we can now elaborate the details of our mini-batch learning algorithm. In each iteration, we execute the following steps:
* _Max-step_. Sample \(S\) slices \(\Theta=\{\theta_{i}\}_{i=1}^{S},\Phi=\{\phi_{j}\}_{j=1}^{S}\) and a subset of the data \(\mathcal{D}^{\prime}\subset\mathcal{D}\). Learn the weights \(w,v\) of \(\hat{SI}(Z;T)\) (6) with the sampled \(\Theta,\Phi\) and the data in \(\mathcal{D}^{\prime}\) by eigendecomposition;
* _Min-step_. Update \(f\) by SGD: \(f\gets f-\eta\nabla_{f}[\mathcal{L}(f(X),Y)+\beta\hat{SI}(f(X),T)]\) with \(Z=f(X)\), where the parameters \(w,v\) of \(\hat{SI}(f(X),T)\) are learned in the max-step: \(\hat{SI}(Z,T)=\rho(w^{\top}Z^{\prime},v^{\top}T^{\prime})\).
The whole learning procedure is shown in Algorithm 2. We note that the data in the subset \(\mathcal{D}^{\prime}\) can be different from the mini-batch \(\mathcal{B}\) and its size can be much larger than the typical size of a mini-batch.
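To make the interplay of the two steps concrete, here is a self-contained PyTorch sketch of one iteration in the spirit of Algorithm 2. It is not the authors' released implementation (see their repository for that): names such as `L_utility` and `D_prime` (a tuple holding the subset's inputs and targets) are our placeholders, the CCA is solved with a plain regularised eigendecomposition, and the optional slice-refinement strategy discussed at the end of this section is omitted.

```python
import torch

def poly_feats(x, K=3):
    """x: (n,) sliced values -> (n, K+1) features [1, s, ..., s^K], s = tanh(x)."""
    s = torch.tanh(x)
    return torch.stack([s**k for k in range(K + 1)], dim=1)

def sliced_features(V, slices, K=3):
    """Concatenate the polynomial features of all slices: (n, S*(K+1))."""
    return torch.cat([poly_feats(V @ th, K) for th in slices], dim=1)

def cca_weights(A, B, reg=1e-5):
    """Max-step: analytic CCA giving the leading pair of directions (w, v)."""
    A, B = A - A.mean(0), B - B.mean(0)
    n = A.shape[0]
    Saa = A.T @ A / n + reg * torch.eye(A.shape[1], device=A.device)
    Sbb = B.T @ B / n + reg * torch.eye(B.shape[1], device=B.device)
    Sab = A.T @ B / n
    M = torch.linalg.solve(Saa, Sab) @ torch.linalg.solve(Sbb, Sab.T)
    evals, evecs = torch.linalg.eig(M)
    w = evecs[:, evals.real.argmax()].real
    v = torch.linalg.solve(Sbb, Sab.T @ w)      # paired direction, up to scale
    return w, v

def sliced_SI(Zf, Tf, w, v, eps=1e-8):
    """Pearson correlation of the projected features (differentiable in Zf)."""
    a, b = Zf @ w, Tf @ v
    a, b = a - a.mean(), b - b.mean()
    return (a * b).mean() / (a.std() * b.std() + eps)

def train_step(f, optimizer, L_utility, X, Y, T, D_prime, beta=1.0, S=200):
    """One mini-batch iteration: analytic max-step on D', SGD min-step on the batch."""
    with torch.no_grad():                        # max-step: w, v are treated as constants
        Zp = f(D_prime[0])
        slices_z = [torch.nn.functional.normalize(torch.randn(Zp.shape[1]), dim=0) for _ in range(S)]
        slices_t = [torch.nn.functional.normalize(torch.randn(T.shape[1]), dim=0) for _ in range(S)]
        A = sliced_features(Zp, slices_z)
        B = sliced_features(D_prime[1], slices_t)
        w, v = cca_weights(A, B)
    Z = f(X)                                     # min-step on the mini-batch
    loss = L_utility(Z, Y) + beta * sliced_SI(sliced_features(Z, slices_z),
                                              sliced_features(T, slices_t), w, v)
    loss.backward(); optimizer.step(); optimizer.zero_grad()
    return loss.item()
```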
Compared to adversarial methods [1, 2, 3, 4, 11, 21], shown in Algorithm 1, we replace the neural network training in the max-step with an analytic eigendecomposition step, which is much cheaper to execute. As discussed in Sec 2, if the network \(t\) is not trained thoroughly (due to e.g. insufficient gradient steps \(L_{2}\) in the max-step), it may not provide a sensible estimate of \(I(f(X);T)\) and hence acts only as a weak adversary. Our method does not suffer from this issue as \(\hat{SI}\) is solved analytically.
Finally, as an optional strategy, we can improve our method by actively seeking more informative slices for independence testing by optimising the sampled slices with a few gradient steps (e.g. 1-3):
\[\Theta\leftarrow\Theta-\xi\nabla_{\Theta}\hat{SI}_{\Theta,\Phi}(Z,T), \Phi\leftarrow\Phi-\xi\nabla_{\Phi}\hat{SI}_{\Theta,\Phi}(Z,T) \tag{7}\]
which is still cheap to execute. Such a strategy can be useful when most of the sampled slices are ineffective in detecting dependence, which typically happens in later iterations where \(I(Z;T)\approx 0\).
## 4 Related works
**Neural mutual information estimators**. A set of neural network-based methods [38; 39; 40; 41] have been proposed to estimate the mutual information (MI) between two random variables, most of which work by maximising a lower bound of MI [42]. These neural MI estimators are in general more powerful than non-parametric methods [43; 44; 45; 46] when trained thoroughly, yet the time spent on training may become the computational bottleneck when applied to infomin learning.
**Upper bound for mutual information**. Another line of methods for realising the goal of infomin learning without adversarial training is to find an upper bound for mutual information [10; 12; 19; 47; 48]. However, unlike lower-bound estimates, upper bounds often require knowledge of either the conditional densities or the marginal densities [42], which are generally not available in practice. As such, most of these methods introduce a variational approximation to these densities whose choice/estimate may be difficult. Our method, on the contrary, does not need to approximate any densities.
**Slicing techniques**. A series of successes have been witnessed for the use of slicing methods in machine learning and statistics [49], with applications in generative modelling [50; 51; 52; 53], statistical testing [54] and mutual information estimation [23]. Among them, the work [23], which proposes the concept of sliced mutual information, is closely related to this work and directly inspires our method. Our contribution is a novel realisation of sliced mutual information suitable for infomin learning.
**Fair machine learning**. One application of our method is to encourage the fairness of a predictor. Much effort has been devoted to the same purpose; however, most existing methods either only work at the classifier level [4; 24; 25; 55], only focus on the case where the sensitive attribute is discrete or univariate [22; 26; 28; 55; 56], or require adversarial training [1; 2; 3; 4; 11; 21]. Our method, on the contrary, places no restriction on the form of the sensitive attribute, can be used at both the representation level and the classifier level, and requires no adversarial training of neural networks.
**Disentangled representation learning**. Most of the methods in this field work by penalising the discrepancy between the joint distribution \(P=q(Z)\) and the product of marginals \(Q=\prod_{d}^{D}q(Z_{d})\)[6; 7; 27; 57; 58].1 However, such a discrepancy is often non-trivial to estimate, so one has to resort to a Monte Carlo estimate (\(\beta\)-TCVAE [27]), to training a neural network estimator (FactorVAE [7]), or to assessing the discrepancy between \(P\) and \(Q\) through only their moments (DIP-VAE [57]). Our method avoids assessing distribution discrepancy directly and instead performs independence tests in the sliced space.
Footnote 1: Note there exist methods based on group theory [59; 60; 61] which do not assess distribution discrepancy.
## 5 Experiments
We evaluate our approach on four tasks: independence testing, algorithmic fairness, disentangled representation learning, domain adaptation. Code is available at github.com/cyz-ai/infomin.
**Evaluation metric**. To assess how much information is left in the learned representation \(Z\in\mathds{R}^{D}\) about the target \(T\in\mathds{R}^{K}\), we calculate the Renyi's maximal correlation \(\rho^{*}(Z,T)\) between \(Z\) and \(T\):
\[\rho^{*}(Z,T)=\sup_{h,g}\rho(h(Z),g(T)) \tag{8}\]
which has the properties \(\rho^{*}(Z,T)=0\) if and only if \(Z\perp T\) and \(\rho^{*}(Z,T)=1\) if \(h(Z)=g(T)\) for some deterministic functions \(h,g\)[30]. One can also understand this metric as the easiness of predicting (the transformed) \(T\) from \(Z\), or vice versa.2 As there is no analytic solution for the supremum in (8), we approximate them by two neural networks \(h,g\) trained with SGD. Early stopping and dropout are applied to avoid overfitting. The reliability of this neural approximation has been verified by the literature [4] and is also confirmed by our experiments; see Appendix B.
Footnote 2: It can be shown \(\rho^{*}(Z,T)\) is equivalent to the normalised mean square error between \(h(Z)\) and \(g(T)\).
This metric is closely related to existing metrics/losses used in fairness and disentangled representation learning such as demographic parity (DP) [24] and total correlation (TC) [7]. For example, if \(\rho^{*}(Z,T)\to 0\) then it is guaranteed that \(\hat{Y}\perp T\) for any predictor \(\hat{Y}=F(Z)\), so \(\rho^{*}(Z,T)\) is an upper bound for DP. Similarly, \(\rho^{*}(Z,T)\) coincides with TC which also assesses whether \(Z\perp T\). In addition to this metric, we will also use some task-specific metrics; see each experiment below.
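For concreteness, a rough sketch of such a neural estimate of \(\rho^{*}\) is given below; the architecture, optimiser and training budget are illustrative guesses rather than the exact configuration used here (which additionally applies dropout and early stopping on held-out data).

```python
import torch, torch.nn as nn

def renyi_correlation(Z, T, epochs=200, lr=1e-3, hidden=64):
    """Estimate rho*(Z,T) = sup_{h,g} rho(h(Z), g(T)) with two small MLPs; Z, T are float tensors."""
    h = nn.Sequential(nn.Linear(Z.shape[1], hidden), nn.ReLU(), nn.Linear(hidden, 1))
    g = nn.Sequential(nn.Linear(T.shape[1], hidden), nn.ReLU(), nn.Linear(hidden, 1))
    opt = torch.optim.Adam(list(h.parameters()) + list(g.parameters()), lr=lr)
    for _ in range(epochs):
        a, b = h(Z).squeeze(-1), g(T).squeeze(-1)
        a, b = a - a.mean(), b - b.mean()
        rho = (a * b).mean() / (a.std() * b.std() + 1e-8)
        (-rho).backward()                      # gradient ascent on the correlation
        opt.step(); opt.zero_grad()
    return float(rho.detach())
```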
**Baselines**. We compare the proposed method (denoted as "Slice") with the following approaches:
* _Pearson_, which quantifies \(I(Z;T)\) by the Pearson correlation coefficient \(\frac{1}{DK}\sum_{d}^{D}\sum_{k}^{K}\rho(Z_{d};T_{k})\). It was used in [5; 57] as an easy-to-compute proxy to MI to learn disentangled representations;
* \(dCorr\), i.e. distance correlation, a non-parametric method for quantifying the independence between two vectors [46]. It was applied in [41] as a surrogate to MI for representation learning;
* _Neural Renyi_, an adversarial method for fair machine learning [25] which quantifies \(I(Z;T)\) by the Renyi correlation \(\rho^{*}(Z,T)=\sup_{h,g}\rho(h(Z),g(T))\) with \(h,g\) approximated by neural networks. It can be seen as training a predictor to predict (the transformed) \(T\) from \(Z\) and is closely related to many existing methods in algorithmic fairness and domain adaptation [1; 2; 3; 4; 29];
* _Neural TC_, an adversarial method for learning disentangled representations [7; 62] which quantifies \(I(Z;T)\) by the total correlation \(TC(Z,T)=KL[p(Z,T)\|p(Z)p(T)]\). To compute TC, a classifier is trained to classify samples from \(p(Z,T)\) and samples from \(p(Z)p(T)\). This method can also be seen as a variant of the popular MINE method [38] for mutual information estimation.
* _v-CLUB_, i.e. variational contrastive log-ratio upper bound, which introduces a (learnable) variational distribution \(q(T|Z)\) to form an upper bound of MI [47]: \(I(Z;T)\leq\mathbb{E}_{p(Z,T)}[\log q(T|Z)]-\mathbb{E}_{p(Z)p(T)}[\log q(T|Z)]\). Like adversarial methods, \(q(T|Z)\) can be learned by a few gradient steps.
For a fair comparison, for adversarial training-based approaches (i.e. Neural Renyi, Neural TC), we ensure that the training time of the neural networks in these methods is at least the same as the execution time of our method or longer. We do this by controlling the number of adversarial steps \(L_{2}\) in Algorithm 1. The same setup is used for v-CLUB. See each experiment for the detailed time.
**Hyperparameter settings**. Throughout our experiments, we use \(200\) slices. We find that this setting is robust across different tasks. An ablation study on the number of slices is given in Appendix B. The order of the polynomial used in (6) namely \(K\) is set as \(K=3\) and is fixed across different tasks.
**Computational resource**. All experiments are done with a single NVIDIA GeForce Tesla T4 GPU.
### Independence testing
We first verify the efficacy of our method as a light-weight but powerful independence test. For this purpose, we investigate the test power of the proposed method on various synthetic datasets with different association patterns between two random variables \(X\in\mathbb{R}^{10},Y\in\mathbb{R}^{10}\) and compare it to that of the baselines. The test power is defined as the ability to discern samples from the joint distribution \(p(X,Y)\) and samples from the product of marginals \(p(X)p(Y)\) and is expressed as a probability \(p\in[0,1]\). The data is generated as \(Y=(1-\alpha)\langle t(\mathbf{A}X)\rangle+\alpha\epsilon,X_{d}\sim\mathcal{U} [-3,3]\), \(\mathbf{A}_{dd}=1,\mathbf{A}_{dk}=0.2\), \(\epsilon\sim\mathcal{N}(\epsilon;\mathbf{0},\mathbf{I})\), \(\alpha\in(0,1)\) and \(\langle\cdot\rangle\) is a scaling operation that scales the operand to the range of \([0,1]\) according to the minimum and maximum values in the population. The function \(t(\cdot)\) determines the association pattern between \(X\) and \(Y\) and is chosen from one of the following: \(t(a)=a,a^{2},\sin(a),\text{tanh}(a)\). The factor \(\alpha\) controls the strength of dependence between \(X\) and \(Y\).
All tests are done on 100 samples and are repeated 1,000 times. We choose this sample number as it is a typical batch size in mini-batch learning. For methods involving the learning of parameters (i.e. Slice, Neural Renyi, Neural TC), we learn their parameters from 10,000 samples. The time for learning the parameters of Slice, Neural Renyi and Neural TC are 0.14s, 14.37s, 30.18s respectively. For completeness, we also compare with the 'optimal test' which calculates the Renyi correlation \(\rho^{*}(X,Y)=\rho(h(X),g(Y))\) with the functions \(h,g\) exactly the same as the data generating process.
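A small NumPy sketch of this data-generating process is shown below; the per-dimension min-max scaling is our reading of the \(\langle\cdot\rangle\) operation, and the defaults are illustrative only.

```python
import numpy as np

def make_synthetic(n, alpha, t=np.sin, D=10, seed=0):
    """Y = (1 - alpha) * <t(A X)> + alpha * eps, with X_d ~ U[-3, 3], A_dd = 1, A_dk = 0.2."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n, D))
    A = np.full((D, D), 0.2)
    np.fill_diagonal(A, 1.0)
    S = t(X @ A.T)
    S = (S - S.min(0)) / (S.max(0) - S.min(0))   # scale each dimension to [0, 1]
    return X, (1.0 - alpha) * S + alpha * rng.standard_normal((n, D))

X, Y = make_synthetic(n=10_000, alpha=0.5, t=np.square)
```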
Figure 1: Comparison of the test power of different independence test methods. The x-axis corresponds to different values for the dependence level \(\alpha\) and the y-axis corresponds to the test power.
Figure 1 shows the power of different methods under various association patterns \(t\) and dependence levels \(\alpha\). Overall, we see that the proposed method can effectively detect dependence in all cases, and has a test power comparable to neural network-based methods. Non-parametric tests, by contrast, fail to detect dependence in quadratic and periodic cases, possibly due to insufficient power. Neural TC is the most powerful test among all the methods considered, yet it requires the longest time to train. We also see that the proposed method is relatively less powerful when \(\alpha\geq 0.8\), but in such cases the statistical dependence between \(X\) and \(Y\) is indeed very weak (also see Appendix B). The results suggest that our slice method can provide effective training signals for infomin learning tasks.
### Algorithmic fairness
For this task, we aim to learn fair representations \(Z\in\mathds{R}^{80}\) that are minimally informative about some sensitive attribute \(T\). We quantify how sensitive \(Z\) is w.r.t \(T\) by Renyi correlation \(\rho^{*}(Z,T)\) calculated using two neural nets. Smaller \(\rho^{*}(Z,T)\) is better. The utility of the learned representation i.e., \(\mathcal{L}(Z;Y)\) is quantified by \(\rho^{*}(Z,Y)\). This formulation for utility, as aforementioned, is equivalent to measuring how well we can predict \(Y\) from \(Z\). In summary, the learning objective is:
\[\max\rho^{*}(Z;Y)-\beta\hat{I}(Z;T),\]
where \(\hat{I}(Z;T)\) is estimated by the methods mentioned above. For each dataset considered, we use 20,000 data for training and 5,000 data for testing respectively. We carefully tune the hyperparameter \(\beta\) for each method so that the utility \(\rho^{*}(Z;Y)\) of that method is close to that of the plain model (i.e. the model trained with \(\beta=0\), denoted as N/A below; other experiments below have the same setup). For all methods, we use \(5,000\) samples in the max step (so \(N^{\prime}=5,000\) in Algorithm 1, 2).
**US Census Demographic data**. This dataset is an extraction of the 2015 American Community Survey, with 37 features about 74,000 census tracts. The target \(Y\) to predict is the percentage of children below the poverty line in a tract, and the sensitive attribute \(T\) is the ratio of women in that tract. The result is shown in Table 1. From the table we see that the proposed slice method produces highly fair representations with good utility. The low \(\rho^{*}(Z,T)\) value indicates that it is unlikely that \(T\) can be predicted from \(Z\) in our method. While adversarial methods can also to some extent achieve fairness, they are still not comparable to our method, possibly because the allocated training time is insufficient (in Appendix B we study the effect of the training time). Non-parametric methods cannot produce truly fair representations, despite being fast to execute. v-CLUB, which estimates an upper bound of MI, achieves better fairness than adversarial methods on average, but has a higher variance [63].
**UCI Adult data**. This dataset contains census data for 48,842 instances, with 14 attributes describing their education background, age, race, marital status, etc. Here, the target \(Y\) to predict is whether the income of an instance is higher than 50,000 USD, and the sensitive attribute \(T\) is the race group. The result is summarised in Table 2. Again, we see that the proposed slice method outperforms other methods in terms of both fairness and utility. For this dataset, Neural Renyi also achieves good fairness, although the gap to our method is still large. Neural TC, by contrast, can not achieve a
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & **N/A** & **Pearson** & **dCorr** & **Slice** & **Neural Rényi** & **Neural TC** & **vCLUB** \\ \hline \(\rho^{*}(Z,Y)\uparrow\) & \(0.95\pm 0.00\) & \(0.95\pm 0.00\) & \(0.95\pm 0.00\) & \(0.95\pm 0.01\) & \(0.95\pm 0.01\) & \(0.95\pm 0.02\) & \(0.94\pm 0.02\) \\ \hline \(\rho^{*}(Z,T)\downarrow\) & \(0.92\pm 0.02\) & \(0.84\pm 0.08\) & \(0.47\pm 0.08\) & \(0.07\pm 0.02\) & \(0.23\pm 0.10\) & \(0.27\pm 0.03\) & \(0.16\pm 0.10\) \\ \hline time (sec./max step) & \(0.000\) & \(0.012\) & \(0.087\) & \(0.102\) & \(0.092\) & \(0.097\) & \(0.134\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Learning fair representations on the US Census Demographic dataset. Here the utility of the representation is measured by \(\rho^{*}(Z,Y)\), while \(\rho^{*}(Z,T)\) is used to quantify the fairness of the representation. Training time is also provided as the seconds required per max step.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & **N/A** & **Pearson** & **dCorr** & **Slice** & **Neural Rényi** & **Neural TC** & **vCLUB** \\ \hline \(\rho^{*}(Z,Y)\uparrow\) & \(0.99\pm 0.00\) & \(0.99\pm 0.00\) & \(0.97\pm 0.01\) & \(0.98\pm 0.01\) & \(0.97\pm 0.01\) & \(0.98\pm 0.02\) & \(0.97\pm 0.02\) \\ \hline \(\rho^{*}(Z,T)\downarrow\) & \(0.94\pm 0.02\) & \(0.91\pm 0.06\) & \(0.71\pm 0.06\) & \(0.08\pm 0.02\) & \(0.17\pm 0.08\) & \(0.36\pm 0.13\) & \(0.26\pm 0.12\) \\ \hline time (sec./max step) & \(0.000\) & \(0.015\) & \(0.071\) & \(0.112\) & \(0.107\) & \(0.131\) & \(0.132\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Learning fair representations on the UCI Adult dataset. Here the utility of the representation is measured by \(\rho^{*}(Z,Y)\), while \(\rho^{*}(Z,T)\) is used to quantify the fairness of the representation.
comparable level of fairness under the time budget given -- a phenomenon also observed in the US Census dataset. This is possibly because the networks in Neural TC require a longer time to train. The v-CLUB method does not work very satisfactorily on this task, possibly because the time allocated to learn the variational distribution \(q(T|Z)\) is not enough, leading to a loose upper bound of \(I(Z;T)\).
### Disentangled representation learning
We next apply our method to the task of disentangled representation learning, where we wish to discover some latent generative factors irrelevant to the class label \(T\). Here, we train a conditional autoencoder \(X\approx G(Z,T)\) to learn representation \(Z=E(X)\) which encodes label-irrelevant information of \(X\). The target to recover is \(Y=X\). The utility of \(Z\) is therefore quantified as the reconstruction error: \(\mathcal{L}(Z;Y)=\mathbb{E}[\|G(Z,T)-X\|_{2}^{2}]\), resulting in the following learning objective:
\[\min\ \mathbb{E}[\|G(Z,T)-X\|_{2}^{2}]+\beta\hat{I}(Z;T).\]
The conditional autoencoder uses an architecture similar to that of a convolutional GAN [64], with the difference being that we insert an adaptation layer \(Z^{\prime}=\text{MLP}(Z,T)\) before feeding the features to the decoder. See Appendix B for the details of its architecture. All images are resized to 32 \(\times\) 32. For all methods, we use \(10,000\) samples in the max step (so \(N^{\prime}=10,000\) in Algorithms 1 and 2).
**Dsprite**. A 2D shape dataset [65] where each image is generated by four latent factors: shape, rotation, scale, and location. Here the class label \(T\) is the shape, which takes values in {square, ellipse, heart}. For this dataset, we train the autoencoder for 100 iterations with a batch size of 512. The dimensionality of the representation for this task is 20, i.e. \(Z\in\mathbb{R}^{20}\). As in the previous experiments, we provide quantitative comparisons of the utility and disentanglement of different methods in Table 3. In addition, we provide a qualitative comparison in Figure 2, which visualises the original image, the reconstructed image, and the reconstructed image with a swapped label. From Table 3, we see that the proposed method achieves a very low \(\rho^{*}(Z,T)\) while maintaining good MSE, suggesting that we
Figure 3: Label swapping experiments on CMU-PIE dataset. Left: the original image \(X\). Middle: reconstructing \(X\approx G(Z,T)\) using \(Z=E(X)\) and the true label \(T\). Right: reconstructing \(X^{\prime}=G(Z,T^{\prime})\) using \(Z=E(X)\) and a swapped label \(T^{\prime}\neq T\). Changing \(T\) only affect the expression.
Figure 2: Label swapping experiments on Dsprite dataset. Left: the original image \(X\). Middle: reconstructing \(X\approx G(Z,T)\) using \(Z=E(X)\) and the true label \(T\). Right: reconstructing \(X^{\prime}=G(Z,T^{\prime})\) using \(Z=E(X)\) and a swapped label \(T^{\prime}\neq T\). Changing \(T\) should only affect the style.
may have discovered the true label-irrelevant generative factor for this dataset. This is confirmed visually by Figure 2(b), where by changing \(T\) in reconstruction we only change the style. By contrast, the separation between \(T\) and \(Z\) is less evident in the adversarial approach, as can be seen from Table 3 as well as from Figure 2(a) (see e.g. the reconstructed ellipses in the third column of the figure, which look more like an interpolation between ellipses and squares).
**CMU-PIE**. A colored face image dataset [66] where each face image has different pose, illumination and expression. We use its cropped version [67]. Here the class label \(T\) is the expression, which takes values in {neutral, smile}. We train an autoencoder for 200 iterations with a batch size of 128. The dimensionality of the representation for this task is 128, i.e. \(Z\in\mathbb{R}^{128}\). Figure 3 and Table 4 show the qualitative and quantitative results respectively. From Figure 3, we see that our method can well disentangle expression and non-expression representations: one can easily modify the expression of a reconstructed image by only changing \(T\). Other visual factors of the image including pose, illumination, and identity remain the same after changing \(T\). The adversarial approach can to some extent achieve disentanglement between \(Z\) and \(T\); however, such disentanglement is imperfect: not all of the instances can change the expression by only modifying \(T\). This is also confirmed quantitatively by Table 4, where one can see the relatively high \(\rho^{*}(Z,T)\) values in adversarial methods. For this task, v-CLUB also achieves a low \(\rho^{*}(Z,T)\) value, though it is still outperformed by our method.
classifier \(C\) only sees labels in \(\mathcal{D}_{s}\), we call \(\mathcal{D}_{s}\) the source domain and \(\mathcal{D}_{t}\) the target domain. For the two encoders \(f_{c}\) and \(f_{d}\), we use Resnets [68] with 7 blocks trained with 100 iterations and a batch size of 128. Here \(Z_{c},Z_{d}\in\mathbb{R}^{256}\). We use \(N^{\prime}=5,000\) samples in the max step for all methods.
**MNIST \(\rightarrow\) MNIST-M**. Two digit datasets with the same classes but different background colors. Both datasets have 50,000 training samples. Table 5 shows the result, indicating that our method can more effectively remove the information about the domain. This is further confirmed by the T-SNE [69] plot in Figure 4, where one can hardly distinguish the samples of \(Z_{c}\) from the two domains. This naturally leads to a higher target domain accuracy \(\text{Acc}(\hat{Y}_{t})\) than other methods.
**CIFAR10 \(\rightarrow\) STL10**. Two datasets of natural images sharing 9 classes. There are 50,000 and 5,000 training samples in the two datasets respectively. Following existing works [47; 70; 71], we remove the non-overlapping classes from both datasets. Table 5 and Figure 4 show the result. Again, we see that our method can more effectively remove domain information from the learned representation.
## 6 Conclusion
This work proposes a new method for infomin learning without adversarial training. A major challenge is how to estimate mutual information accurately and efficiently, as MI is generally intractable. We sidestep this challenge by only testing and minimising dependence in a sliced space, which can be achieved analytically, and we showed this is sufficient for our goal. Experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify our method's efficacy.
Through our controlled experiments, we also verify that adversarial approaches indeed may not produce infomin representation reliably - an observation consistent with recent studies. This suggests that existing adversarial approaches may not converge to good solutions, or may need more time for convergence, with more gradient steps needed to train the adversary fully. The result also hints at the potential of diverse randomisation methods as an alternative to adversarial training in some cases.
While we believe our method can be used in many applications for societal benefit (e.g. for promoting fairness), since it is a general technique, one must always be careful to prevent societal harms.
## Acknowledgement
AW acknowledges support from a Turing AI Fellowship under grant EP/V025279/1, The Alan Turing Institute, and the Leverhulme Trust via CFI. YC acknowledges funding from Cambridge Trust.
Figure 4: T-SNE plots of the learned content representations \(Z_{c}\) in domain adaptation tasks. (a)(c) show the cases when the adversary is not trained thoroughly (i.e. \(L_{2}\) in Algorithm 1 is set too small).
\begin{table}
\begin{tabular}{c c c c c} \multicolumn{5}{c}{MNIST \(\rightarrow\) MNIST-M} \\ \hline & **N/A** & **Slice** & **Neural TC** & **vCLUB** \\ \hline \hline \(\text{Acc}(\hat{Y}_{s})\uparrow\) & \(99.3\pm 0.04\) & \(99.2\pm 0.02\) & \(99.2\pm 0.04\) & \(99.0\pm 0.03\) \\ \hline \(\text{Acc}(\hat{Y}_{t})\uparrow\) & \(46.3\pm 0.03\) & \(98.5\pm 0.45\) & \(80.1\pm 0.17\) & \(98.0\pm 0.10\) \\ \hline \(\rho^{*}(Z_{c},Z_{d})\downarrow\) & \(0.86\pm 0.05\) & \(0.06\pm 0.01\) & \(0.64\pm 0.04\) & \(0.49\pm 0.12\) \\ \hline time (sec./max step) & \(0.000\) & \(2.578\) & \(3.282\) & \(3.123\) \\ \hline \end{tabular}
\begin{tabular}{c c c c c} \multicolumn{5}{c}{CIFAR10 \(\rightarrow\) STL10} \\ \hline & **N/A** & **Slice** & **Neural TC** & **vCLUB** \\ \hline \(\text{Acc}(\hat{Y}_{s})\uparrow\) & \(93.0\pm 0.03\) & \(92.5\pm 0.03\) & \(92.4\pm 0.03\) & \(92.1\pm 0.04\) \\ \hline \(\text{Acc}(\hat{Y}_{t})\uparrow\) & \(75.9\pm 0.09\) & \(82.3\pm 0.03\) & \(80.8\pm 0.08\) & \(78.5\pm 0.11\) \\ \hline \(\rho^{*}(Z_{c},Z_{d})\downarrow\) & \(0.43\pm 0.05\) & \(0.08\pm 0.01\) & \(0.39\pm 0.07\) & \(0.42\pm 0.09\) \\ \hline time (sec./max step) & \(0.000\) & \(3.146\) & \(3.222\) & \(3.080\) \\ \hline \end{tabular}
\end{table}
Table 5: Learning domain-invariant representations. Here \(\text{Acc}(\hat{Y}_{s})\) and \(\text{Acc}(\hat{Y}_{t})\) are the classification accuracy in the source and the target domains respectively. Time used per max step is given. |
2303.07154 | Differential Good Arm Identification | This paper targets a variant of the stochastic multi-armed bandit problem
called good arm identification (GAI). GAI is a pure-exploration bandit problem
with the goal to output as many good arms using as few samples as possible,
where a good arm is defined as an arm whose expected reward is greater than a
given threshold. In this work, we propose DGAI - a differentiable good arm
identification algorithm to improve the sample complexity of the
state-of-the-art HDoC algorithm in a data-driven fashion. We also showed that
the DGAI can further boost the performance of a general multi-arm bandit (MAB)
problem given a threshold as a prior knowledge to the arm set. Extensive
experiments confirm that our algorithm outperform the baseline algorithms
significantly in both synthetic and real world datasets for both GAI and MAB
tasks. | Yun-Da Tsai, Tzu-Hsien Tsai, Shou-De Lin | 2023-03-13T14:28:21Z | http://arxiv.org/abs/2303.07154v3 | # Differentiable Good Arm Identification
###### Abstract
This paper targets a variant of the stochastic multi-armed bandit problem called good arm identification (GAI). GAI is a pure-exploration bandit problem whose goal is to output as many good arms using as few samples as possible, where a good arm is defined as an arm whose expected reward is greater than a given threshold. In this work, we propose DGAI - a differentiable good arm identification algorithm to improve the sample complexity of the state-of-the-art HDoC algorithm in a data-driven fashion. We also show that DGAI can further boost the performance on a general multi-armed bandit (MAB) problem given a threshold as prior knowledge about the arm set. Extensive experiments confirm that our algorithm outperforms the baseline algorithms significantly in both synthetic and real world datasets for both GAI and MAB tasks.
Department of Computer Science, National Taiwan University
[email protected], [email protected]
## 1 Introduction
The bandit problem is a sequential decision-making problem that uses observed stochastic rewards as feedback. In the classic multi-armed bandit (MAB) problem, the goal is to maximize the expected cumulative reward over a number of trials by solving the exploration-exploitation dilemma. Another classic bandit problem is best arm identification (BAI) [1, 10, 12], a pure-exploration problem in which the agent tries to identify the best arm with minimum sample complexity. In this paper, we specifically focus on the good arm identification (GAI) [12] problem, a variant of the pure-exploration problem derived from BAI.
In GAI, a good arm is defined as an arm whose expected reward is greater than or equal to a given threshold. The agent outputs an arm as soon as it identifies it as good with a given error probability over repeated trials, and stops when it believes no good arm remains. Thus, the goal of GAI is to identify as many good arms as possible using as few samples as possible. There are some key differences between GAI and BAI: 1. GAI identifies arms by absolute comparison (i.e. above a threshold) instead of relative comparison (best), which demands a different optimization strategy. 2. Unlike BAI, which needs to examine the entire arm set to identify the best arm, GAI prefers an anytime algorithm that can still return a reasonably good solution even if it is interrupted during processing. Therefore, GAI is often more useful for applications such as online advertisement and high-speed trading, where decisions need to be made within a certain amount of time [12]. 3. The GAI problem can be regarded as dealing with an exploration-exploitation dilemma of confidence, where exploitation means pulling the arm that is currently most likely to be good in order to increase the confidence of identifying it, and exploration means pulling other arms to increase the confidence of identifying them as either good or bad.
The basic framework of GAI algorithms such as HDoC, LUCB-G, and APT-G [12] is composed of a sampling strategy and an identification criterion, as shown in Algorithm 1. The sampling strategy determines which arm to draw, while the identification criterion decides whether to accept an arm as good. Once an arm is identified as a good arm, it is removed from the pool as there is no need to sample it anymore. State-of-the-art algorithms such as HDoC combine the UCB algorithm for the sampling strategy and an LCB criterion for identification, so that the overall sample complexity is bounded in terms of a union bound over \(\Delta_{i}=|\mu_{i}-\xi|\) (i.e. the gap between an arm and the threshold, used for identification) and \(\Delta_{i,j}=\mu_{i}-\mu_{j}\) (i.e. the gap between arms, used for sampling), whichever is larger, where \(\Delta=\min\{\min_{i\in[K]}\Delta_{i},\min_{\lambda\in[K-1]}\Delta_{\lambda, \lambda+1}/2\}\).
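For orientation, a schematic sketch of this UCB-sampling / LCB-identification loop is given below; the confidence radii use generic \(\sqrt{\log(\cdot)/n}\) forms rather than the exact constants of HDoC, and `pull(i)` is a placeholder returning a stochastic reward in \([0,1]\).

```python
import numpy as np

def gai_loop(pull, K, xi, delta, horizon=100_000):
    """Schematic GAI framework: sample by UCB, output/drop arms by comparing LCB/UCB with xi."""
    n, s = np.zeros(K), np.zeros(K)
    active, good = set(range(K)), []
    for t in range(1, horizon + 1):
        if not active:
            break
        if t <= K:                                   # pull each arm once first
            i = t - 1
        else:                                        # sampling strategy: highest UCB among active arms
            ucb = {j: s[j] / n[j] + np.sqrt(np.log(t) / (2 * n[j])) for j in active}
            i = max(ucb, key=ucb.get)
        s[i] += pull(i); n[i] += 1
        rad = np.sqrt(np.log(4 * K * n[i]**2 / delta) / (2 * n[i]))   # identification radius
        mean = s[i] / n[i]
        if i in active and mean - rad >= xi:
            good.append(i); active.discard(i)        # confidently good: output and stop sampling it
        elif i in active and mean + rad < xi:
            active.discard(i)                        # confidently bad: drop it
    return good
```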
The combined use of two independent confidence bounds leads to the union-bound sample complexity, which might not be optimal in some situations. For instance, suppose there exist many arms whose estimated rewards are very similar but far below the threshold; the UCB algorithm can still spend a lot of exploration effort sampling them in order to distinguish them from each other, even though they have very little chance of being good. This can happen in applications such as recommendation systems, where many items have similar but low click-through ratios. Therefore, we argue that the confidence bounds for both sampling and identification in GAI should be learned jointly in a data-driven manner to avoid the union bound, since which term dominates depends on the distribution of the gaps between arms and to the threshold.
Inspired by SoftUCB [13], we propose an algorithm, Differentiable GAI (DGAI), that uses a differentiable UCB-based index to achieve the above goal. The proposed index is more informative than the classical UCB index and can be combined with a smoothed indicator function to learn the optimal confidence bounds for the sampling strategy and the identification criterion respectively. The objective is to adjust the scale of the confidence bound through a controlling parameter. With the newly designed differentiable algorithm and learning objective, we can greatly improve the empirical performance compared to state-of-the-art GAI algorithms.
Next, we show that DGAI can also be applied to solve a conventional MAB task given a threshold as a prior. Generally, the criteria for eliminating sub-optimal arms in MAB and bad arms in GAI are the same when the conventional UCB algorithm is chosen for sampling, as shown in Sec. 4.4. However, when applying the DGAI algorithm to MAB, the criterion (i.e. the confidence bound) used to eliminate sub-optimal arms can be further improved in a data-driven fashion. In this case, the threshold serves as prior knowledge about the unknown arm set for our differentiable algorithm to learn and adjust the confidence bound. Thus, we can reformulate the MAB objective function to both maximize the cumulative reward and eliminate sub-optimal arms, further improving the empirical performance on MAB problems.
In summary, this paper makes the following contributions:
1. proposing DGAI, a differentiable algorithm, to learn the confidence bounds for both sampling and identification via gradient ascent with our designed objective function.
2. showing that the theoretical lower bound for GAI can be improved.
3. showing that DGAI can be applied to boost the performance of the typical MAB task
4. through experiments, demonstrating that DGAI can outperform state-of-the-art baselines on both synthetic and real-world datasets.
## 2 Related works
### Good arm identification
(Kano et al., 2019) is the first work to formulate the GAI problem as a pure-exploration problem in the fixed confidence setting. In the fixed confidence setting, an acceptance error rate (confidence) is fixed and the goal is to minimize the number of arm pulls needed to output good arms (sample complexity). It addresses a new kind of dilemma: the exploration-exploitation dilemma of confidence, where exploration means that the agent pulls arms other than the currently best one in order to increase the confidence of identifying them as good or bad, and exploitation means that the agent pulls the currently best arm to increase the confidence in its goodness. They then propose three algorithms: a Hybrid algorithm for the Dilemma of Confidence (HDoC), the Lower and Upper Confidence Bounds algorithm for GAI (LUCB-G), which is based on the LUCB algorithm for best arm identification (Kalyanakrishnan et al., 2012), and the Anytime Parameter-free Thresholding algorithm for GAI (APT-G), which is based on the APT algorithm for the thresholding bandit problem (Locatelli et al., 2016). The lower bound on the sample complexity for GAI is \(\Omega(\lambda\log\frac{1}{\delta})\), and HDoC can find \(\lambda\) good arms within \(O(\lambda\log\frac{1}{\delta}+(K-\lambda)\log\log\frac{1}{\delta})\) samples.
### Differentiable bandit algorithm
SoftUCB (Yang and Toni, 2020) addresses policy-gradient optimization of bandit policies via a differentiable bandit algorithm. They propose a differentiable UCB-typed linear bandit algorithm with a subtle combination of EXP3 (Auer et al., 1995) and Phased Elimination (Lattimore and Szepesvari, 2020) that enables the confidence bound to be learned purely in a data-driven fashion without relying on concentration inequalities or assumptions on the form of the reward distribution. With the differentiable UCB-based index and the gradient estimator, the learned confidence bound can adapt to the actual problem structure. (Yang and Toni, 2020) also proved that the differentiable UCB-based index has a regret bound that scales with \(d\) and \(T\) as \(\mathcal{O}(\beta\sqrt{dT})\), comparable to classical UCB-typed algorithms, where \(d\) is the dimension of the input to the linear bandit model and \(T\) is the total number of sampling rounds.
## 3 Preliminary
In this section, we first formulate the GAI problem in the fixed confidence setting and show the lower bound on sample complexity for GAI. Next we introduce the basic framework for GAI algorithms, which follows the HDoC algorithm. We give the notations in Table 1.
### Good arm identification
Let \(K\) be the number of arms, \(\xi\) be the threshold, and \(\delta\) be an acceptance error rate. The reward of arm \(i\), \(i\in[1,\ldots,K]\), follows a distribution with mean \(\mu_{i}\), which is unknown at the beginning. We
\begin{table}
\begin{tabular}{|l||l|} \hline \(K\) & Number of arms. \\ \(m\) & Number of good arms (unknown). \\ \(A\) & Set of arms, where \(|A|=K\). \\ \(\xi\) & Threshold determining whether an arm is good or bad. \\ \(\delta\) & Acceptance error rate. \\ \(\mu_{i}\) & The true mean of the \(i^{th}\) arm. \\ \(\hat{\mu}_{i,t}\) & The empirical mean of the \(i^{th}\) arm at time \(t\). \\ \(\tau_{\lambda}\) & The number of rounds to identify whether \\ & the \(\lambda^{th}\) arm is good or bad. \\ \(\tau_{stop}\) & Round at which the agent confirms no good arm is left. \\ \(N_{i}(t)\) & The number of samples of arm \(i\) by the end of sampling round \(t\). \\ \(U_{t}\) & Confidence bound at time \(t\). \\ \(\overline{\mu}_{i,t}\) & Upper confidence bound. \\ \(\underline{\mu}_{i,t}\) & Lower confidence bound. \\ \(\|x\|_{M}\) & \(\sqrt{x^{T}Mx}\), \(x\in\mathbb{R}^{d}\). \\ \(\Delta_{i}\) & \(|\mu_{i}-\xi|\). \\ \(\Delta_{i,j}\) & \(\mu_{i}-\mu_{j}\). \\ \hline \end{tabular}
\end{table}
Table 1: Notations
define good arms as those arms whose means are larger than the given threshold \(\xi\). Without loss of generality, we let
\[\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{m}\geq\xi\geq\mu_{m+1}\geq\cdots\geq\mu_{K} \tag{1}\]
Likewise, the agent is unaware of the total number of good arms \(m\).
In each round \(t\), the agent selects arm \(a(t)\) to pull and receives the reward. The agent identifies an arm as good or bad according to the rewards this arm has previously received, and stops at time \(\tau_{stop}\) when every arm has been identified as either good or bad. The agent outputs \(\hat{a}_{1},\hat{a}_{2},\dots\) as good arms at rounds \(\tau_{1},\tau_{2},\dots\) respectively.
### Lower bound on sample complexity for GAI
This lower bound on the sample complexity for GAI is given in [10] in terms of top-\(\lambda\) expectations \(\{\mu_{i}\}_{i=1}^{\lambda}\) and is tight up to the logarithmic factor \(O(\log\frac{1}{\delta})\).
**Definition 1** (\((\lambda,\delta)\)-Pac).: _An algorithm satisfying the following conditions is called (\(\lambda\), \(\delta\))-PAC: if there are at least \(\lambda\) good arms then \(\mathbb{P}[\{\hat{m}<\lambda\}\cup\bigcup_{i\in\{\hat{a}_{1},\hat{a}_{2}, \dots,\hat{a}_{\lambda}\}}\{\mu_{i}<\xi\}]\leq\delta\) and if there are less than \(\lambda\) good arms then \(\mathbb{P}[\hat{m}\geq\lambda]\leq\delta\)._
**Theorem 1**.: _Under any \((\lambda,\delta)\)-PAC algorithm, if there are \(m\geq\lambda\) good arms, then_
\[\mathbb{E}[\tau_{\lambda}]\geq\left(\sum_{i=1}^{\lambda}\frac{1}{d(\mu_{i},\xi )}\log\frac{1}{2\delta}\right)-\frac{m}{d(\mu_{\lambda},\xi)}, \tag{2}\]
_where \(d(x,y)=x\log(x/y)+(1-x)\log((1-x)/(1-y))\) is the binary relative entropy, with convention that \(d(0,0)=d(1,1)=0\)._
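For concreteness, the bound can be evaluated numerically; the following short Python sketch (our own illustration, not part of the original statement) computes the right-hand side of (2) for a given vector of means, threshold and error rate.

```python
import math

def d(x, y):
    """Binary relative entropy d(x, y), with the convention d(0,0) = d(1,1) = 0."""
    if x == y:
        return 0.0
    total = 0.0
    if x > 0:
        total += x * math.log(x / y)
    if x < 1:
        total += (1 - x) * math.log((1 - x) / (1 - y))
    return total

def gai_lower_bound(means, xi, delta, lam):
    """Right-hand side of the (lambda, delta)-PAC lower bound in Eq. (2)."""
    m = sum(mu >= xi for mu in means)                 # number of good arms
    top = sorted(means, reverse=True)[:lam]           # top-lambda expectations
    total = sum(math.log(1 / (2 * delta)) / d(mu, xi) for mu in top)
    return total - m / d(top[-1], xi)

print(gai_lower_bound([0.8, 0.7, 0.4, 0.3], xi=0.5, delta=0.05, lam=2))
```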
### Hybrid algorithm for the Dilemma of Confidence (HDoC)
HDoC's sampling strategy is based on the upper confidence bound (UCB) algorithm for the cumulative regret minimization [1], and its identification criterion (i.e. to output an arm as a good one) is based on the lower confidence bound (LCB) identification [10]. The upper bound on the sample complexity of HDoC given in [10] is as below:
**Theorem 2**.: _Assume that \(\Delta_{\lambda,\lambda+1}>0\). Then, for any \(\lambda\leq m\) and \(\epsilon<\min\{\min_{i\in[K]}\Delta_{i},\,\Delta_{\lambda,\lambda+1}/2\}\),_
\[\mathbb{E}[\tau_{\lambda}]\leq\sum_{i\in[\lambda]}n_{i}+\sum_{i\in[K]\setminus[\lambda]}\left(\frac{\log(K\max_{j\in[K]}n_{j})}{2(\Delta_{\lambda,i}-2\epsilon)^{2}}+\delta n_{i}\right)\] \[\qquad\qquad+\frac{K\,2^{-\frac{\epsilon^{2}}{(\min_{i\in[K]}\Delta_{i}-\epsilon)^{2}}}}{2\epsilon^{2}}+\frac{K(5+\log\frac{1}{2\epsilon^{2}})}{4\epsilon^{2}} \tag{3}\]
\[\mathbb{E}[\tau_{\mathrm{stop}}]\leq\sum_{i\in[K]}n_{i}+\frac{K}{2\epsilon^{2}} \tag{4}\]
_where_
\[n_{i}=\frac{1}{(\Delta_{i}-\epsilon)^{2}}\log\left(\frac{4\sqrt{K/\delta}}{( \Delta_{i}-\epsilon)^{2}}\log\frac{5\sqrt{K/\delta}}{(\Delta_{i}-\epsilon)^{ 2}}\right) \tag{5}\]
The following corollary comes from this theorem.
**Corollary 1**.: _Let \(\Delta\)=\(\min\{\min_{\lambda\in[K-1]}\Delta_{\lambda,\lambda+1}/2,\min_{i\in[K]}\Delta_{i}\}\). Then, for any \(\lambda\leq m\),_
\[\mathbb{E}[\tau_{\lambda}]=\mathcal{O}\left(\frac{\lambda\log\frac{1}{\delta }+(K-\lambda)\log\log\frac{1}{\delta}+K\log\frac{K}{\Delta}}{\Delta^{2}}\right) \tag{6}\]
\[\mathbb{E}[\tau_{\mathrm{stop}}]=\mathcal{O}\left(\frac{K\log(1/\delta)+K\log (K/\Delta)}{\Delta^{2}}\right) \tag{7}\]
Note that the evaluation of the error probability in HDoC is based on a union bound over all rounds \(t\in\mathbb{N}\), and the use of the union bound does not worsen the asymptotic analysis for \(\delta\to 0\). However, the empirical performance can be considerably improved by avoiding the union bound and using an identification criterion based on a tighter bound. The complexity measure for the sampling strategy is based on the gap between arms \(\Delta_{i,j}\), and the complexity measure for the identification criterion is based on the gap to the threshold \(\Delta_{i}\).
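To make the framework concrete, here is a minimal Python sketch of an HDoC-style loop, assuming rewards in \([0,1]\); the confidence radii below use generic \(\sqrt{\log t/N}\)-type forms rather than the exact constants of the theorem, so it is an illustration rather than a faithful reimplementation.

```python
import math

def hdoc_sketch(pull, K, xi, delta, horizon):
    """Minimal HDoC-style loop: UCB for sampling, LCB for identification.
    `pull(i)` returns a stochastic reward in [0, 1]; constants are illustrative."""
    mean, count = [0.0] * K, [0] * K
    active, good = set(range(K)), []
    for i in range(K):                       # initialise: pull every arm once
        mean[i], count[i] = pull(i), 1
    for t in range(K + 1, horizon + 1):
        if not active:
            break
        # sampling: pull the active arm with the largest upper confidence bound
        i = max(active, key=lambda j: mean[j] + math.sqrt(math.log(t) / (2 * count[j])))
        r = pull(i)
        count[i] += 1
        mean[i] += (r - mean[i]) / count[i]
        # identification: accept or reject arm i with a delta-dependent radius
        radius = math.sqrt(math.log(4 * K * count[i] ** 2 / delta) / (2 * count[i]))
        if mean[i] - radius >= xi:           # LCB above threshold: output as good
            good.append(i); active.remove(i)
        elif mean[i] + radius < xi:          # UCB below threshold: discard as bad
            active.remove(i)
    return good
```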
## 4 Algorithm
In this section, we propose the differentiable GAI algorithm, DGAI, and show its improvements over the state-of-the-art HDoC algorithm.
### Differentiable Algorithm
For simplicity, we first introduce the differentiable bandit algorithm in linear bandit matrix form. Each arm \(i\in\mathcal{A}\) is associated with a known feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\). Linear bandits reduce to discrete, independent arms when one-hot feature vectors are used. The expected reward of each arm \(\mu_{i}=\mathbf{x}_{i}^{T}\boldsymbol{\theta}\) follows a linear relationship over \(\mathbf{x}_{i}\) and an unknown parameter vector \(\boldsymbol{\theta}\). At each round \(t\), the confidence bound is defined as \(|\hat{\mu}_{i,t}-\mu_{i}|\leq U_{t},\ \ \forall i\in\mathcal{A}\), where \(U_{t}=\beta||\mathbf{x}_{i}||_{\mathbf{V}_{t}^{-1}}\) and \(\mathbf{V}_{t}=\sum_{s=1}^{t}\mathbf{x}_{s}\mathbf{x}_{s}^{T}\) is the Gram matrix up to round \(t\). The factor \(\beta\) in the confidence bound can be derived based on different concentration inequalities under the
assumption on the stochastic reward structure, e.g., Hoeffding (1994), self-normalized bounds (Abbasi-Yadkori, Pal, and Szepesvari 2011), the Azuma inequality (Lattimore and Szepesvari 2018), or the Bernstein inequality (Mnih, Szepesvari, and Audibert 2008). In DGAI, we aim at learning \(\beta\) in a data-driven fashion.
Let us denote the arm with the largest lower confidence bound at round \(t\) as \(i_{*}=\arg\max_{i\in\mathcal{A}}\hat{\mu}_{i,t}-U_{i,t}\). Let us also define
\[\phi_{i,t}=U_{i,t}+U_{i_{*},t}\text{ \ and \ }\hat{\Delta}_{i,t}=\hat{\mu}_{i_{*},t}- \hat{\mu}_{i,t} \tag{8}\]
and \(\hat{\Delta}_{i,t}\) is the estimated reward gap between \(i_{*}\) and \(i\). Equipped with the above notations, we are now ready to introduce the UCB-based index \(S_{i,t}\) defined as
\[S_{i,t}=\beta\phi_{i,t}-\hat{\Delta}_{i,t} \tag{9}\]
**Lemma 1**.: _The new UCB-based index \(S\) provides the following information: \(S_{i,t}<0\), i.e., \(\mu_{*}-\mu_{i}>0\), implies arm \(i\) is a sub-optimal arm. \(S_{i,t}\geq S_{j,t}\geq 0\), i.e., \(\hat{\mu}_{i,t}+U_{i,t}\geq\hat{\mu}_{j,t}+U_{j,t}\), implies arm \(i\) has larger upper confidence bound._
At each round \(t\in[T]\), the probability for arm \(i\) to be selected is defined as
\[p_{i,t}=\frac{\exp(\gamma_{t}S_{i,t})}{\sum_{j=1}^{K}\exp(\gamma_{t}S_{j,t})} \,\ \gamma_{t}=\frac{\log\left(\frac{\delta|\mathcal{L}_{t}|}{1-\delta}\right)}{ \bar{S}_{\text{max},t}} \tag{10}\]
where \(\gamma_{t}>0\) is the coldness parameter introduced in (Yang and Toni 2020) for controlling the concentration of the policy distribution, as explained in Lemma 2. \(\mathcal{L}_{t}\) is the set of suboptimal arms (i.e., \(S_{i,t}<0\)) and \(\mathcal{U}_{t}\) is the set of non-suboptimal arms (i.e., \(S_{i,t}\geq 0\)), with \(\mathcal{U}_{t}\cup\mathcal{L}_{t}=\mathcal{A}\) and \(\mathcal{U}_{t}\cap\mathcal{L}_{t}=\emptyset\). \(\bar{S}_{\text{max},t}=\max_{i\in\mathcal{U}_{t}}S_{i,t}\), \(|\mathcal{L}_{t}|\) is the cardinality of \(\mathcal{L}_{t}\), and \(\delta\) is a probability hyper-parameter.
**Lemma 2**.: _At any round \(t\in[T]\), for any \(\delta\in(0,1)\), setting \(\gamma_{t}\geq\log(\frac{\delta|\mathcal{L}_{t}|}{1-\delta})/\bar{S}_{\text{ max},t}\) guarantees that \(p_{\mathcal{U}_{t}}=\sum_{i\in\mathcal{U}_{t}}p_{i,t}\geq\delta\) and \(p_{\mathcal{L}_{t}}=\sum_{i\in\mathcal{L}_{t}}p_{i,t}<1-\delta\)._
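For discrete arms with one-hot features, \(\|x_{i}\|_{\mathbf{V}_{t}^{-1}}=1/\sqrt{N_{i}(t)}\), and the index and policy above can be computed as in the following sketch (the coldness parameter \(\gamma_{t}\) is passed in directly rather than computed from Eq. (10); variable names are ours).

```python
import numpy as np

def soft_ucb_policy(mu_hat, counts, beta, gamma):
    """UCB-based index S_{i,t} (Eq. (9)) and soft sampling policy p_{i,t} (Eq. (10)),
    specialised to discrete arms, where ||x_i||_{V_t^{-1}} = 1 / sqrt(N_i(t))."""
    norm = 1.0 / np.sqrt(counts)              # ||x_i||_{V_t^{-1}} for one-hot features
    U = beta * norm                            # confidence radii
    i_star = int(np.argmax(mu_hat - U))        # arm with the largest lower bound
    phi = norm + norm[i_star]
    gap = mu_hat[i_star] - mu_hat              # estimated gaps to the LCB-best arm
    S = beta * phi - gap                       # differentiable UCB-based index
    z = gamma * S
    p = np.exp(z - z.max())                    # numerically stable softmax
    return S, p / p.sum()

S, p = soft_ucb_policy(np.array([0.6, 0.5, 0.3]), np.array([10, 8, 12]), beta=1.0, gamma=2.0)
```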
### Training Objectives
The optimization objectives for the sampling strategy and the identification criterion are independent and learned separately. The optimization objective of the sampling strategy is similar to SoftUCB in that it maximizes the cumulative reward, but with different constraints, via the following differentiable function over \(\beta\):
\[\begin{split}\max_{\beta}\sum_{t=1}^{T}\mathbb{E}[y_{t}]=\max_{ \beta}\sum_{t=1}^{T}\sum_{i=1}^{K}p_{i,t}\mu_{i}\,\\ s.t.\ |\mu_{i}-\hat{\mu}_{i,t}|\lessapprox\beta||\mathbf{x}_{i} ||_{\mathbf{V}_{t}^{-1}},\ \forall i\in\mathcal{A},t\in[T]\end{split} \tag{11}\]
The additional constraint forces the upper confidence bound to be as tight as possible and ensures that \(\beta\) is minimal while \(\beta||\mathbf{x}_{i}||_{\mathbf{V}_{t}^{-1}}\) remains an actual upper confidence bound at any round \(t\in[T]\) for any arm \(i\in\mathcal{A}\). Applying Lagrange multipliers gives the objective, with \(\eta\) as the hyperparameter balancing the constraints:
\[\begin{split}\max_{\beta}\sum_{t=1}^{T}\sum_{i=1}^{K}p_{i,t}\mu_ {i}-\eta_{1}C-\eta_{2}|C|\,\ \ s.t.\ \eta>0\\ \text{where \ }C=|\mu_{i}-\hat{\mu}_{i,t}|-\beta||\mathbf{x}_{i} ||_{\mathbf{V}_{t}^{-1}}\end{split} \tag{12}\]
The objective for the identification criterion is to maximize the _Exploit score_, defined as follows:
\[\sum_{i=1}^{K}(T-t_{i})*(\mu_{i}-\xi) \tag{13}\]
where \(t_{i}\) is the round at which arm \(i\) is identified. The exploit score gives a positive reward when an arm is identified correctly, and the reward increases if the arm is identified earlier. This conditional reward is expressed via an indicator function with the lower confidence bound in the objective:
\[\begin{split}\max_{\alpha}\sum_{t=1}^{T}\sum_{i=1}^{K}\mathbb{I} [\mu_{i,t}-\alpha\|x_{i}\|_{V_{t}^{-1}}>\xi]*(\mu_{i,t}-\xi)\\ s.t.|\mu_{i,t}-\xi|\gtrapprox\alpha\|x_{i}\|_{V_{t}^{-1}},\ \forall i\in\mathcal{A},t\in[T]\end{split} \tag{14}\]
The indicator function is relaxed and smoothed with a sigmoid function, and the Lagrange multipliers again yield the following objective:
\[\max_{\alpha}\sum_{t=1}^{T}\sum_{i=1}^{K}\sigma((\mu_{i,t}-\alpha \|x_{i}\|_{V_{t}^{-1}}-\xi)*M)*(\mu_{i,t}-\xi)\\ -\eta_{1}D-\eta_{2}|D|\,\ \ s.t.\ \eta>0\,\ \text{where}\\ D=\alpha||\mathbf{x}_{i}||_{\mathbf{V}_{t}^{-1}}-|\mu_{i}-\hat{ \mu}_{i,t}|\,\ \ \sigma(x)=\frac{1}{1+e^{-x}} \tag{15}\]
\(\eta\) is for balancing the constraints and \(M\) is for tuning the sharpness of the indicator function.
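As an illustration, the smoothed identification objective of Eq. (15) can be written as a differentiable loss in a few lines; the sketch below uses PyTorch, assumes the true means are available (as in offline training on a synthetic arm set), and all names and constants are illustrative.

```python
import torch

def identification_loss(alpha, mu_hat, mu_true, norms, xi, M=50.0, eta1=1e-3, eta2=1e-3):
    """Negative of the smoothed identification objective of Eq. (15) at one round.
    `norms` holds ||x_i||_{V_t^{-1}}; `mu_true` is only known in offline training."""
    lcb = mu_hat - alpha * norms
    exploit = torch.sigmoid((lcb - xi) * M) * (mu_hat - xi)   # relaxed indicator * margin
    D = alpha * norms - torch.abs(mu_true - mu_hat)           # constraint residual
    return -(exploit.sum() - eta1 * D.sum() - eta2 * D.abs().sum())

mu_hat = torch.tensor([0.62, 0.55, 0.38])
mu_true = torch.tensor([0.60, 0.52, 0.40])
norms = torch.tensor([0.10, 0.20, 0.15])
alpha = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.SGD([alpha], lr=0.1)
loss = identification_loss(alpha, mu_hat, mu_true, norms, xi=0.5)
loss.backward(); opt.step(); opt.zero_grad()                  # one gradient step on alpha
```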
### Training Settings
We train our differentiable algorithm in both offline and online settings. The online setting is more suitable for bandit problems, while the offline setting shows the converged optimal confidence bound and serves as an upper bound for the online setting.
Offline setting In the offline setting, we run multiple epochs of \(T\)-round training trajectories with the identical arm set \(\mathcal{A}\) to train \(\alpha\) and \(\beta\). First, \(\alpha_{0}\) and \(\beta_{0}\) are initialized. Both parameters are used to run one entire \(T\)-round training trajectory. After the trajectory, \(\alpha\) and \(\beta\) are updated with Eq. 12 and Eq. 15 respectively. After \(N\) epochs of trajectories, we obtain the finalized confidence bound with the learned \(\alpha_{N}\) and \(\beta_{N}\). DGAI for learning the identification criterion in the offline setting is shown in Algorithm 2. The time complexity for training in the offline setting is \(\mathcal{O}(NKT)\).
Online setting In this setting, \(\alpha\) and \(\beta\) are updated online within a single \(T\)-round trajectory. First, \(\alpha_{0}\) and \(\beta_{0}\) are initialized, and every \(b\in[T]\) rounds \(\alpha\) and \(\beta\) perform a batch update, where \(b\) is the batch size.
We adopt policy gradient methods, as in non-episodic reinforcement learning problems [14], to update the parameters in an online fashion. Instead of maximizing the cumulative reward until the end of the trajectory, we use the observed cumulative reward up to round \(t\) plus the bootstrapped future reward under the current policy \(\mathbf{\pi}_{t}=[p_{1,t},p_{2,t},...,p_{K,t}]\). The online objectives for sampling and identification are as follows, where \(R\) is the reward feedback from the original objective functions Eq. 11 and Eq. 13:
\[\max_{\alpha,\beta}\left(\sum_{s=1}^{t}\sum_{i=1}^{K}R_{i,s}+(T-t)\sum_{i=1}^{ K}R_{i,t}\right)/T \tag{16}\]
The time complexity for training in the online setting is \(O(KT)\).
### Multi-arm bandit with threshold
In the multi-arm bandit, the upper confidence bound used for sampling can also be used as an identification criterion, since \(|\hat{\mu}_{i,t}-\mu_{i}|\leq U\) implies that arm \(i\) is good (\(\mu_{i}\geq\xi\)) whenever \(\hat{\mu}_{i,T_{i}(t)}-U\geq\xi\). However, differently from the original UCB sampling strategy, we apply two different confidence bounds together to help solve the multi-arm bandit problem. The confidence bound for the sampling strategy derived from Eq. 11 selects arms with a criterion that softly eliminates sub-optimal arms. The other confidence bound, for identification, derived from Eq. 14, selects arms with a criterion that rejects arms below the threshold.
The additional usage of the identification bound effectively uses the threshold as prior knowledge of the arm set \(A\), so that we can use the threshold to optimize and adjust the confidence bound in order to maximize cumulative reward. The objective function to solve the multi-arm bandit with a threshold reformulates Eq. 11 and Eq. 14 into one:
\[\max_{\alpha,\beta}\sum_{t=1}^{T}\mathbb{E}[y_{t}]=\max_{\alpha, \beta}\sum_{t=1}^{T}\sum_{i=1}^{K}p_{i,t}\mu_{i}\, \tag{17}\] \[p_{i,t}=\frac{exp(S_{i,t}*I_{i,t}^{\xi})}{\sum exp(S_{j,t}*I_{i,t }^{\xi})}\,\] \[I_{i,t}=\sigma(\left(\mu_{i,t}-\alpha\|x_{i}\|_{V_{t}^{-1}}-\xi \right)*M)\]
The additional usage of the identification bound brings improvements as long as the given threshold guarantees that at least one arm has reward above it. The closer the threshold is to the optimal arm, the larger the improvement.
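A sketch of the reweighted policy in Eq. (17), reusing the index \(S_{i,t}\) from the earlier sketch; the smoothing constant \(M\) and variable names are our own choices.

```python
import numpy as np

def thresholded_policy(S, mu_hat, counts, alpha, xi, M=50.0):
    """Policy of Eq. (17): the index S_{i,t} is reweighted by a smoothed indicator
    that the arm's lower confidence bound clears the threshold."""
    norm = 1.0 / np.sqrt(counts)
    indicator = 1.0 / (1.0 + np.exp(-(mu_hat - alpha * norm - xi) * M))   # sigmoid
    z = S * indicator
    p = np.exp(z - z.max())
    return p / p.sum()
```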
## 5 Experiment
Here we would like to evaluate (1) whether DGAI can improve the sampling complexity and (2) whether DGAI can benefit the multi-arm bandit problem with a given threshold.
### Experiment Setup
Datasets We use two synthetic and two real-world datasets whose details are listed in Table 2.
We generated one small and one large synthetic dataset with the number of arms set to 50 and 1000 respectively. The mean reward of each arm is drawn uniformly from \([0.49975,0.5005]\) and the threshold \(\xi\) is set to 0.5.
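A possible way to generate such a synthetic arm set is sketched below; the Bernoulli reward model is our assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, xi = 50, 0.5                                  # "synth small"; K = 1000 for "synth large"
means = rng.uniform(0.49975, 0.5005, size=K)     # arm means straddling the threshold

def pull(i):
    """One Bernoulli reward from arm i (the Bernoulli reward model is our assumption)."""
    return float(rng.random() < means[i])
```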
For the real-world datasets, we used two public recommendation datasets where each item can be treated as an arm: OpenBandit [10] and MovieLens [1]. In the MovieLens dataset, the rating of each movie is treated as the mean used to sample the stochastic reward of the corresponding arm. The threshold for a good arm is set to the 95th percentile of the rewards of all arms in the entire dataset.
Parameter setting For the two optimized parameters \(\alpha\), \(\beta\), we set the initial values to 0. We used stochastic gradient descent as the optimizer with a learning rate of 1e-1. \(\eta_{1}\) and \(\eta_{2}\) in Eq. 12 and Eq. 15 are both set to 1e-3. It is worth mentioning that fine-tuning these parameters can improve the convergence of the training, and a bad hyperparameter choice may not converge to the optimum in a single training trajectory.
### Good arm identification
Evaluation Metric The metric for evaluating the improvement on the good arm identification problem is the _Exploit score_ \(\sum_{i=1}^{K}(T-t_{i})*(\mu_{i}-\xi)\), which is also used in Eq. 13.
\begin{table}
\begin{tabular}{l r r r} \hline \hline & \(N\) arms & \(T\) trajectory & \(\xi\) threshold \\ \hline synth small & 50 & 1e6 & 0.5 \\ synth large & 1000 & 1e6 & 0.5 \\ MovieLens & 9527 & 1e5 & 0.071 \\ Openbandit & 80 & 1e7 & 0.005 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Details of the datasets.
An increase of the cumulative score under this metric indicates that the agent outputs good arms faster. It not only evaluates the overall performance (when the agent finishes identifying all arms) but also considers each individual arm (when each arm is identified).
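For reference, the metric is straightforward to compute once the identification rounds are recorded; the helper below is a hypothetical illustration.

```python
def exploit_score(identify_round, means, xi, T):
    """Cumulative exploit score: sum_i (T - t_i) * (mu_i - xi), where t_i is the round
    at which arm i was output as good (use t_i = T for arms never output)."""
    return sum((T - t) * (mu - xi) for t, mu in zip(identify_round, means))
```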
**Baseline** We compare with three SOTA algorithms: HDoC, APT-G, and LUCB-G, introduced in [10] for solving the good arm identification problem. We also compare with top-two Thompson sampling (TT-TS) [14], a Bayesian method designed for arm identification. We further add SoftUCB-G, which runs the exact HDoC algorithm except that the conventional upper confidence bound is replaced with SoftUCB [13], so that it also learns the upper bound with differentiable data-driven methods. The results of all baselines are
Figure 1: Comparison of our proposed method with several baselines on all datasets. The cumulative exploit score shows that our proposed method outperforms the baselines in solving the GAI problem as the learned confidence bound w.r.t. \(\alpha,\beta\) converges. The performance in the online setting converges more slowly but also eventually outperforms the other baselines.
Figure 3: Confidence bound comparison. This figure plots the identification bound for the best arm in the arm set \(\mathcal{A}\) during the training trajectory and compares the difference between ours and the baselines.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & synth & synth & MovieLens & Openbandit \\ & small & large & & \\ \hline HDoC & 1.2e6 & 6.5e4 & 1.8e6 & 4e5 \\ LUCB-G & 1.4e6 & 1.1e5 & 1.7e6 & 4e5 \\ APT-G & 8e6 & 8.8e4 & 6e6 & 4e5 \\ TT-TS & 1.7e6 & 1.1e5 & 4.1e7 & 4e5 \\ SoftUCB-G & 2.7e6 & 1.2e5 & 3.8e7 & 1e6 \\ DGAI-online & **3.1e6** & **1.7e5** & **4.2e7** & **1.3e6** \\ DGAI-offline & **4.1e6** & **2.5e5** & **6.1e7** & **1.5e6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: _Exploit score_ of DGAI and the baselines. The result shows that DGAI outperforms the baselines in all cases.
Figure 2: Learning curves of \(\alpha,\beta\). Solid lines represent the offline setting and dashed lines represent the online setting. The horizontal axis is training epochs in the offline setting and sampling rounds \(t\in[T]\) in the online setting. The curves show that the parameters converge as training goes on, and they converge more slowly in the online setting.
averaged with ten different random seeds.
**Sampling Bound**
2304.03395 | Gaussian inequality | We prove some special cases of Bergeron's inequality involving two Gaussian
polynomials (or $q$-binomials). | Tewodros Amdeberhan, David Callan | 2023-04-06T21:51:23Z | http://arxiv.org/abs/2304.03395v1 | # Gaussian inequality
###### Abstract.
We prove some special cases of Bergeron's inequality involving two Gaussian polynomials (or \(q\)-binomials).
2010 Mathematics Subject Classification: 11B65, 11A07
## 1. Introduction
We begin by recalling the \(q\)-analogues \([n]!_{q}=\prod_{j=1}^{n}\frac{1-q^{j}}{1-q}\) of factorials and the \(q\)-analogue \({n\choose k}_{q}=\frac{[n]!_{q}}{[k]!_{q}[n-k]!_{q}}\) of binomial coefficients. Adopt the convention \([0]!_{q}=1\). It is well-known that these rational functions \({n\choose k}_{q}\) are polynomials in \(q\), also called _Gaussian polynomials_, having non-negative coefficients which are moreover _unimodal_ and symmetric. Furthermore, there are several combinatorial interpretations, of which we state two.
A _word_ of length \(n\) over the _alphabet_ set \(\{0,1\}\) is a finite sequence \(w=a_{1}\cdots a_{n}\). Construct
\[\mathcal{W}_{n,k}=\{w=a_{1}\cdots a_{n}:w\text{ has $k$ zeros and $n-k$ ones}\}\]
and the _inversion set_ of \(w\) as \(\mathrm{Inv}(w)=\{(i,j):i<j\text{ and }a_{i}>a_{j}\}\). The corresponding _inversion number_ of \(w\) will be denoted \(\mathrm{inv}(w)=\#\mathrm{Inv}(w)\). Then, we have
\[{n\choose k}_{q}=\sum_{w\in\mathcal{W}_{n,k}}q^{\mathrm{inv}(w)}.\]
Yet another formulation, which will appeal to many combinatorialists, is
\[{a+d\choose a}_{q}=\sum_{T}q^{area(T)}\]
where \(T\) is a lattice path inside an \(a\times d\) box and \(area(T)\) is area above the curve \(T\).
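For instance (a small worked example added for illustration), for \(n=4\) and \(k=2\) the six words in \(\mathcal{W}_{4,2}\) are \(0011,0101,0110,1001,1010,1100\), with inversion numbers \(0,1,2,2,3,4\), so that
\[{4\choose 2}_{q}=1+q+2q^{2}+q^{3}+q^{4},\]
which is visibly symmetric and unimodal.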
Given two polynomials \(f(q)\) and \(g(q)\), we write \(f(q)\geq g(q)\) provided that \(f(q)-g(q)\) has non-negative coefficients in the powers of \(q\).
The well-known _Foulkes conjecture_ (see, for instance [4]) was generalized by Vessenes [8]. She conjectured that
\[(h_{b}\circ h_{c})-(h_{a}\circ h_{d}) \tag{1}\]
is _Schur positive_ (expands with positive integer coefficients in the Schur basis \(\{s_{\mu}\}_{\mu\vdash n}\) of symmetric polynomials) whenever \(a\leq b<c\leq d\), with \(n=ad=bc\), and one writes \((h_{n}\circ h_{k})\) for the _plethysm_ of complete homogeneous symmetric functions. A well-known fact is that \((h_{n}\circ h_{k})(1,q)={n+k\choose k}_{q}\). Moreover, any non-zero evaluation of a Schur function at \(1\) and \(q\) is of the form \(q^{i}+q^{i+1}+\cdots+q^{j}\) for some \(i<j\). Exploiting these facts on the occasion of [3], and assuming that (1) holds, F. Bergeron (see also [4]) underlined that the evaluation of the difference in (1), at \(1\) and \(q\), would imply the following:
**Conjecture 1**.: _Assume \(0<a\leq b<c\leq d\) are positive integers with \(ad=bc\). Then, the following difference of two Gaussian polynomials is symmetric and satisfies_
\[{b+c\choose b}_{q}-{a+d\choose a}_{q}\geq 0. \tag{2}\]
One can associate a direct combinatorial meaning to Vessenes' conjecture in the context of representation theory of \(GL(V)\). Indeed, if it holds true, it would signify that there is an embedding of the composite of symmetric powers \(S^{a}(S^{d}(V))\) inside \(S^{b}(S^{c}(V))\), as \(GL(V)\)-modules. It may however be more natural to state that there is a surjective \(GL(V)\)-module morphism the other way around (which is also equivalent). Therefore each \(GL(V)\)-irreducible occurs with smaller multiplicity in \(S^{a}(S^{d}(V))\) than it does in \(S^{b}(S^{c}(V))\), and the conjecture reflects this at the level of the corresponding characters (with Schur polynomials appearing as characters of irreducible representations).
The sole attempt [9] toward resolving Conjecture 1 was made by F. Zanello, who addresses the special case \(a\leq 3\), including the properties of symmetry and unimodality. A sequence of numbers is _unimodal_ if it does not increase strictly after a strict decrease. The author in [9] offers a strengthening of Conjecture 1 to the effect that
**Conjecture 2**.: _Preserve the hypothesis in Conjecture 1. Then, the coefficients of the symmetric polynomial_
\[{b+c\choose b}_{q}-{a+d\choose a}_{q}\]
_are non-negative and unimodal._
Notice that symmetry is clear, since both \({b+c\choose b}_{q}\) and \({a+d\choose a}_{q}\) are symmetric polynomials of the same degree, \(ad=bc\). We started out this project with the goal of proving the below \(3\)-parameter special case of Conjecture 1 which we dubbed the _\(\beta\)-Conjecture_. Namely,
**Conjecture 3**.: _For integers \(0<a<b\) and \(\beta\geq 1\), we have_
\[{b+\beta a\choose b}_{q}\geq{a+\beta b\choose a}_{q}. \tag{3}\]
The case \(\beta=1\) is trivial. However, our efforts fell short of capturing the \(\beta\)-Conjecture in its fullest generality. In the sequel, we supply the details of our success in settling the particular instance \(\beta=2\). Let's commence by stating one useful identity.
**Theorem 1**.: _(\(q\)-analogue Vandermonde-Chu). The following holds true_
\[\sum_{j\geq 0}\binom{X}{Z-j}_{q}\binom{Y}{j}_{q}q^{j(X-Z+j)}=\binom{X+Y}{Z}_{q}. \tag{4}\]
**Remark 2**.: In view of (4), Conjecture 1 tantamount
\[\binom{b+c}{b}_{q}=\sum_{k=0}^{\boldsymbol{b}}\binom{b}{k}_{q}\binom{c}{k}_{q }q^{k^{2}}\geq\sum_{k=0}^{\boldsymbol{a}}\binom{a}{k}_{q}\binom{d}{k}_{q}q^{k^ {2}}=\binom{a+d}{a}_{q}.\]
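Small cases of Conjecture 1 are easy to check by computer; for instance, the following Python/SymPy sketch (ours, added for illustration) verifies the case \((a,b,c,d)=(2,3,4,6)\).

```python
import sympy as sp

q = sp.symbols('q')

def qbinom(n, k):
    # Gaussian polynomial [n choose k]_q, expanded in powers of q
    num, den = sp.Integer(1), sp.Integer(1)
    for j in range(k):
        num *= 1 - q**(n - j)
        den *= 1 - q**(j + 1)
    return sp.expand(sp.cancel(num / den))

# Conjecture 1 for (a, b, c, d) = (2, 3, 4, 6), so that ad = bc = 12:
diff = sp.expand(qbinom(3 + 4, 3) - qbinom(2 + 6, 2))
assert all(c >= 0 for c in sp.Poly(diff, q).all_coeffs())
print(diff)  # q**9 + q**8 + q**7 + q**6 + q**5 + q**4 + q**3
```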
## 2. The Case \(\beta=2\) and \(q=1\)
In this section, we wish to explain the resolution of the \(\beta\)-Conjecture for the ordinary binomial coefficients (\(q=1\)) when \(\beta=2\), which follows a natural line of development.
For \(a<b<c<d\) with \(ad=bc\) and special case \(c=2a,d=2b\), say \(b=a+i\), \(i\geq 1\), it would be desirable to find a bijective proof for
\[\binom{3a+i}{a+i}\geq\binom{3a+2i}{a}.\]
An injection from a set counted by the smaller number to one counted by the larger number would be nice but a better proof would be an expression for the difference as a sum of obviously positive terms. For \(i=1\), we have
\[\binom{3a+1}{a+1}-\binom{3a+2}{a}=\binom{3a+1}{a-1},\]
and the right-hand side is clearly positive. It seems that for general \(i=1,2,\dots\),
\[\binom{3a+i}{a+i}-\binom{3a+2i}{a}=\sum_{k=1}^{i}c_{k}(i)\binom{3a+i}{a-k}\]
for integers \(c_{k}(i)\) and, furthermore, the \(c_{k}(i)\)_are all positive_. Here is a table for \(c_{k}(i)\) when \(1\leq k\leq i\leq 8\):
\[\begin{array}{c|cccccccc} i\backslash k&1&2&3&4&5&6&7&8\\ \hline 1&1&&&&&&&\\ 2&3&1&&&&&&\\ 3&6&6&1&&&&&\\ 4&10&19&9&1&&&&\\ 5&15&45&39&12&1&&&\\ 6&21&90&120&66&15&1&&\\ 7&28&161&301&250&100&18&1&\\ 8&36&266&658&755&450&141&21&1\end{array}\]
Let us write these coefficients as \(c_{k}(i)=\frac{i+3k}{i+k}\binom{i+k}{2k}-\binom{i}{k}\), for \(i,k\geq 1\). Notice that \(c_{0}(i)=0\). We need some preliminary results.
**Lemma 1**.: _We have_
\[\binom{3a+2i}{a}=\sum_{k\geq 0}\binom{i}{k}\binom{3a+i}{a-k}.\]
Proof.: This follows from the Vandermonde-Chu identity (Theorem 1 for \(q=1\))
\[\binom{X+Y}{Z}=\sum_{k\geq 0}\binom{X}{k}\binom{Y}{Z-k}\]
applied to \(\binom{3a+2i}{a}=\binom{i+3a+i}{a}\) with \(X=i,Y=3a+i\) and \(Z=a\).
**Lemma 2**.: _We have_
\[\binom{3a+i}{a+i}=\sum_{k\geq 0}\frac{i+3k}{i+k}\binom{i+k}{2k}\binom{3a+i}{a-k }=\sum_{k\geq 0}\left[\binom{i+k}{2k}+\binom{i+k-1}{2k-1}\right]\binom{3a+i}{ a-k}.\]
Proof.: We implement Zeilberger's algorithm (from the Wilf-Zeilberger theory). Define
\[F(i,k)=\frac{i+3k}{i+k}\cdot\frac{\binom{i+k}{2k}\binom{3a+i}{a-k}}{\binom{3a+ i}{a+i}}\qquad\text{and}\qquad G(i,k)=-\frac{\binom{i+k-1}{2k-2}\binom{3a+i}{a-k}}{ \binom{3a+i}{a+i}}.\]
Check that \(F(i+1,k)-F(i,k)=G(i,k+1)-G(i,k)\) and sum both sides over all integer values \(k\). Then, notice the right-hand side vanishes and hence we obtain a sum \(\sum_{k}F(i,k)\) that is _constant_ in the variable \(i\). Determine this constant by substituting, say \(i=1\),
\[\sum_{k=0}^{1}F(1,k)=\frac{\binom{3a+1}{a}}{\binom{3a+1}{a+1}}+\frac{2\binom{3 a+1}{a-1}}{\binom{3a+1}{a+1}}=\frac{a+1}{2a+1}+\frac{a}{2a+1}=1.\]
Therefore, \(\sum_{k}F(i,k)=1\), identically, for all \(i\geq 1\). The proof follows.
We now state the main result of this section.
**Theorem 3**.: _We have_
\[\binom{3a+i}{a+i}-\binom{3a+2i}{a}=\sum_{k\geq 1}\left\{\frac{i+3k}{i+k} \binom{i+k}{2k}-\binom{i}{k}\right\}\binom{3a+i}{a-k}.\]
Proof.: Immediate from Lemma 1 and Lemma 2.
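As a quick sanity check (added for illustration), take \(a=i=2\): the left-hand side is \(\binom{8}{4}-\binom{10}{2}=70-45=25\), while the right-hand side is \(c_{1}(2)\binom{8}{1}+c_{2}(2)\binom{8}{0}=3\cdot 8+1\cdot 1=25\).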
**Lemma 3**.: _For \(k\geq 1\), the coefficients \(c_{k}(i)\) are non-negative._
Proof.: We may look at it in two different ways:
(1) \(c_{k}(i)=\frac{2k}{i+k}\binom{i+k}{2k}+\binom{i+k}{2k}-\binom{i}{k}=\frac{2k }{i+k}\binom{i+k}{2k}+\binom{i+k}{i-k}-\binom{i}{i-k}\). Obviously \(\binom{i+k}{i-k}\geq\binom{i}{i-k}\), therefore \(c_{k}(i)\geq 0\).
(2) \(c_{k}(i)=\binom{i+k}{2k}+\binom{i+k-1}{2k-1}-\binom{i}{k}=\binom{i+k}{2k}+ \sum_{r=0}^{k-2}\binom{i+r}{k+1+r}\) shows clearly that \(c_{k}(i)\geq 0\). The identity \(\binom{i+k-1}{2k-1}=\binom{i}{k}+\sum_{r=1}^{k-1}\binom{i+r-1}{k+r}\) results from a cascading effect of the familiar binomial recurrence \(\binom{u}{v}+\binom{u}{v-1}=\binom{u+1}{v}\).
## 3. \(q\)-Analogues when \(\beta=2\)
In the present section, we aim to generalize our proofs given in the preceding section by lifting the argument from the ordinary binomials to Gaussian polynomials.
**Lemma 4**.: _We have_
\[\binom{3a+2i}{a}_{q}=\sum_{k\geq 0}q^{(a-k)(i-k)}\binom{i}{k}_{q}\binom{3a+i}{a-k} _{q}.\]
Proof.: This follows from the _Vandermonde-Chu_ identity (Theorem 1)
\[\binom{X+Y}{Z}_{q}=\sum_{k\geq 0}q^{(Z-k)(X-k)}\binom{X}{k}_{q}\binom{Y}{Z-k} _{q}\]
on \(\binom{3a+2i}{a}_{q}=\binom{i+3a+i}{a}_{q}\) with \(X=i,Y=3a+i\) and \(Z=a\).
**Lemma 5**.: _We have_
\[\binom{3a+i}{a+i}_{q}=\sum_{k\geq 0}q^{(a-k)(i-k)}\left[\binom{i+k}{2k}_{q}+q ^{a+i}\binom{i+k-1}{2k-1}_{q}\right]\binom{3a+i}{a-k}_{q}.\]
Proof.: Let's rewrite \(\binom{i+k}{2k}_{q}+q^{a+i}\binom{i+k-1}{2k-1}_{q}=\left[1+\frac{q^{a+i}(1-q^ {2k})}{1-q^{i+k}}\right]\binom{i+k}{2k}_{q}\) and define the functions
\[F(i,k) =q^{(a-k)(i-k)}\left[1+q^{a+i}\cdot\frac{1-q^{2k}}{1-q^{i+k}} \right]\frac{\binom{i+k}{2k}_{q}\binom{3a+i}{a-k}_{q}}{\binom{3a+i}{a+i}_{q}},\qquad\text{and}\] \[G(i,k) =-q^{(a-k+1)(i-k+1)}\cdot\frac{\binom{i+k-1}{2k-2}_{q}\binom{3a+i} {a-k}_{q}}{\binom{3a+i}{a+i}_{q}}.\]
Divide both sides of the intended identity by \(\binom{3a+i}{a+i}_{q}\). Our goal is to prove \(\sum_{k}F(i,k)=1\) by adopting the Wilf-Zeilberger technique. To this end, calculate the following two ratios
\[A(i,k):=\frac{F(i+1,k)}{F(i,k)}-1\qquad\text{and}\qquad B(i,k):=\frac{G(i,k+1 )}{F(i,k)}-\frac{G(i,k)}{F(i,k)}\]
resulting in
\[A(i,k) =\frac{q^{a-k}(1-q^{i+k})(1-q^{a+i+1})(1-q^{i+k+1}+q^{a+i+1}-q^{ a+i+2k+1})}{(1-q^{i-k+1})(1-q^{2a+i+k+1})(1-q^{i+k}+q^{a+i}-q^{a+i+2k})}-1 \qquad\text{and}\] \[B(i,k) =\left[-\frac{1-q^{a-k}}{1-q^{2a+i+k+1}}+\frac{q^{a+i-2k+1}(1-q^ {2k})(1-q^{2k-1})}{(1-q^{i+k})(1-q^{i-k+1})}\right]\cdot\frac{1-q^{i+k}}{1-q^ {i+k}+q^{a+i}-q^{a+i+2k}}.\]
Verify routinely \(A(i,k)=B(i,k)\). Thus \(F(i+1,k)-F(i,k)=G(i,k+1)-G(i,k)\). Now, sum both sides over all integer values \(k\). Then, notice that the right-hand side vanishes
and hence we obtain a sum \(\sum_{k}F(i,k)\) that is _constant_ in the variable \(i\). Determine this constant by substituting, say \(i=1\) and proceed with some simplifications leading to
\[\sum_{k=0}^{1}F(1,k)=q^{a}\cdot\frac{1-q^{a+1}}{1-q^{2a+1}}+\frac{1-q^{a}}{1-q^{ 2a+1}}=1.\]
Therefore, \(\sum_{k}F(i,k)=1\), identically, for all \(i\geq 1\). The assertion follows.
**Lemma 6**.: _We have the identity_
\[\binom{i+k}{2k}_{q}=\binom{i}{k}_{q}+\sum_{r=1}^{k}q^{k+r}\binom{i+r-1}{k+r}_ {q}.\]
Proof.: Use the recursive relations \(\binom{a}{b}_{q}=\binom{a-1}{b}_{q}+q^{a-b}\binom{a-1}{b-1}_{q}=q^{b}\binom{a -1}{b}_{q}+\binom{a-1}{b-1}_{q}\).
**Lemma 7**.: _We have the inequality \(\binom{i+k}{i-k}_{q}\geq\binom{i}{i-k}_{q}\)._
Proof.: We use the interpretation of the Gaussian polynomials as the _inversion number_ generating function for all bit strings of length \(n\) with \(k\) zeroes and \(n-k\) ones, that is
\[\binom{n}{k}_{q}=\sum_{w\in\,0^{k}1^{n-k}}q^{\operatorname{inv}(w)}.\]
Let \(w^{\prime}\in\,0^{i-k}1^{k}\sqcup 1^{k}\) denote a bit string whose last \(k\) digits are all ones. In this sense, we get
\[\binom{i+k}{i-k}_{q}=\sum_{w\in\,0^{i-k}1^{2k}}q^{\operatorname{ inv}(w)} =\sum_{w^{\prime}\in\,0^{i-k}1^{k}\sqcup 1^{k}}q^{\operatorname{inv}(w^{ \prime})}+\sum_{w^{\prime}\not\in\,0^{i-k}1^{k}\sqcup 1^{k}}q^{\operatorname{inv}(w^{ \prime})}\] \[=\sum_{w\in\,0^{i-k}1^{k}}q^{\operatorname{inv}(w)}+\sum_{w^{ \prime}\not\in\,0^{i-k}1^{k}\sqcup 1^{k}}q^{\operatorname{inv}(w^{\prime})}\] \[=\binom{i}{i-k}_{q}+\sum_{w^{\prime}\not\in\,0^{i-k}1^{k}\sqcup 1 ^{k}}q^{\operatorname{inv}(w^{\prime})}\]
where we note that \(\operatorname{inv}(w^{\prime})=\operatorname{inv}(w)\) if the word \(w^{\prime}\in\,0^{i-k}1^{k}\sqcup 1^{k}\) is associated with \(w\in\,0^{i-k}1^{k}\) found by dropping the last \(k\) ones. The assertion is now immediate.
We prove the main result of this section and our paper, the \(\beta\)-Conjecture for \(\beta=2\).
**Theorem 4**.: _The polynomial \(P(q):=\binom{3a+i}{a+i}_{q}-\binom{3a+2i}{a}_{q}\) has non-negative coefficients._
Proof.: From Lemma 4 and Lemma 5, we infer
\[P(q)=\sum_{k\geq 1}q^{(a-k)(i-k)}\left[\binom{i+k}{2k}_{q}+q^{a+i}\binom{i+k-1} {2k-1}-\binom{i}{k}_{q}\right]\binom{3a+i}{a-k}_{q}.\]
It suffices to verify positivity of the terms inside the inner parenthesis on the right-hand side. We may pair up these terms and complement the lower index to the effect that
\[\binom{i+k}{2k}_{q}-\binom{i}{k}_{q}+q^{a+i}\binom{i+k-1}{2k-1}_{q}=\binom{i+k}{i- k}_{q}-\binom{i}{i-k}_{q}+q^{a+i}\binom{i+k-1}{2k-1}_{q}.\]
To reach the conclusion, simply apply Lemma 6 or Lemma 7.
## 4. Final remarks
In the present section, we close our discussion with one conjecture as a codicil of certain calculations we encountered while digging up ways to prove the \(\beta\)-Conjecture.
**Conjecture 4**.: _For each \(0\leq k\leq a<b\), we have_
\[\binom{a}{k}_{q}\binom{a+b}{b-k}_{q}\geq\binom{b}{k}_{q}\binom{a+b}{a-k}_{q} \qquad\text{or}\qquad\binom{a}{k}_{q}\binom{b}{k}_{q}\binom{b+a}{b}_{q}\left[ \frac{1}{\binom{a+k}{k}_{q}}-\frac{1}{\binom{b+k}{k}_{q}}\right]\geq 0.\]
The next elementary result might be helpful if one decides to engage this conjecture.
**Lemma 8**.: _For \(0\leq k\leq a<b\), we have_
\[\frac{1}{\binom{a+k}{k}_{q}}-\frac{1}{\binom{b+k}{k}_{q}}=\sum_{i=1}^{k}q^{a+i }\frac{1-q^{b-a}}{1-q^{b+i}}\prod_{j=i}^{k}\frac{1-q^{j}}{1-q^{a+j}}\prod_{j=1} ^{i-1}\frac{1-q^{j}}{1-q^{b+j}}.\]
Proof.: This results from partial fractions.
**Example 1**.: \[\frac{1}{\binom{a+1}{1}_{q}}-\frac{1}{\binom{b+1}{1}_{q}} =\frac{q^{a+1}(1-q)(1-q^{b-a})}{(1-q^{a+1})(1-q^{b+1})}\] \[\frac{1}{\binom{a+2}{2}_{q}}-\frac{1}{\binom{b+2}{2}_{q}} =\frac{q^{a+1}(1-q)(1-q^{2})(1-q^{b-a})}{(1-q^{a+1})(1-q^{a+2})(1- q^{b+1})}+\frac{q^{a+2}(1-q)(1-q^{2})(1-q^{b-a})}{(1-q^{a+2})(1-q^{b+1})(1-q^{b+ 2})}.\]
**Example 2**.: \[\frac{1}{\binom{a+3}{3}_{q}}-\frac{1}{\binom{b+3}{3}_{q}} =\frac{q^{a+1}(1-q)(1-q^{2})(1-q^{3})(1-q^{b-a})}{(1-q^{a+1})(1-q ^{a+2})(1-q^{a+3})(1-q^{b+1})}\] \[+\frac{q^{a+2}(1-q)(1-q^{2})(1-q^{3})(1-q^{b-a})}{(1-q^{a+2})(1-q ^{a+3})(1-q^{b+1})(1-q^{b+2})(1-q^{b+3})}\] \[+\frac{q^{a+3}(1-q)(1-q^{2})(1-q^{3})(1-q^{b-a})}{(1-q^{a+3})(1-q ^{b+1})(1-q^{b+2})(1-q^{b+3})}.\]
**Remark 5**.: As a side note, we recall that G. E. Andrews [2] expresses \(\binom{n}{k}_{q}-\binom{n}{k-1}_{q}\) as the generating function for partitions with particular _Frobenius symbols_, while L. M. Butler [5] does this with the help of the _Kostka-Foulkes polynomials_ to show non-negativity of the coefficients. We shall provide an alternative algebraic approach.
**Lemma 9**.: _For \(0\leq 2k\leq n\), we have \(\binom{n}{k}_{q}-\binom{n}{k-1}_{q}\geq 0\)._
Proof.: Let \(n=\alpha k+d\) where \(0\leq d<k\). Rewrite
\[\binom{n}{k}_{q}-\binom{n}{k-1}_{q}=q^{k}\binom{n}{k-1}_{q}\,\frac{1-q^{( \alpha-2)k}}{1-q^{k}}+q^{(\alpha-1)k}\binom{n}{k-1}_{q}\,\frac{1-q^{d+1}}{1-q^ {k}}.\]
Observe \(\frac{1-q^{(\alpha-2)k}}{1-q^{k}}\) is already a polynomial with non-negative coefficients. Furthermore, since \(U(q):=\binom{n}{k-1}_{q}\) is unimodal [1], [6], [7], the coefficient of \(q^{j}\) in \(U(q)\cdot(1-q^{d+1})\) is non-negative as long as \(2j\leq\deg(U)\). The same is true for \(U(q)\frac{1-q^{d+1}}{1-q^{k}}\) as a formal power series. Since the _polynomial_\(U(q)\frac{1-q^{d+1}}{1-q^{k}}\) is _symmetric,_ having degree no greater than \(U(q)\), all remaining coefficients of \(U(q)\frac{1-q^{d+1}}{1-q^{k}}\) are non-negative.
## Acknowledgement
The first author thanks R. P. Stanley for bringing Bergeron's conjecture to his attention, also F. Bergeron for his generous explanation of the problem studied in the present work.
|
2303.08267 | Koszul duality for Coxeter groups | We construct a "Koszul duality" equivalence relating the (diagrammatic) Hecke
category attached to a Coxeter system and a given realization to the Hecke
category attached to the same Coxeter system and the dual realization. This
extends a construction of Beilinson-Ginzburg-Soergel and Bezrukavnikov-Yun in a
geometric context, and of the first author with Achar, Makisumi and Williamson.
As an application, we show that the combinatorics of the "tilting perverse
sheaves" considered in arXiv:1802.07651 is encoded in the combinatorics of the
canonical basis of the Hecke algebra of $(W,S)$ attached to the dual
realization. | Simon Riche, Cristian Vay | 2023-03-14T22:54:48Z | http://arxiv.org/abs/2303.08267v2 | # Koszul duality for Coxeter groups
###### Abstract.
We construct a "Koszul duality" equivalence relating the (diagrammatic) Hecke category attached to a Coxeter system and a given realization to the Hecke category attached to the same Coxeter system and the dual realization. This extends a construction of Beilinson-Ginzburg-Soergel [BGS] and Bezrukavnikov-Yun [BY] in a geometric context, and of the first author with Achar, Makisumi and Williamson [AMRW2]. As an application, we show that the combinatorics of the "tilting perverse sheaves" considered in [ARV] is encoded in the combinatorics of the canonical basis of the Hecke algebra of \((W,S)\) attached to the dual realization.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (S.R., grant agreement No. 101002592). C. V. is partially supported by CONICET PIP 11220200102916CO, Foncyt PICT 2020-SERIA-02847 and Secyt (UNC).
This raised the question of the existence of a version of Koszul duality in the general setting of [10], involving in particular a general Coxeter system.1 The main result of this paper is a realization of this idea based on some prior work with Achar [1], under some technical conditions that we discuss in SS1.4 below.
Footnote 1: A first suggestion of the existence of such a construction can be found in [10, Remark 3.5].
### Statement
Let us consider a Coxeter system \((W,S)\) and a realization \(\mathfrak{h}\) of \((W,S)\) over a field \(\Bbbk\) satisfying appropriate assumptions (see SS1.4). To these data Elias-Williamson attach a \(\Bbbk\)-linear monoidal category \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)\) endowed with a "shift" autoequivalence (1), defined by generators and relations, and whose split Grothendieck group identifies with the Hecke algebra of \((W,S)\). For any objects \(X\) and \(Y\) in \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)\), the \(\Bbbk\)-vector space
\[\bigoplus_{n\in\mathbb{Z}}\mathrm{Hom}_{\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)}(X,Y(n))\]
admits a canonical structure of graded bimodule over the symmetric algebra \(R\) of \(V^{*}\), where \(V\) is the representation underlying \(\mathfrak{h}\), and by "killing" the left, resp. right, action of this algebra one obtains a category \(\overline{\mathscr{D}}_{\mathrm{BS}}(\mathfrak{h},W)\), resp. \(\underline{\mathscr{Q}}_{\mathrm{BS}}(\mathfrak{h},W)\). We then introduce the "biequivariant", "right equivariant" and "left equivariant" categories attached to \((\mathfrak{h},W)\) as
\[\mathsf{BE}(\mathfrak{h},W) =K^{\mathrm{b}}\mathscr{D}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W),\] \[\mathsf{RE}(\mathfrak{h},W) =K^{\mathrm{b}}\overline{\mathscr{D}}_{\mathrm{BS}}^{\oplus}( \mathfrak{h},W),\] \[\mathsf{LE}(\mathfrak{h},W) =K^{\mathrm{b}}\underline{\mathscr{Q}}_{\mathrm{BS}}^{\oplus}( \mathfrak{h},W)\]
where the subscript "\(\oplus\)" indicates the additive hull. (The terminology, taken from [1], is motivated by the special case when the Hecke category can be described in terms of constructible sheaves: the biequivariant category involves sheaves on the group which are equivariant for a Borel subgroup on both sides, while the right, resp. left, equivariant category involves sheaves which are equivariant for the action on the right, resp. left.)
With this notation, in the special case considered there, one form of the Koszul duality of [1] is an equivalence of triangulated categories
\[\kappa:\mathsf{RE}(\mathfrak{h},W)\xrightarrow{\sim}\mathsf{LE}(\mathfrak{h} ^{*},W)\]
which satisfies \(\kappa\circ(1)=(-1)[1]\circ\kappa\), where \(\mathfrak{h}^{*}\) is the dual realization (obtained by switching roots and coroots; in the case related to geometry this amounts to Langlands duality). To state further properties of this equivalence one needs to recall that the categories \(\mathsf{RE}(\mathfrak{h},W)\) and \(\mathsf{LE}(\mathfrak{h}^{*},W)\) admit canonical "perverse" t-structures (again, the terminology comes from geometry) whose hearts admit canonical highest weight structures. More precisely, at the time when [1, 1] were written this was known only in the case of Cartan realizations of crystallographic Coxeter systems, but in the meantime this construction was extended to the general setting in [1]. As in any highest weight category one can consider the indecomposable tilting objects in these categories, and the main property of Koszul duality can be roughly stated as the fact that it exchanges the indecomposable objects in the karoubian envelope of \(\overline{\mathscr{D}}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)\), resp. \(\underline{\mathscr{Q}}_{\mathrm{BS}}^{\oplus}(\mathfrak{h}^{*},W)\), with the indecomposable tilting objects in the heart of the perverse t-structure on \(\mathsf{LE}(\mathfrak{h}^{*},W)\), resp. \(\mathsf{RE}(\mathfrak{h},W)\).
One of the main results of the present paper is a version of this statement for general realizations of general Coxeter groups, see Theorem 7.4.
### Strategy
The main strategy of our proof is similar to that used in [1] (which, itself, is an adaptation of constructions considered in [2, 10, 11]). Namely, instead of constructing the equivalence \(\kappa\) directly, we first consider a "free monodromic" variant, from which (the inverse of) \(\kappa\) will be obtained by essentially "killing" a left \(R\)-action on morphisms. The main point is that in this setting one works with _monoidal_ categories, where the definition of \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)\) by generators and relations can be used with great effect.
We therefore construct a category of "free monodromic tilting objects" \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) attached to \((\mathfrak{h},W)\), and then an equivalence of monoidal categories
\[\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\xrightarrow{\sim}\mathscr{T}_{ \mathrm{BS}}(\mathfrak{h},W),\]
see Theorem 7.1. The main difficulty lies in the _definition_ of such a functor; once this is known the same arguments as in [1] can be developed to prove that it is an equivalence. To define this functor, in view of the definition of \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\) we need to describe the images of the generating morphisms, and then check that these images satisfy the relations imposed in \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\). The construction of all the morphisms involving only one color can be copied from [1], but the definition of the morphism corresponding the \(2\)-colors generators given there relies partially on geometry. Here we give a different (and general) construction of this morphism in Section 6.
This construction relies on the prior construction of a "functor \(\mathbb{V}\)" in case \(W\) is finite, explained in Section 5. This construction is similar to that in [1], with one notable exception: in [1] this functor takes values in "usual" Soergel bimodules, which leads to imposing technical assumptions on the characteristic of \(\Bbbk\); these assumptions can be removed later in the paper using some change-of-scalars arguments which make sense only for Cartan realizations. Here we use a variant of the construction of Soergel bimodules developed in the meantime by Abe [1], which allows to avoid these technical assumptions completely.
The proof that these morphisms satisfy the required relations is again similar to the corresponding part of [1], but using Abe's category of bimodules rather than plain bimodules.
### Assumptions
The assumptions that we have to impose on our relation \(\mathfrak{h}\) are explained in detail in SS2.1, SS2.3 and SS4.9. The assumptions of SSSS2.1-2.3 are "standard" assumptions that are required for the theories in [12] and [1] to behave appropriately. They are known to hold in the main examples of realizations that arise "in nature," i.e. the Cartan realizations of crystallographic Coxeter systems and the geometric realization (and its variants considered by Soergel) of any Coxeter system not involving type \(\mathsf{H}_{3}\).
The assumption of SS4.9 is of a different kind: in [1] an ad-hoc version of the "free monodromic completed category" of [11] was constructed in the diagrammatic setting. This category (or a subcategory) should be monoidal, and all the structures involved can indeed be constructed, but the question of whether these structures satisfy the appropriate "interchange law" was left open. Here we assume that this property is satisfied for appropriate objects. A proof that it is indeed the case in full generality has been announced by Hogancamp and Makisumi, but no written account of their work is available as of now. In [1] it was proved
that this property holds for Cartan realizations of crystallographic Coxeter systems, and here we remark that the same approach, combined with the results of [ARV], also applies under some technical assumptions that are satisfied for geometric and Soergel realizations.
In particular, the presently available literature is enough to show that all of our statements hold at least for the \(2\) main families of examples of realizations of Coxeter systems that are known.
### Application
As an application, we show in SS7.7 that the combinatorics of the indecomposable tilting objects in \(\mathsf{RE}(\mathfrak{h},W)\) (constructed in [ARV]) is governed by the combinatorics of the "canonical basis" attached to the dual realization. In particular, in the case of Soergel realizations, this combinatorics is governed by Kazhdan-Lusztig polynomials (see Example 7.6), and the heart of the perverse t-structure on \(\mathsf{RE}(\mathfrak{h},W)\) is equivalent to the category of finite-dimensional graded modules over a finite-dimensional Koszul ring (see Remark 7.7).
### Acknowledgements
The results in this paper are the realization of a project initiated with Pramod Achar, Shotaro Makisumi and Geordie Williamson. We thank them for very helpful discussions at various stages of its completion, and for their contributions in the formulation of several key ideas. This work also owes much to the work of Noriyuki Abe [A1, A2], which provided the necessary ingredients to generalize the approach used in [AMRW2].
A large part of our work was accomplished during a visit of the second author in Clermont-Ferrand funded by CIMPA-ICTP fellowships program "Research in Pairs" and ERC.
## 2. Preliminaries
### Conventions and assumptions
Throughout the work \((W,S)\) denotes a Coxeter system with \(S\) finite. (As usual, we will usually only indicate \(W\) in our notations, although all the structures we consider also depend on the choice of Coxeter generators \(S\).) We consider on \(W\) the Bruhat order \(\leq\) and the length function \(\ell\). We will use the standard terminology regarding Coxeter systems, as recalled e.g. in [ARV, SS3.1]. In particular, an _expression_ is a word \(\underline{w}=(s_{1},\ldots,s_{n})\) in \(S\) and \(\pi(\underline{w})=s_{1}\cdots s_{n}\) denotes the element in \(W\) expressed by \(\underline{w}\). The set of expressions will be denoted \(\operatorname{Exp}(W)\). The _length_ of an expression \(\underline{w}\) is its length as a word; it will be denoted \(\ell(\underline{w})\). We will identify simple reflections with the corresponding \(1\)-letter expression whenever convenient. Given a pair \((s,t)\) of simple reflections, we will denote by \(m_{s,t}\in\mathbb{Z}_{\geq 1}\cup\{\infty\}\) the order of \(st\) in \(W\), and by \(\langle s,t\rangle\) the subgroup generated by \(s\) and \(t\).
We fix a field \(\Bbbk\) and a realization
\[\mathfrak{h}=\big{(}V,(\alpha_{s}^{\vee}:s\in S),(\alpha_{s}:s\in S)\big{)}\]
of \((W,S)\) over \(\Bbbk\) in the sense of Elias-Williamson [EW2, Definition 3.1]. In particular, \(V\) is a finite-dimensional \(\Bbbk\)-vector space, \((\alpha_{s}^{\vee}:s\in S)\) is a collection of vectors in \(V\), \((\alpha_{s}:s\in S)\) is a collection of vectors in \(V^{*}:=\operatorname{Hom}_{\Bbbk}(V,\Bbbk)\), and there exists an action of \(W\) on \(V\) such that for \(s\in S\) and \(v\in V\) we have
\[s\cdot v=v-\langle\alpha_{s},v\rangle\alpha_{s}^{\vee}.\]
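(Recall that the definition of a realization also requires \(\langle\alpha_{s},\alpha_{s}^{\vee}\rangle=2\); this is exactly what makes the formula above define an involution, since
\[s\cdot(s\cdot v)=v-\langle\alpha_{s},v\rangle\bigl(2-\langle\alpha_{s},\alpha_{s}^{\vee}\rangle\bigr)\alpha_{s}^{\vee}=v.)\]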
Realizations can be restricted to parabolic subsystems of \((W,S)\), by simply forgetting part of the elements \(\alpha_{s}\) and \(\alpha_{s}^{\vee}\): if \(S^{\prime}\subset S\) is a subset, and \(W^{\prime}\subset W\) is
the subgroup generated by \(S^{\prime}\), we will denote by \(\mathfrak{h}_{|W^{\prime}}\) the realization \((V,(\alpha_{s}^{\vee}:s\in S^{\prime}),(\alpha_{s}:s\in S^{\prime}))\) of \((W^{\prime},S^{\prime})\).
In addition to the conditions appearing in this definition, it has recently been explained in [10] that some further restrictions, most of which were made more explicit in [11], have to be imposed in order for the theory developed in [10] to behave as expected. Here we will assume that the following conditions are satisfied.
1. The realization is balanced (see [10, Definition 3.7]) and satisfies Demazure surjectivity (see [10, Assumption 3.9]).
2. In case \(W\) admits a parabolic subgroup of type \(\mathsf{H}_{3}\), we assume that there exists a linear combination of diagrams as in [10, Equation (5.12)] which is sent to \(0\) under the operation described in [10, SS2].
3. For any pair \((s,t)\) of distinct simple reflections such that \(m_{s,t}<\infty\) we have \[\genfrac{[}{]}{0.0pt}{}{m_{s,t}}{k}_{s}(\langle\alpha_{s}^{\vee},\alpha_{t} \rangle,\langle\alpha_{t}^{\vee},\alpha_{s}\rangle)=\genfrac{[}{]}{0.0pt}{}{m_ {s,t}}{k}_{t}(\langle\alpha_{s}^{\vee},\alpha_{t}\rangle,\langle\alpha_{t}^{ \vee},\alpha_{s}\rangle)=0\] for all integers \(1\leq k\leq m_{st}-1\), where we use the notation of [11].
The assumptions in (1) are standard in this theory. The second one (Demazure surjectivity) is really necessary; the first one (balancedness) can usually be relaxed at the cost of more complicated combinatorics (see e.g. [10, SS7]), but we will not consider this question here. Assumptions (2) and (3) are also necessary for the theory of [10], hence also for all of its applications, although this was not made explicit before [10, 11]. (In particular, they should be imposed in [1] and in [12].) Here, by the main result of [11], (3) implies the existence and rotatability of Jones-Wenzl projectors, which, as explained in [10], play a crucial role in this story. Note that (3) is also the technical condition imposed in [1] to ensure that the theory of [1] applies.
It is important to note that if the assumptions (1)-(3) are satisfied by a realization, then they are satisfied by its restriction to any parabolic subsystem of \((W,S)\). Further assumptions will be imposed and discussed in SS2.3 and SS4.9; they are also stable under restriction to a parabolic subsystem.
_Example 2.1_.: The main examples of data as above that the reader can keep in mind are the following.
1. Let \(A=(a_{i,j})_{i,j\in I}\) be a generalized Cartan matrix. A _Kac-Moody root datum_ associated with \(A\) is a triple \[(\mathbf{X},(\alpha_{i}:i\in I),(\alpha_{i}^{\vee}:i\in I))\] where \(\mathbf{X}\) is a finite free \(\mathbb{Z}\)-module, \((\alpha_{i}:i\in I)\) is a family of elements of \(\mathbf{X}\), and \((\alpha_{i}^{\vee}:i\in I)\) is a family of elements of \(\operatorname{Hom}_{\mathbb{Z}}(\mathbf{X},\mathbb{Z})\), such that \(\langle\alpha_{i}^{\vee},\alpha_{j}\rangle=a_{i,j}\) for any \(i,j\in I\). To \(A\) one can associate a Coxeter system \((W,S)\) where \(S\) is in bijection with \(I\) (through a map \(s\mapsto i_{s}\)), and the order \(m_{s,t}\) of \(st\) is determined as follows: \[m_{s,t}=\begin{cases}2&\text{ if }a_{i_{s}i_{t}}a_{i_{t}i_{s}}=0;\\ 3&\text{ if }a_{i_{s}i_{t}}a_{i_{t}i_{s}}=1;\\ 4&\text{ if }a_{i_{s}i_{t}}a_{i_{t}i_{s}}=2;\\ 6&\text{ if }a_{i_{s}i_{t}}a_{i_{t}i_{s}}=3;\\ \infty&\text{ if }a_{i_{s}i_{t}}a_{i_{t}i_{s}}\geq 4.\end{cases}\]
To \((\mathbf{X},(\alpha_{i}:i\in I),(\alpha_{i}^{\vee}:i\in I))\) one can associate a realization of \((W,S)\) over any field \(\Bbbk\) by setting \(V:=\Bbbk\otimes_{\mathbb{Z}}\operatorname{Hom}_{\mathbb{Z}}(\mathbf{X}, \mathbb{Z})\), and choosing for \((\alpha_{s}^{\vee}:s\in S)\) and \((\alpha_{s}:s\in S)\) the images of \((\alpha_{i}^{\vee}:i\in I)\) and \((\alpha_{i}:i\in I)\) in \(V\) and \(V^{*}\) respectively.
Such realizations are called _Cartan realizations of crystallographic Coxeter groups_. The status of our assumptions above for such realizations is discussed at length in [R4, Chap. II, SS2.4 and SS3.2]. Regarding (1), such realizations are always balanced; they satisfy Demazure surjectivity at least when \(\operatorname{char}(\Bbbk)\neq 2\). Assumption (2) is irrelevant since crystallographic Coxeter groups do not admit parabolic subgroups of type \(\mathsf{H}_{3}\). Assumption (3) is automatically satisfied.
Cartan realizations of crystallographic Coxeter groups are the realizations considered in [AMRW1, Chap. 10-11] and [AMRW2] (where more general coefficient rings are allowed.) For such a realization, the associated Hecke category (see SS3.1) can be realized geometrically as a category of parity complexes on the flag variety of the Kac-Moody group associated with \((\mathbf{X},(\alpha_{i}:i\in I),(\alpha_{i}^{\vee}:i\in I))\); see [RW, Part III] for details.
2. In the special case when \(A\) is a Cartan matrix, the datum of a Kac-Moody root datum of \(A\) is equivalent to the datum of a based root datum with associated Cartan matrix \(A\). In this case, \(W\) is the Weyl group of the associated connected reductive algebraic group (over any algebraically closed field).
3. Let now \((W,S)\) be an arbitrary Coxeter system with \(S\) finite. Let \(V\) be the associated _geometric representation_ of \(W\); it is a representation over \(\mathbb{R}\), and comes with a basis \((e_{s}:s\in S)\) indexed by \(S\) and a bilinear form \(\langle-,-\rangle\). One can "upgrade" this representation to a realization of \((W,S)\) over \(\mathbb{R}\), called the geometric realization, by setting \(\alpha_{s}^{\vee}:=e_{s}\) and \(\alpha_{s}:=2\langle e_{s},-\rangle\). As explained in [R4, Chap. II, SS2.4 and SS3.2], for this realization our assumptions (1) and (3) are satisfied. The status of assumption (2) (in case \((W,S)\) has a parabolic subsystem of type \(\mathsf{H}_{3}\)) is unclear to us.
4. For an arbitrary Coxeter system \((W,S)\) with \(S\) finite, one can also consider variants of the geometric realization considered by Soergel in [S3], see e.g. [R4, Chap. II, SS1.2.2]. Namely, consider a vector space \(V\) endowed with linearly independent families \((e_{s}:s\in S)\) of vectors of \(V\) and \((e_{s}^{*}:s\in S)\) of vectors of \(V^{*}\) such that \[\langle e_{t},e_{s}^{*}\rangle=-2\cos\left(\frac{\pi}{m_{s,t}}\right)\] where \(m_{s,t}\) is the order of \(st\) in \(W\). (We use the convention that \(\frac{\pi}{\infty}=0\). Note also that such data always exist.) Then \((V,(e_{s}:s\in S),(e_{s}^{*}:s\in S))\) is a realization of \((W,S)\), see [R4, Chap. II, Remark 2.7]. These realizations will be called Soergel realizations. (Note that in case \(W\) is finite, the geometric realization is an example of a Soergel realization.) In this case again, our assumptions (1) and (3) are satisfied (see [R4, Chap. II, SS2.4 and SS3.2]), but the status of assumption (2) (in case \((W,S)\) has a parabolic subsystem of type \(\mathsf{H}_{3}\)) is unclear to us. For such a realization, the Hecke category is equivalent to the corresponding category of Soergel bimodules by [EW2, SS6.7].
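To make these constructions concrete, consider the Cartan matrix \(A=\left(\begin{smallmatrix}2&-1\\ -1&2\end{smallmatrix}\right)\) of type \(\mathsf{A}_{2}\): since \(a_{i_{s}i_{t}}a_{i_{t}i_{s}}=1\), the associated Coxeter group has \(m_{s,t}=3\), i.e. \(W\simeq\mathfrak{S}_{3}\). For a Soergel realization of this Coxeter system the defining condition reads \(\langle e_{t},e_{s}^{*}\rangle=-2\cos(\pi/3)=-1\), which matches the off-diagonal entries of \(A\); and, as noted in (4), since \(W\) is finite here the geometric realization of (3) is itself an example of a Soergel realization.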
### Gradings
By a "graded" (resp. "bigraded") vector space, we mean a \(\mathbb{Z}\)-graded (resp. \(\mathbb{Z}^{2}\)-graded) vector space. Whenever convenient, we will identify graded vector spaces with bigraded vector spaces which are zero in all degrees belonging to \((\mathbb{Z}\smallsetminus\{0\})\times\mathbb{Z}\). If \(M\) is graded, resp. bigraded, its component in degree \(n\), resp. in bidegree \((n,m)\), will be denoted \(M_{n}\), resp. \(M_{m}^{n}\). The shift-of-grading functor (1) on a bigraded vector space \(M=\oplus_{i,j\in\mathbb{Z}}M_{j}^{i}\) is defined by
\[M(1)_{j}^{i}=M_{j+1}^{i+1}.\]
We also have shift functors [1] and \(\langle 1\rangle:=(-1)[1]\), which satisfy \((M[1])_{j}^{i}=M_{j}^{i+1}\) and \((M\langle 1\rangle)_{j}^{i}=M_{j-1}^{i}\). Note that \(\langle 1\rangle\) stabilizes graded vector spaces.
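For instance, if \(M\) is \(1\)-dimensional and concentrated in bidegree \((0,0)\), then \(M(1)\) is concentrated in bidegree \((-1,-1)\), \(M[1]\) in bidegree \((-1,0)\), and \(M\langle 1\rangle\) in bidegree \((0,1)\); in particular the latter again lies in \(\{0\}\times\mathbb{Z}\), illustrating the last assertion above.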
We will work in particular with the symmetric algebras
\[R:=\operatorname{Sym}(V^{*}),\quad R^{\vee}:=\operatorname{Sym}(V),\quad R^{ \wedge}:=\operatorname{Sym}(V),\]
considered as graded rings where \(V^{*}\subset R\) is in degree \(2\), \(V\subset R^{\vee}\) is in degree \(-2\) and \(V\subset R^{\wedge}\) is in degree \(2\). We will also consider the localization \(Q\) of the ring \(R\) with respect to the multiplicative subset generated by \(\{w(\alpha_{s}):s\in S,\,w\in W\}\); this ring has a natural grading where the elements \(\frac{1}{w(\alpha_{s})}\) are in degree \(-2\). Analogously, we denote by \(Q^{\vee}\) and \(Q^{\wedge}\) the corresponding localizations of \(R^{\vee}\) and \(R^{\wedge}\).
Let \(R^{\vee}\)-\(\operatorname{Mod}^{\mathbb{Z}}\)-\(R^{\vee}\) denote the category of graded \(R^{\vee}\)-bimodules, and define analogously \(R^{\wedge}\)-\(\operatorname{Mod}^{\mathbb{Z}}\)-\(R^{\wedge}\). Then \(\langle 1\rangle\) induces autoequivalences of these categories, which will be denoted similarly. If \(M\) belongs to \(R^{\vee}\)-\(\operatorname{Mod}^{\mathbb{Z}}\)-\(R^{\vee}\), we let \(M^{\wedge}\) be the graded \(R^{\wedge}\)-bimodule which is \(M\) as ungraded bimodule and whose homogeneous components are \((M^{\wedge})_{i}=M_{-i}\), \(i\in\mathbb{Z}\). This induces a functor from \(R^{\vee}\)-\(\operatorname{Mod}^{\mathbb{Z}}\)-\(R^{\vee}\) to \(R^{\wedge}\)-\(\operatorname{Mod}^{\mathbb{Z}}\)-\(R^{\wedge}\) which satisfies
\[(M\langle 1\rangle)^{\wedge}=M^{\wedge}\langle-1\rangle. \tag{2.1}\]
Given a graded free right \(R^{\vee}\)-module, resp. vector space, of finite rank \(M\simeq\oplus_{i}R^{\vee}(n_{i})\), resp. \(M\simeq\oplus_{i}\Bbbk(n_{i})\), we set
\[\operatorname{grk}_{R^{\vee}}M:=\sum_{i}v^{n_{i}},\quad\text{resp.}\quad \operatorname{grk}_{\Bbbk}M:=\sum_{i}v^{n_{i}},\]
considered as elements in \(\mathbb{Z}[v,v^{-1}]\) where \(v\) is an indeterminate. Of course, if \(N\) is another such module, then \(M\simeq N\) if and only if \(\operatorname{grk}_{R^{\vee}}M=\operatorname{grk}_{R^{\vee}}N\). We define analogously the function \(\operatorname{grk}_{R^{\wedge}}\). Note that if \(M\) is a graded free right \(R^{\vee}\)-module, then \(M^{\wedge}\) is a graded free right \(R^{\wedge}\)-module and we have
\[\operatorname{grk}_{R^{\wedge}}M^{\wedge}=\overline{\operatorname{grk}_{R^{ \vee}}M} \tag{2.2}\]
where \(\overline{\cdot}\) is the unique ring automorphism of \(\mathbb{Z}[v,v^{-1}]\) such that \(\overline{v}=v^{-1}\).
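As an illustration, if \(M\simeq R^{\vee}(2)\oplus R^{\vee}(-1)\) then \(\operatorname{grk}_{R^{\vee}}M=v^{2}+v^{-1}\), and (2.2) gives \(\operatorname{grk}_{R^{\wedge}}M^{\wedge}=v^{-2}+v\).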
Below we will need the following application of the graded Nakayama lemma, where we consider \(\Bbbk\) (concentrated in degree \(0\)) as a graded \(R^{\vee}\)-module by letting \(V\) act by zero.
**Lemma 2.2**.: _Let \(M\) and \(N\) be graded free right \(R^{\vee}\)-modules of finite rank and \(f:M\to N\) be a morphism of graded \(R^{\vee}\)-modules. If_
\[f\otimes_{R^{\vee}}\Bbbk:M\otimes_{R^{\vee}}\Bbbk\to N\otimes_{R^{\vee}}\Bbbk\]
_is injective (resp. surjective), then \(f\) is injective (resp. surjective)._
Proof.: Let \(C\) denote the complex
\[\cdots 0\to M\xrightarrow{f}N\to 0\cdots\]
with \(M\), resp. \(N\), placed in degree \(-1\), resp. \(0\). If \(f\otimes_{R^{\vee}}\Bbbk\) is injective, this complex satisfies the assumptions of [1, Lemma 3.4.1]. In particular, it follows that \(H^{-1}(C)=0\), that is \(f\) is injective. If \(f\otimes_{R^{\vee}}\Bbbk\) is surjective, we use [1, Lemma 3.4.1] with the same complex with \(M\), resp. \(N\), placed in degree \(0\), resp. \(1\), to deduce that \(f\) is also surjective.
### Dual realization
We will also consider the dual realization
\[\mathfrak{h}^{*}=(V^{*},(\alpha_{s}:s\in S),(\alpha_{s}^{\vee}:s\in S))\]
of \((W,S)\) over \(\Bbbk\). It is clear that this realization satisfies our assumptions (1) and (3). It is not clear (to us) whether assumption (2) is automatically satisfied; in case of doubt, we will assume that it is also satisfied by \(\mathfrak{h}^{*}\). Note that the graded ring playing, with respect to \(\mathfrak{h}^{*}\), the role that \(R\) plays for \(\mathfrak{h}\) is \(R^{\wedge}\).
_Example 2.3_.:
1. In the setting of Example 2.1(1), if \((\mathbf{X},(\alpha_{i}:i\in I),(\alpha_{i}^{\vee}:i\in I))\) is a Kac-Moody root datum associated with a generalized Cartan matrix \(A\), then \[(\operatorname{Hom}_{\mathbb{Z}}(\mathbf{X},\mathbb{Z}),(\alpha_{i}^{\vee}:i \in I),(\alpha_{i}:i\in I))\] is a Kac-Moody root datum associated with the generalized Cartan matrix \({}^{\mathrm{t}}A\). These generalized Cartan matrices share the same associated Coxeter system \((W,S)\), and for any field \(\Bbbk\) the dual of the realization over \(\Bbbk\) associated with \((\mathbf{X},(\alpha_{i}:i\in I),(\alpha_{i}^{\vee}:i\in I))\) is the realization over \(\Bbbk\) associated with \((\operatorname{Hom}_{\mathbb{Z}}(\mathbf{X},\mathbb{Z}),(\alpha_{i}^{\vee}:i \in I),(\alpha_{i}:i\in I))\). Note that this "duality" of Kac-Moody root data restricts to Langlands' duality in the setting of Example 2.1(2).
2. Consider the setting of Example 2.1(4). By definition the dual of a Soergel realization is also a Soergel realization. In particular, in case \(W\) is finite, the geometric realization of Example 2.1(3) is self dual.
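For instance, in the setting of (1) the based root data of the reductive groups \(\mathrm{SO}_{2n+1}\) and \(\mathrm{Sp}_{2n}\) have Cartan matrices of types \(\mathsf{B}_{n}\) and \(\mathsf{C}_{n}\), which are transpose to one another; the associated realizations (over any field \(\Bbbk\)) are therefore dual to each other, in accordance with the fact that these two groups are Langlands dual.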
## 3. Two incarnations of the Hecke category
We continue with our data \((W,S)\) and \(\mathfrak{h}\) as in Section 2, which satisfy the conditions of SS2.1 and SS2.3. We recall in this section the definitions of the Elias-Williamson diagrammatic category and of Abe's category attached to \(\mathfrak{h}\) and \((W,S)\).
### The Elias-Williamson diagrammatic category
We will denote by
\[\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)\]
the category attached to \((W,S)\) and \(\mathfrak{h}\) in [1, SS5.2] (see also [1, Chap. II, SS2.5]). This category is a \(\Bbbk\)-linear monoidal category, which can be considered equivalently as enriched over graded vector spaces or endowed with a shift-of-grading autoequivalence (1), see [1, SS2.1]. From the first of these points of view, the objects in \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)\) are the symbols \(B_{\underline{w}}\) for \(\underline{w}\in\mathrm{Exp}(W)\). The monoidal product is defined by \(B_{\underline{v}}\star B_{\underline{w}}=B_{\underline{vw}}\) where \(\underline{vw}\) is the concatenation of \(\underline{v}\) and \(\underline{w}\). The morphisms are generated (under horizontal and vertical concatenation, and \(\Bbbk\)-linear combinations) by morphisms depicted by some diagrams recalled below, and are subject to a number of relations for which we refer to [1], [1, SS2.3] or [1]. The generating morphisms are (to be read from bottom to top):
1. for any homogeneous \(f\in R\), a morphism (depicted as a box labelled by \(f\)) from \(B_{\varnothing}\) to itself, of degree \(\deg(f)\);
2. for any \(s\in S\), "dot" morphisms (each depicted as an \(s\)-colored strand terminating in a dot) from \(B_{s}\) to \(B_{\varnothing}\) and from \(B_{\varnothing}\) to \(B_{s}\) respectively, of degree \(1\);
3. for any \(s\in S\), trivalent morphisms (each depicted as a trivalent vertex where three \(s\)-colored strands meet) from \(B_{s}\) to \(B_{(s,s)}\) and from \(B_{(s,s)}\) to \(B_{s}\) respectively, of degree \(-1\);
4. for any pair \((s,t)\) of distinct simple reflections such that \(st\) has finite order \(m_{st}\) in \(W\), a morphism (depicted as a \(2m_{st}\)-valent vertex whose adjacent strands are colored alternately by \(s\) and \(t\)) from \(B_{(s,t,\cdots)}\) to \(B_{(t,s,\cdots)}\) (where each expression has length \(m_{st}\), and colors alternate), of degree \(0\).
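For instance, one of the defining relations (the "barbell" relation) states that the composition \(B_{\varnothing}\to B_{s}\to B_{\varnothing}\) of the two dot morphisms equals the morphism associated with the polynomial \(\alpha_{s}\in R\); note that both sides are endomorphisms of \(B_{\varnothing}\) of degree \(2\).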
The graded vector space of morphisms from \(B_{\underline{w}}\) to \(B_{\underline{v}}\) will be denoted
\[\operatorname{Hom}^{\bullet}_{\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)} (B_{\underline{w}},B_{\underline{v}}).\]
Considering \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\) as a category with shift-of-grading autoequivalence (1), its objects are the symbols \(B_{\underline{w}}(n)\) where \(\underline{w}\in\operatorname{Exp}(W)\) and \(n\in\mathbb{Z}\), and the vector space of morphisms from \(B_{\underline{w}}(n)\) to \(B_{\underline{v}}(m)\) is \(\operatorname{Hom}^{m-n}_{\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)}(B _{\underline{w}},B_{\underline{v}})\). This is the point of view we will mostly use below.
More generally, given \(X,Y\) in \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\) we will set
\[\operatorname{Hom}^{\bullet}_{\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W )}(X,Y):=\bigoplus_{n\in\mathbb{Z}}\operatorname{Hom}_{\mathscr{D}_{ \operatorname{BS}}(\mathfrak{h},W)}(X,Y(n)).\]
This graded vector space has a natural structure of graded \(R\)-bimodule, where the left (resp. right) action of \(f\in R_{n}\) is given by adding a box labelled by \(f\) to the left (resp. right) of a diagram. With this structure, \(\operatorname{Hom}^{\bullet}_{\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W )}(X,Y)\) is graded free of finite rank as a left \(R\)-module and as a right \(R\)-module, see [1, Corollary 6.14].
We will denote by \(\mathscr{D}_{\operatorname{BS}}^{\oplus}(\mathfrak{h},W)\) the additive hull of \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\), and by \(\mathscr{D}(\mathfrak{h},W)\) the karoubian envelope of \(\mathscr{D}_{\operatorname{BS}}^{\oplus}(\mathfrak{h},W)\). The latter category is Krull-Schmidt, and there exists a family \((B_{w}:w\in W)\) of objects in \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\) characterized in [1, Theorem 6.26] and such that the assignment \((w,n)\mapsto B_{w}(n)\) induces a bijection between \(W\times\mathbb{Z}\) and the set of isomorphism classes of indecomposable objects in \(\mathscr{D}(\mathfrak{h},W)\).
We will also consider the category \(\overline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\) which has the same objects as \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\), and such that the morphism space from \(X\) to \(Y\) is the subspace of
degree-\(0\) elements in the graded vector space
\[\Bbbk\otimes_{R}\operatorname{Hom}_{\mathscr{D}_{\operatorname{BS}}(\mathfrak{h}, W)}^{\bullet}(X,Y)\]
(where \(\Bbbk\) is in degree \(0\) and \(R\) acts via the quotient \(R/V^{*}\cdot R=\Bbbk\)). The functor (1) induces an autoequivalence of \(\overline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\) which will be denoted similarly, and \(\overline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\) is naturally a right module category for the monoidal category \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\). We have a natural functor
\[\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\to\overline{\mathscr{D}}_{ \operatorname{BS}}(\mathfrak{h},W); \tag{3.1}\]
the image of \(B_{\underline{w}}\) under this functor will be denoted \(\overline{B}_{\underline{w}}\). As for \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\), we will denote by \(\overline{\mathscr{D}}_{\operatorname{BS}}^{\oplus}(\mathfrak{h},W)\) the additive hull of \(\overline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\), and by \(\overline{\mathscr{D}}(\mathfrak{h},W)\) the karoubian envelope of \(\overline{\mathscr{D}}_{\operatorname{BS}}^{\oplus}(\mathfrak{h},W)\).
The functor (3.1) induces a functor \(\mathscr{D}_{\operatorname{BS}}^{\oplus}(\mathfrak{h},W)\to\overline{ \mathscr{D}}_{\operatorname{BS}}^{\oplus}(\mathfrak{h},W)\), and then a functor \(\mathscr{D}(\mathfrak{h},W)\to\overline{\mathscr{D}}(\mathfrak{h},W)\). The category \(\overline{\mathscr{D}}(\mathfrak{h},W)\) is Krull-Schmidt, being karoubian and with finite-dimensional morphism spaces, see [CYZ, Corollary A.2]. For \(w\in W\), we will denote by \(\overline{B}_{w}\) the image of \(B_{w}\) in \(\overline{\mathscr{D}}(\mathfrak{h},W)\). Then \(\operatorname{End}_{\overline{\mathscr{D}}(\mathfrak{h},W)}(\overline{B}_{w})\) is a quotient of \(\operatorname{End}_{\mathscr{D}(\mathfrak{h},W)}(B_{w})\), hence is a local ring, which implies that \(\overline{B}_{w}\) is an indecomposable object. Using this, it is easily seen that the assignment \((w,n)\mapsto\overline{B}_{w}(n)\) induces a bijection between \(W\times\mathbb{Z}\) and the set of isomorphism classes of indecomposable objects in \(\overline{\mathscr{D}}(\mathfrak{h},W)\).
We will also denote by \(\underline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\), \(\underline{\mathscr{D}}_{\operatorname{BS}}^{\oplus}(\mathfrak{h},W)\) and \(\underline{\mathscr{D}}(\mathfrak{h},W)\) the categories obtained in the same way using the tensor product on the _right_ with \(\Bbbk\) over \(R\). Here \(\underline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\) is naturally a _left_ module category over \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\), and the same considerations as above for \(\overline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\) apply. We will use obvious variants of the notations introduced above; in particular, for \(w\in W\), resp. for \(\underline{w}\in\operatorname{Exp}(W)\), we define the object \(\underline{B}_{w}\in\underline{\mathscr{D}}(\mathfrak{h},W)\), resp. \(\underline{B}_{\underline{w}}\in\underline{\mathscr{D}}_{\operatorname{BS}}( \mathfrak{h},W)\), in the same way as for \(\overline{B}_{w}\), resp. \(\overline{B}_{\underline{w}}\).
_Remark 3.1_.:
1. Of course, all the constructions above can also be considered for the realization \(\mathfrak{h}^{*}\) of SS2.3, giving rise to the category \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h}^{*},W)\) and all its cousins. To distinguish the two cases, the object of \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h}^{*},W)\) attached to an expression \(\underline{w}\) will be denoted \(B_{\underline{w}}^{\wedge}\). Similar conventions will be used for the objects \(B_{w}\), \(\overline{B}_{w}\), \(\underline{B}_{w}\).
2. It is a standard fact that the category \(\mathscr{D}_{\operatorname{BS}}(\mathfrak{h},W)\) admits a canonical autoequivalence induced by reflecting diagrams along a horizontal axis. This autoequivalence exchanges left and right multiplication by polynomials, hence induces an equivalence between \(\underline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\) and \(\overline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\).
### Abe's category
Below we will also use a different incarnation of the Hecke category attached to \((W,S)\) and \(\mathfrak{h}\), which we will denote by \(\mathscr{A}_{\operatorname{BS}}(\mathfrak{h},W)\), and which was introduced by Abe in [A1].
_Remark 3.2_.: Although this is not written explicitly, the conventions on realizations in [A1,A2] are different from those of [EW2, EW3] (which we follow here). Namely, in [A1, A2] a realization is a triple \((V,(\alpha_{s}:s\in S),(\alpha_{s}^{\vee}:s\in S))\) where \(\alpha_{s}\in V\) and \(\alpha_{s}^{\vee}\in V^{*}\), and the algebra \(R\) is defined as the symmetric algebra of \(V\). In other words, the module "\(V\)" of [A1, A2] is the module "\(V^{*}\)" of [EW2, EW3]. Here we have decided to follow the conventions of [EW2, EW3]; we will therefore translate all the results and constructions from [A1, A2] into these conventions.
In order to construct the category \(\mathscr{A}_{\mathrm{BS}}(\mathfrak{h},W)\), Abe first introduces the category \(\mathscr{C}(\mathfrak{h},W)\) (denoted \(\mathcal{C}^{\prime}\) in [1, 2]; see footnote 2) whose objects are the triples
Footnote 2: In fact the definition of \(\mathcal{C}^{\prime}\) in [1] is slightly different, but this creates difficulties. The definition we use below solves these problems, as explained in [2].
\[(M,(M^{w}_{Q})_{w\in W},\xi_{M})\]
where \(M\) is a graded \(R\)-bimodule, each \(M^{w}_{Q}\) is a graded \((R,Q)\)-bimodule such that \(m\cdot f=w(f)\cdot m\) for any \(m\in M^{w}_{Q}\) and \(f\in R\), these bimodules being \(0\) except for finitely many \(w\)'s, and
\[\xi_{M}:M\otimes_{R}Q\to\bigoplus_{w\in W}M^{w}_{Q} \tag{3.2}\]
is an isomorphism of graded \((R,Q)\)-bimodules. A morphism in \(\mathscr{C}(\mathfrak{h},W)\) from the object \((M,(M^{w}_{Q})_{w\in W},\xi_{M})\) to \((N,(N^{w}_{Q})_{w\in W},\xi_{N})\) is a morphism of graded \(R\)-bimodules \(\varphi:M\to N\) such that
\[\left(\xi_{N}\circ(\varphi\otimes_{R}Q)\circ\xi_{M}^{-1}\right)(M^{w}_{Q}) \subset N^{w}_{Q}\]
for any \(w\in W\). This category has a natural monoidal structure induced by \(\otimes_{R}\), with neutral object the \(R\)-bimodule \(R\) (upgraded in the obvious way to an object of \(\mathscr{C}(\mathfrak{h},W)\)). The shift-of-grading functor \(\langle 1\rangle\) (see SS2.2) induces in the natural way an autoequivalence of \(\mathscr{C}(\mathfrak{h},W)\), which will be denoted similarly. For simplicity, we often write \(M\) for \((M,(M^{w}_{Q})_{w\in W},\xi_{M})\).
For \(s\in S\), let
\[R^{s}=\{f\in R\mid s\cdot f=f\}\]
be the subring of \(s\)-invariant elements in \(R\), and choose an element \(\delta_{s}\in V^{*}\) such that \(\langle\delta_{s},\alpha^{\vee}_{s}\rangle=1\). (Such a vector exists because our realization is assumed to satisfy Demazure surjectivity.) The \(R\)-bimodule \(B_{s}=R\otimes_{R^{s}}R\langle-1\rangle\) can be upgraded to an object in \(\mathscr{C}(\mathfrak{h},W)\) by setting
\[(B_{s})^{e}_{Q}=Q(\delta_{s}\otimes 1-1\otimes s(\delta_{s})),\quad(B_{s})^{s} _{Q}=Q(\delta_{s}\otimes 1-1\otimes\delta_{s}) \tag{3.3}\]
and \((B_{s})^{w}_{Q}=0\) for all \(w\notin\{e,s\}\); see [1, SS2.4] or [4, Chap. II, SS3.1.4].
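Let us note that these formulas are consistent with the requirement (3.2): for the natural (contragredient) action of \(W\) on \(V^{*}\) one has \(s(\delta_{s})=\delta_{s}-\langle\delta_{s},\alpha_{s}^{\vee}\rangle\alpha_{s}=\delta_{s}-\alpha_{s}\), so that the difference of the two generators in (3.3) is \(1\otimes\alpha_{s}\), which becomes invertible in \(Q\); from this one easily checks that \((B_{s})^{e}_{Q}\) and \((B_{s})^{s}_{Q}\) indeed span \(B_{s}\otimes_{R}Q\).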
The category \(\mathscr{A}_{\mathrm{BS}}(\mathfrak{h},W)\) is defined as the smallest full subcategory of \(\mathscr{C}(\mathfrak{h},W)\) which contains the neutral object \(R\) and the objects \((B_{s}:s\in S)\) and is stable under the monoidal product \(\otimes_{R}\) and the shift functor \(\langle 1\rangle\). In other words, the objects in \(\mathscr{A}_{\mathrm{BS}}(\mathfrak{h},W)\) are the objects of the form
\[B_{s_{1}}\otimes_{R}\cdots\otimes_{R}B_{s_{r}}\langle n\rangle\]
with \(r\in\mathbb{Z}_{\geq 0}\), \(s_{1},\cdots,s_{r}\in S\) and \(n\in\mathbb{Z}\). As in the diagrammatic category we will set
\[\mathrm{Hom}^{\bullet}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h},W)}(X,Y)= \bigoplus_{n\in\mathbb{Z}}\mathrm{Hom}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h },W)}(X,Y\langle-n\rangle),\]
considered as a graded vector space with \(\mathrm{Hom}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h},W)}(X,Y\langle-n\rangle)\) in degree \(n\). We will denote by \(\mathscr{A}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)\) the additive hull of \(\mathscr{A}_{\mathrm{BS}}(\mathfrak{h},W)\), and by \(\mathscr{A}(\mathfrak{h},W)\) the karoubian envelope of \(\mathscr{A}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)\).
For an expression \(\underline{w}=(s_{1},\cdots,s_{n})\), we define the object \(B_{\underline{w}}\) in \(\mathscr{A}_{\mathrm{BS}}(\mathfrak{h},W)\) as
\[B_{\underline{w}}:=B_{s_{1}}\otimes_{R}\cdots\otimes_{R}B_{s_{n}}=R\otimes_{R^ {s_{1}}}\cdots\otimes_{R^{s_{n}}}R\langle-n\rangle\]
if \(n\geq 1\), and \(B_{\varnothing}=R\). The so-called \(1\)-tensor element \(u_{\underline{w}}=(1\otimes 1)\otimes_{R}\cdots\otimes_{R}(1\otimes 1)\in B_{ \underline{w}}\) plays a singular role in the theory. (In case \(\underline{w}=\varnothing\) is the empty expression, the element \(u_{\varnothing}\) is interpreted as \(1\in R\).)
As the reader might have noticed, we have used the same notation as for some objects in \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)\). This should not lead to any confusion, because of the following result due to Abe (see [1, Theorem 3.15]).
**Theorem 3.3**.: _There exists an equivalence of monoidal categories_
\[\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)\xrightarrow{\sim}\mathscr{A}_{ \mathrm{BS}}(\mathfrak{h},W)\]
_which intertwines the autoequivalences \((1)\) and \(\langle-1\rangle\) and sends \(B_{\underline{w}}\) to \(B_{\underline{w}}\) for any \(\underline{w}\in\mathrm{Exp}(W)\)._
_Remark 3.4_.: The proof of Theorem 3.3 has two parts: (a) the construction of the functor, and (b) the proof that it is an equivalence. Once (a) has been solved in the appropriate way, (b) is guaranteed by the results of [1]. The current proof of (a) relies on the computations in [1]. We believe that considerations similar (or, in a sense, "Koszul dual") to those in Sections 5-6 (using, for \(W\) finite, the indecomposable object \(B_{w_{0}}\) rather than \(\mathcal{T}_{w_{0}}\)) can be used to provide an alternative construction of this functor. Details will appear elsewhere if this finds any application.
## 4. Perverse and tilting sheaves
In this section we briefly recall the definitions of a series of homotopy-type categories constructed from the Hecke category. These categories are the main objects of study of [1] and [1]. We also extend some results of [1, Chap. 10-11] to our present setting.
### The biequivariant category
The biequivariant category \(\mathsf{BE}(\mathfrak{h},W)\) is defined in [1, SS4.2] as
\[\mathsf{BE}(\mathfrak{h},W):=K^{\mathrm{b}}\mathscr{D}_{\mathrm{BS}}^{\oplus}( \mathfrak{h},W);\]
the natural functor from \(\mathsf{BE}(\mathfrak{h},W)\) to \(K^{\mathrm{b}}\mathscr{D}(\mathfrak{h},W)\) is an equivalence, see [1, Lemma 4.9.1], and we will therefore identify these categories whenever convenient. The biequivariant category is monoidal for a certain product \(\underline{\star}\) which, restricted to \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h},W)\), coincides with \(\star\), and which is triangulated on both sides; see [1, SS4.2] for details.
The cohomological shift functor on the triangulated category \(\mathsf{BE}(\mathfrak{h},W)\) is denoted \([1]\). The shift-of-grading functor \((1)\) on \(\mathsf{BE}(\mathfrak{h},W)\) is the functor sending a complex \((\mathcal{F}^{n},d^{n})_{n\in\mathbb{Z}}\) to the complex \((\mathcal{F}^{n}(1),-d^{n})_{n\in\mathbb{Z}}\). We also have the shift functor \(\langle 1\rangle=(-1)[1]\).
As in [1, SS4.2], given \(\mathcal{F},\mathcal{G}\) in \(\mathsf{BE}(\mathfrak{h},W)\), we will denote by
\[\mathbb{H}\mathrm{om}_{\mathsf{BE}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G})\]
the bigraded \(\Bbbk\)-vector space whose homogeneous components are
\[\mathbb{H}\mathrm{om}_{\mathsf{BE}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G}) ^{i}_{j}:=\mathrm{Hom}_{\mathsf{BE}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G}[ i]\langle-j\rangle)\]
for all \(i,j\in\mathbb{Z}\). We also set \(\mathbb{E}\mathrm{nd}_{\mathsf{BE}(\mathfrak{h},W)}(\mathcal{F})=\mathbb{H} \mathrm{om}_{\mathsf{BE}(\mathfrak{h},W)}(\mathcal{F},\mathcal{F})\).
Following [1, Example 4.2.2], we define the standard object \(\Delta_{\underline{w}}\) and the costandard object \(\nabla_{\underline{w}}\) for any expression \(\underline{w}=(s_{1},\ldots,s_{n})\) as
\[\Delta_{\underline{w}}=\Delta_{s_{1}\,\underline{\star}}\cdots\,\underline{ \star}\,\Delta_{s_{n}}\quad\text{and}\quad\nabla_{\underline{w}}=\nabla_{s_{1 }\,\underline{\star}}\cdots\,\underline{\star}\,\nabla_{s_{n}},\]
where \(\Delta_{s}\) and \(\nabla_{s}\) denote the complexes
\[\cdots 0\to B_{s}\stackrel{{\mbox{\tiny$\bullet$}}}{{\longrightarrow}}B_{ \varnothing}(1)\to 0\cdots\quad\text{and}\quad\cdots 0\to B_{\varnothing}(-1) \stackrel{{\mbox{\tiny$\bullet$}}}{{\longrightarrow}}B_{s}\to 0\cdots \tag{4.1}\]
concentrated in degrees \(0\) and \(1\), and \(-1\) and \(0\), respectively. (By convention, \(\Delta_{\varnothing}=\nabla_{\varnothing}=B_{\varnothing}\).)
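Note that the differentials here are honest (degree-\(0\)) morphisms: for instance, in \(\Delta_{s}\) the differential is the dot morphism \(B_{s}\to B_{\varnothing}\), which has degree \(1\), regarded as a morphism \(B_{s}\to B_{\varnothing}(1)\); similarly for \(\nabla_{s}\).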
On the other hand, standard and costandard objects \(\Delta_{w}\) and \(\nabla_{w}\) in \(\mathsf{BE}(\mathfrak{h},W)\) were defined for every \(w\in W\) in [ARV, SS6.3] (see footnote 3). We have
Footnote 3: In [ARV, §6.3] such objects are defined for certain subsets \(I\subset W\) containing \(w\). Here we take \(I=W\), and omit it from the notation, following the conventions in [ARV]. The same comment applies to various constructions from [ARV] considered below.
\[\Delta_{w}\simeq\Delta_{\underline{w}}\quad\text{and}\quad\nabla_{w}\simeq \nabla_{\underline{w}} \tag{4.2}\]
if \(\underline{w}\) is a reduced expression for \(w\), see [ARV, Proposition 6.11].
A t-structure on \(\mathsf{BE}(\mathfrak{h},W)\) is constructed in [ARV, SS7.2]; its heart will be denoted \(\mathsf{P}_{\mathsf{BE}}(\mathfrak{h},W)\). It turns out that the (co)standard objects \(\Delta_{w}\) and \(\nabla_{w}\) (\(w\in W\)) belong to \(\mathsf{P}_{\mathsf{BE}}(\mathfrak{h},W)\), see [ARV, Proposition 7.8], and that the shift functor \(\langle 1\rangle\) is t-exact, see [ARV, Lemma 7.3]. For every \(w\in W\), there is (up to scalar) a unique nonzero morphism \(f_{w}:\Delta_{w}\to\nabla_{w}\)[ARV, Lemma 6.6]; we let
\[\mathscr{L}_{w}:=\operatorname{Im}(f_{w})\]
be the image of this morphism in \(\mathsf{P}_{\mathsf{BE}}(\mathfrak{h},W)\). Then the assignment \((w,n)\mapsto\mathscr{L}_{w}\langle n\rangle\) induces a bijection between \(W\times\mathbb{Z}\) and the set of isomorphism classes of simple objects in \(\mathsf{P}_{\mathsf{BE}}(\mathfrak{h},W)\), see [ARV, SS8.1].
The following statement, which follows from the construction of standard and costandard objects via the recollement formalism of [ARV, SS5], will be required below.
**Lemma 4.1**.: _For any \(w\in W\), the cone of any nonzero morphism \(\Delta_{w}\to\nabla_{w}\) in \(\mathsf{BE}(\mathfrak{h},W)\) belongs to the triangulated subcategory generated by the objects of the form \(\Delta_{v}(n)\) with \(v\in W\) satisfying \(v<w\) and \(n\in\mathbb{Z}\)._
### The right-equivariant category
The right-equivariant category is defined in [AMRW1, SS4.3] as
\[\mathsf{RE}(\mathfrak{h},W):=K^{\mathrm{b}}\overline{\mathscr{D}}_{\mathrm{ BS}}^{\oplus}(\mathfrak{h},W);\]
the natural functor from \(\mathsf{RE}(\mathfrak{h},W)\) to \(K^{\mathrm{b}}\overline{\mathscr{D}}(\mathfrak{h},W)\) is an equivalence, see [AMRW1, Lemma 4.9.1], and we will therefore identify these two categories whenever convenient. The category \(\mathsf{RE}(\mathfrak{h},W)\) is in a natural way a right module category for the monoidal category \((\mathsf{BE}(\mathfrak{h},W),\,\underline{\star})\); the action bifunctor will also be denoted \(\underline{\star}\), and it is triangulated on both sides. There exists a natural "forgetful" functor
\[\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}:\mathsf{BE}(\mathfrak{h},W)\to\mathsf{ RE}(\mathfrak{h},W)\]
induced by (3.1); this functor satisfies
\[\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}(\mathcal{F}\,\underline{\star}\, \mathcal{G})=\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}(\mathcal{F})\,\underline{ \star}\,\mathcal{G} \tag{4.3}\]
for \(\mathcal{F},\mathcal{G}\) in \(\mathsf{BE}(\mathfrak{h},W)\). The shift functors [1], (1) and \(\langle 1\rangle\), and the bigraded vector spaces \(\mathbb{H}\mathrm{om}_{\mathsf{RE}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G})\), are defined as for the biequivariant category. Then the functor \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}\) commutes with the functors [1], (1) and \(\langle 1\rangle\) in the obvious way.
A t-structure on \(\mathsf{RE}(\mathfrak{h},W)\) is also constructed in [ARV, SS9]; its heart will be denoted \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\). For this structure, the functors \(\langle 1\rangle\) and \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}\) are t-exact.
The category \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) shares many properties with the categories of Bruhat-constructible perverse sheaves on flag varieties of Kac-Moody groups, but "with an extra grading"; in particular it has a natural structure of graded highest weight category (in the sense of [AR, Appendix]; see footnote 4) with weight poset \((W,\leq)\) and normalized standard and costandard objects
Footnote 4: Compared to this reference, we make two modifications. First we use the term “highest weight” instead of “quasihereditary.” Second, we allow our weight poset \(\mathcal{S}\) to be infinite, but with the condition that for any \(s\in\mathcal{S}\) the set \(\{t\in\mathcal{S}\mid t\leq s\}\) is finite; this does not affect the theory, except for the existence of enough projective objects, which is not used here. See e.g. [R4, §A] for the ungraded setting.
\[\overline{\Delta}_{w}:=\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}(\Delta_{w}) \quad\text{and}\quad\overline{\nabla}_{w}=\mathsf{For}_{\mathsf{RE}}^{\mathsf{ BE}}(\nabla_{w})\]
for \(w\in W\), see [ARV, Theorem 9.6]. (In case \(W\) is the Weyl group of a reductive group, and for an appropriate choice of \(\mathfrak{h}\), one can in fact relate explicitly \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) with the corresponding category of perverse sheaves; see [AR] for details.) The restriction of the functor \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}\) to the hearts of the perverse t-structures defines a fully faithful functor \(\mathsf{P}_{\mathsf{BE}}(\mathfrak{h},W)\to\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\), see [ARV, Proposition 9.4]. Up to isomorphism, the simple objects in \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) are the objects
\[\overline{\mathscr{L}}_{w}\langle n\rangle:=\mathsf{For}_{\mathsf{RE}}^{ \mathsf{BE}}(\mathscr{L}_{w}\langle n\rangle)\]
for \((w,n)\in W\times\mathbb{Z}\).
### Tilting objects
Since \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) has a natural structure of graded highest weight category, it makes sense to consider its tilting objects, i.e. the objects admitting both a filtration with subquotients of the form \(\overline{\Delta}_{w}\langle n\rangle\) (\(w\in W\), \(n\in\mathbb{Z}\)) and a filtration with subquotients of the form \(\overline{\nabla}_{w}\langle n\rangle\) (\(w\in W\), \(n\in\mathbb{Z}\)). The full subcategory of \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) whose objects are tilting will be denoted \(\mathsf{Tilt}_{\mathsf{RE}}(\mathfrak{h},W)\). The category \(\mathsf{Tilt}_{\mathsf{RE}}(\mathfrak{h},W)\) is Krull-Schmidt, and the set of isomorphism classes of its indecomposable objects is in a natural bijection with \(W\times\mathbb{Z}\); for \(w\in W\) we will denote by \(\overline{\mathcal{T}}_{w}\) the indecomposable object corresponding to \((w,0)\). Then, for any \(n\in\mathbb{Z}\), the indecomposable object corresponding to \((w,n)\) is \(\overline{\mathcal{T}}_{w}\langle n\rangle\). (For all of this, see [AR, Appendix].) It follows from [AR, Lemma A.5 and Lemma A.6] that the natural functors
\[K^{\mathsf{b}}\mathsf{Tilt}_{\mathsf{RE}}(\mathfrak{h},W)\to D^{\mathsf{b}} \mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\to\mathsf{RE}(\mathfrak{h},W) \tag{4.4}\]
are equivalences of triangulated categories.
_Example 4.2_.: The simple object in \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) corresponding to the neutral element of \(W\) is \(\overline{B}_{\varnothing}\), viewed as a complex concentrated in degree \(0\), and it coincides with the object \(\overline{\Delta}_{1}=\overline{\nabla}_{1}=\overline{\mathcal{T}}_{1}\).
_Example 4.3_.: Let \(s\in S\). The indecomposable tilting object \(\overline{\mathcal{T}}_{s}\) in \(\mathsf{Tilt}_{\mathsf{RE}}(\mathfrak{h},W)\) is the complex
\[\cdots 0\to\overline{B}_{\varnothing}(-1)\stackrel{{\mbox{\tiny$\bullet$}}}{{\longrightarrow}}\overline{B}_{s}\stackrel{{\mbox{\tiny$\bullet$}}}{{\longrightarrow}}\overline{B}_{\varnothing}(1)\to 0\cdots\]
concentrated in degrees \(-1\), \(0\) and \(1\). It fits in exact sequences
\[0\to\overline{\Delta}_{s}\to\overline{\mathcal{T}}_{s}\to\overline{\Delta}_{1 }\langle 1\rangle\to 0\quad\text{and}\quad 0\to\overline{\nabla}_{1}\langle-1 \rangle\to\overline{\mathcal{T}}_{s}\to\overline{\nabla}_{s}\to 0.\]
For this, see [AMRW1, Example 4.3.4].
### The left-equivariant category
We now define the left-equivariant category \(\mathsf{LE}(\mathfrak{h},W)\) by
\[\mathsf{LE}(\mathfrak{h},W):=K^{\mathrm{b}}\underline{\mathscr{D}}^{\oplus}_{\mathrm{BS}}(\mathfrak{h},W).\]
Of course, all the constructions and properties of \(\mathsf{RE}(\mathfrak{h},W)\) have analogues for \(\mathsf{LE}(\mathfrak{h},W)\). (In fact, these two categories are equivalent by Remark 3.1(2).) In particular, we will denote by \(\mathsf{For}^{\mathsf{BE}}_{\mathsf{LE}}:\mathsf{BE}(\mathfrak{h},W)\to\mathsf{LE}(\mathfrak{h},W)\) the natural "forgetful" functor (induced by the natural functor \(\mathscr{D}^{\oplus}_{\mathrm{BS}}(\mathfrak{h},W)\to\underline{\mathscr{D}}^{\oplus}_{\mathrm{BS}}(\mathfrak{h},W)\)), and by
\[\underline{\Delta}_{w}:=\mathsf{For}^{\mathsf{BE}}_{\mathsf{LE}}(\Delta_{w}), \quad\text{resp.}\quad\underline{\nabla}_{w}=\mathsf{For}^{\mathsf{BE}}_{ \mathsf{LE}}(\nabla_{w})\]
the standard, resp. costandard, object associated with \(w\in W\). The category \(\mathsf{LE}(\mathfrak{h},W)\) admits a natural "perverse" t-structure whose heart, denoted \(\mathsf{P}_{\mathsf{LE}}(\mathfrak{h},W)\), admits a canonical structure of graded highest weight category with weight poset \((W,\leq)\) and normalized standard, resp. costandard, objects the objects \((\underline{\Delta}_{w}:w\in W)\), resp. \((\underline{\nabla}_{w}:w\in W)\). Its full subcategory of tilting objects will be denoted \(\mathsf{Tilt}_{\mathsf{LE}}(\mathfrak{h},W)\). The indecomposable objects in \(\mathsf{Tilt}_{\mathsf{LE}}(\mathfrak{h},W)\) are in a canonical bijection with \(W\times\mathbb{Z}\), and the indecomposable object associated with \((w,0)\) will be denoted \(\underline{\mathcal{T}}_{w}\) (for \(w\in W\)).
Recall that \(\underline{\mathscr{D}}^{\oplus}_{\mathrm{BS}}(\mathfrak{h},W)\) is a left module category for the monoidal category \(\mathscr{D}^{\oplus}_{\mathrm{BS}}(\mathfrak{h},W)\). This structure "extends" to a bifunctor
\[\underline{\star}\,:\mathsf{BE}(\mathfrak{h},W)\times\mathsf{LE}(\mathfrak{h},W)\to\mathsf{LE}(\mathfrak{h},W)\]
which defines a left action of the monoidal category \((\mathsf{BE}(\mathfrak{h},W),\,\underline{\star}\,)\) on \(\mathsf{LE}(\mathfrak{h},W)\). This bifunctor is triangulated on both sides, and for \(\mathcal{F},\mathcal{G}\in\mathsf{BE}(\mathfrak{h},W)\) we have a canonical (in particular, bifunctorial) isomorphism
\[\mathsf{For}^{\mathsf{BE}}_{\mathsf{LE}}(\mathcal{F}\,\underline{\star}\, \mathcal{G})\cong\mathcal{F}\,\underline{\star}\,\mathsf{For}^{\mathsf{BE}}_{ \mathsf{LE}}(\mathcal{G}).\]
_Remark 4.4_.: As in Remark 3.1(1), below we will also consider all the constructions above for the realization \(\mathfrak{h}^{*}\) instead of \(\mathfrak{h}\). We will add a superscript "\(\wedge\)" to all our notations in this case.
### Free-monodromic category
We will denote by
\[\mathsf{FM}(\mathfrak{h},W)\]
the "free-monodromic" category defined in [1, SS5.1]. It is a \(\Bbbk\)-linear additive category, whose objects are sequences of objects in \(\mathscr{D}^{\oplus}_{\mathrm{BS}}(\mathfrak{h},W)\) endowed with a kind of differential; the precise construction is rather technical, and will not be recalled here. The category \(\mathsf{FM}(\mathfrak{h},W)\) has shift functors [1], (1) and \(\langle 1\rangle\) commuting with each other and such that \(\langle 1\rangle=(-1)[1]\). (Note that [1] is a priori not the suspension functor for a triangulated structure on \(\mathsf{FM}(\mathfrak{h},W)\); in fact it is not known at this point whether this category admits a triangulated structure.) As in the biequivariant category (see SS4.1), for \(\mathcal{F},\mathcal{G}\in\mathsf{FM}(\mathfrak{h},W)\) we define the bigraded vector space \(\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G})\) by setting
\[\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G}) ^{i}_{j}:=\mathrm{Hom}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G}[ i]\langle-j\rangle)\]
and \(\mathbb{E}\mathrm{nd}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F})=\mathbb{H} \mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{F})\). We point out that we have
\[\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G} \langle 1\rangle)=\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F}, \mathcal{G})\langle 1\rangle \tag{4.5}\]
for all \(\mathcal{F},\mathcal{G}\in\mathsf{FM}(\mathfrak{h},W)\).
As in SS2.2 we consider the graded ring \(R^{\vee}\) as a bigraded ring with nonzero components concentrated in \(\{0\}\times\mathbb{Z}\). Then as explained in [1, SS5.2], for
\(\mathcal{F},\mathcal{G}\in\mathsf{FM}(\mathfrak{h},W)\) the bigraded vector space \(\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G})\) has a natural structure of bigraded \(R^{\vee}\)-bimodule. Both actions are denoted \(\widehat{\star}\) and are compatible with composition in the sense that \(x\,\widehat{\star}\,(f\circ g)=(x\,\widehat{\star}\,f)\circ g=f\circ(x\, \widehat{\star}\,g)\) for \(x\in R^{\vee}\), and similarly for the right action. In particular, the left action is induced by certain bigraded algebra homomorphisms
\[\mu_{\mathcal{F}}:R^{\vee}\to\mathbb{E}\mathrm{nd}_{\mathsf{FM}(\mathfrak{h},W )}(\mathcal{F})\]
for any object \(\mathcal{F}\); if \(f\in\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{ G})\) and \(x\in R^{\vee}\), then \(x\,\widehat{\star}\,f=\mu_{\mathcal{G}}(x)\circ f=f\circ\mu_{\mathcal{F}}(x)\).
It should be the case that \(\mathsf{FM}(\mathfrak{h},W)\) (or an appropriate subcategory) admits a structure of monoidal category. Unfortunately this construction turns out to be delicate, and no general answer is known at present. But at least part of this structure has been constructed in [1, Chap. 6-7]. Explicitly, there is a notion of "convolutive complexes" in \(\mathsf{FM}(\mathfrak{h},W)\), see [1, Definition 6.1.1], and the full subcategory of \(\mathsf{FM}(\mathfrak{h},W)\) whose objects are the convolutive complexes is equipped with an operation \(\widehat{\star}\) such that \(\mathcal{F}\,\widehat{\star}\,(-)\) and \((-)\,\widehat{\star}\,\mathcal{F}\) are functors for any fixed convolutive object \(\mathcal{F}\). Moreover, for a fixed convolutive complex \(\mathcal{F}\) the functor \(\mathcal{F}\,\widehat{\star}\,(-)\) has an extension to the whole category \(\mathsf{FM}(\mathfrak{h},W)\), see [1, Proposition 7.6.3]. The operation \(\widehat{\star}\) satisfies all the axioms of a monoidal category except for the "interchange law" stating that for \(f:\mathcal{F}\to\mathcal{G}\), \(g:\mathcal{G}\to\mathcal{H}\), \(h:\mathcal{F}^{\prime}\to\mathcal{G}^{\prime}\) and \(k:\mathcal{G}^{\prime}\to\mathcal{H}^{\prime}\) morphisms between convolutive complexes we have
\[(g\circ f)\,\widehat{\star}\,(k\circ h)=(g\,\widehat{\star}\,k)\circ(f\, \widehat{\star}\,h);\]
see [1, Chap. 7] for details. In particular:
* the operation \(\widehat{\star}\) is associative, so that we can omit parentheses when considering multiple instances of \(\widehat{\star}\), see [1, SS7.2];
* we have a convolutive object \(\widetilde{\mathcal{T}}_{\varnothing}\) constructed in [1, SS5.3.1] and which behaves like a unit object; see [1, SS7.1].
We will return to this question in SS4.9 below.
### Left-monodromic category
We will denote by
\[\mathsf{LM}(\mathfrak{h},W)\]
the "left monodromic" category defined in [1, SS4.4]. As for the free-monodromic category, the objects in this category are sequences of objects in \(\mathscr{D}^{\oplus}_{\mathrm{BS}}(\mathfrak{h},W)\) endowed with (another kind of) "differential". There are shift functors [1], (1) and \(\langle 1\rangle\) with analogous properties to those in \(\mathsf{FM}(\mathfrak{h},W)\), which allows to defined bigraded vector spaces \(\mathbb{H}\mathrm{om}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G})\) for \(\mathcal{F},\mathcal{G}\in\mathsf{LM}(\mathfrak{h},W)\) by the same recipe as in \(\mathsf{FM}(\mathfrak{h},W)\). This time, the bigraded space \(\mathbb{H}\mathrm{om}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G})\) has a canonical structure of bigraded left \(R^{\vee}\)-module. There exists a functor
\[\mathsf{For}^{\mathsf{FM}}_{\mathsf{LM}}:\mathsf{FM}(\mathfrak{h},W)\to \mathsf{LM}(\mathfrak{h},W)\]
which satisfies
\[\mathsf{For}^{\mathsf{FM}}_{\mathsf{LM}}\circ(1)=(1)\circ\mathsf{For}^{ \mathsf{FM}}_{\mathsf{LM}},\quad\mathsf{For}^{\mathsf{FM}}_{\mathsf{LM}}\circ [1]=[1]\circ\mathsf{For}^{\mathsf{FM}}_{\mathsf{LM}},\quad\mathsf{For}^{ \mathsf{FM}}_{\mathsf{LM}}\circ\langle 1\rangle=\langle 1\rangle\circ\mathsf{For}^{ \mathsf{FM}}_{\mathsf{LM}}.\]
Moreover, \(\mathsf{LM}(\mathfrak{h},W)\) has a natural structure of triangulated category with suspension functor [1]. There is also a natural right action of the monoidal category \((\mathsf{BE}(\mathfrak{h},W),\,\underline{\star})\) on \(\mathsf{LM}(\mathfrak{h},W)\); the corresponding bifunctor will again be denoted \(\,\underline{\star}\,\). (See [1, (4.23)] for an explicit construction.)
There is a notion of convolutive objects in \(\mathsf{LM}(\mathfrak{h},W)\) and, for \(\mathcal{F}\in\mathsf{FM}(\mathfrak{h},W)\) and \(\mathcal{G}\in\mathsf{LM}(\mathfrak{h},W)\) both convolutive, a convolutive object \(\mathcal{F}\,\widehat{\star}\,\mathcal{G}\in\mathsf{LM}(\mathfrak{h},W)\). There is also a "convolution" operation on morphisms, such that for \(\mathcal{F}\in\mathsf{FM}(\mathfrak{h},W)\) and \(\mathcal{G}\in\mathsf{LM}(\mathfrak{h},W)\) convolutive the operations \(\mathcal{F}\,\widehat{\star}\,(-)\) and \((-)\,\widehat{\star}\,\mathcal{G}\) are functorial; see [1, SS6.6]. Moreover, for a fixed convolutive object \(\mathcal{F}\in\mathsf{FM}(\mathfrak{h},W)\) the functor \(\mathcal{F}\,\widehat{\star}\,(-)\) extends to a triangulated functor from \(\mathsf{LM}(\mathfrak{h},W)\) to itself, see [1, Proposition 7.6.4]. The functor \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\) sends convolutive complexes to convolutive complexes and satisfies
\[\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\mathcal{F}\,\widehat{\star}\, \mathcal{G})\simeq\mathcal{F}\,\widehat{\star}\,\mathsf{For}_{\mathsf{LM}}^{ \mathsf{FM}}(\mathcal{G}) \tag{4.6}\]
for all \(\mathcal{F},\mathcal{G}\) in \(\mathsf{FM}(\mathfrak{h},W)\) convolutive, see [1, (6.18)].
There is also an equivalence of triangulated categories
\[\mathsf{For}_{\mathsf{RE}}^{\mathsf{LM}}:\mathsf{LM}(\mathfrak{h},W) \xrightarrow{\sim}\mathsf{RE}(\mathfrak{h},W) \tag{4.7}\]
given in [1, Theorem 4.6.2] and we have a functor
\[\mathsf{For}_{\mathsf{LM}}^{\mathsf{BE}}:\mathsf{BE}(\mathfrak{h},W)\to \mathsf{LM}(\mathfrak{h},W)\]
such that \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{LM}}\circ\mathsf{For}_{\mathsf{LM}}^{ \mathsf{BE}}=\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}\), cf. [1, SS4.6]. By [1, Lemma 6.6.1], for any \(\mathcal{G}\in\mathsf{BE}(\mathfrak{h},W)\) the object \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{BE}}(\mathcal{G})\) is convolutive, and for all \(\mathcal{F}\in\mathsf{FM}(\mathfrak{h},W)\) we have a canonical isomorphism
\[\mathcal{F}\,\widehat{\star}\,\mathsf{For}_{\mathsf{LM}}^{\mathsf{BE}}( \mathcal{G})\simeq\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\mathcal{F})\, \underline{\star}\,\mathcal{G}.\]
The functors \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{LM}}\) and \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{BE}}\) commute with the shift functors in the obvious way.
We can use that \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{LM}}\) is an equivalence to translate all of the structures and properties of \(\mathsf{RE}(\mathfrak{h},W)\) to \(\mathsf{LM}(\mathfrak{h},W)\). Explicitly, we endow \(\mathsf{LM}(\mathfrak{h},W)\) with the t-structure obtained from that of \(\mathsf{RE}(\mathfrak{h},W)\) (see SS4.2) and denote by \(\mathsf{P}_{\mathsf{LM}}(\mathfrak{h},W)\) its heart, i.e. the inverse image of \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) under the equivalence \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{LM}}\). Of course, this category is stable under \(\langle 1\rangle\) and inherits the graded highest weight structure from \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\). The normalized standard and costandard objects are
\[\mathbf{\Delta}_{w}:=\mathsf{For}_{\mathsf{LM}}^{\mathsf{BE}}(\Delta_{w})\quad\text{and}\quad\boldsymbol{\nabla}_{w}:=\mathsf{For}_{\mathsf{LM}}^{\mathsf{BE}}(\nabla_{w}) \tag{4.8}\]
for all \(w\in W\), since \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{LM}}(\mathsf{For}_{\mathsf{LM}}^{ \mathsf{BE}}(\Delta_{w}))=\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}(\Delta_{w})= \overline{\Delta}_{w}\) and similarly for the costandard object. The objects
\[\mathcal{L}_{w}\langle n\rangle=\mathsf{For}_{\mathsf{LM}}^{\mathsf{BE}}( \mathscr{L}_{w}\langle n\rangle),\]
\((w,n)\in W\times\mathbb{Z}\), form a family of representatives of the isomorphism classes of simple objects in \(\mathsf{P}_{\mathsf{LM}}(\mathfrak{h},W)\). By Example 4.2, we have \(\mathcal{L}_{1}=\Delta_{1}=\nabla_{1}\).
The following property is useful to extend results from [1] to our more general context.
**Proposition 4.5**.: _Let \(w\in W\)._
1. _The socle of_ \(\Delta_{w}\) _is isomorphic to_ \(\mathcal{L}_{1}\langle-\ell(w)\rangle\)_, and the cokernel of the inclusion_ \(\mathcal{L}_{1}\langle-\ell(w)\rangle\hookrightarrow\Delta_{w}\) _has no composition factor of the form_ \(\mathcal{L}_{1}\langle n\rangle\) _with_ \(n\in\mathbb{Z}\)_._
2. _The head of_ \(\nabla_{w}\) _is isomorphic to_ \(\mathcal{L}_{1}\langle\ell(w)\rangle\)_, and the kernel of the surjection_ \(\nabla_{w}\twoheadrightarrow\mathcal{L}_{1}\langle\ell(w)\rangle\) _has no composition factor of the form_ \(\mathcal{L}_{1}\langle n\rangle\) _with_ \(n\in\mathbb{Z}\)_._
Proof.: The objects \(\mathscr{L}_{1}\), \(\Delta_{w}\) and \(\nabla_{w}\) satisfy an analogous statement in \(\mathsf{BE}(\mathfrak{h},W)\), see [1, Proposition 8.3]. Since the functor \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}:\mathsf{P}_{\mathsf{BE}}(\mathfrak{h},W) \to\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) is fully faithful, and since its essential image contains all simple objects, we deduce
that the socle of \(\mathbf{\Delta}_{w}\) is isomorphic to \(\mathcal{L}_{1}\langle-\ell(w)\rangle\) and the head of \(\nabla_{w}\) is isomorphic to \(\mathcal{L}_{1}\langle\ell(w)\rangle\). The other claims also follow, since \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}\) is exact and sends simple objects to simple objects.
### Left-monodromic tilting category
The following results were established for Cartan realizations of crystallographic Coxeter groups in [1, Chap. 10]. Using the theory developed in [1], we can extend them to our present setting, with identical proofs.
For \(s\in S\), recall the object \(\widetilde{\mathcal{T}}_{s}\in\mathsf{FM}(\mathfrak{h},W)\) defined in [1, SS5.3.2]; this object is convolutive by definition. We therefore have a triangulated functor \(\widetilde{\mathcal{T}}_{s}\,\widehat{\star}\,(-):\mathsf{LM}(\mathfrak{h},W) \to\mathsf{LM}(\mathfrak{h},W)\), see SS4.6.
**Lemma 4.6**.: _Let \(w\in W\) and \(s\in S\)._
1. _If_ \(sw>w\)_, we have distinguished triangles_ \[\mathbf{\Delta}_{sw}\to\widetilde{\mathcal{T}}_{s}\,\widehat{\star}\, \mathbf{\Delta}_{w}\to\mathbf{\Delta}_{w}\langle 1\rangle\stackrel{{[1]}}{{ \longrightarrow}}\quad\text{and}\quad\nabla_{w}\langle-1\rangle\to\widetilde{ \mathcal{T}}_{s}\,\widehat{\star}\,\nabla_{w}\to\nabla_{sw}\stackrel{{ [1]}}{{\longrightarrow}}\] _in_ \(\mathsf{LM}(\mathfrak{h},W)\)_, where in each case the second morphism is nonzero._
2. _If_ \(sw<w\)_, we have distinguished triangles_ \[\mathbf{\Delta}_{w}\langle-1\rangle\to\widetilde{\mathcal{T}}_{s}\,\widehat{ \star}\,\mathbf{\Delta}_{w}\to\mathbf{\Delta}_{sw}\stackrel{{ [1]}}{{\longrightarrow}}\quad\text{and}\quad\nabla_{sw}\to\widetilde{ \mathcal{T}}_{s}\,\widehat{\star}\,\nabla_{w}\to\nabla_{w}\langle 1\rangle \stackrel{{[1]}}{{\longrightarrow}}\] _in_ \(\mathsf{LM}(\mathfrak{h},W)\)_, where in each case the second morphism is nonzero._
Proof.: The existence of the distinguished triangles can be obtained as in [1, Lemma 10.5.3], citing [1] instead of [1, Proposition 4.4] when tensoring with (co)standard objects. We prove that the second morphism in the first triangle in (1) is nonzero; the other cases are similar. Assume for a contradiction that this is not the case; then we have \(\mathbf{\Delta}_{sw}\cong\widetilde{\mathcal{T}}_{s}\,\widehat{\star}\, \mathbf{\Delta}_{w}\oplus\mathbf{\Delta}_{w}\langle 1\rangle[-1]\). This contradicts the fact that \(\operatorname{Hom}(\mathbf{\Delta}_{sw},\mathbf{\Delta}_{w}\langle 1\rangle[-1])=0\), which follows e.g. from the fact that both \(\mathbf{\Delta}_{sw}\) and \(\mathbf{\Delta}_{w}\langle 1\rangle\) belong to \(\mathsf{P}_{\mathsf{LM}}(\mathfrak{h},W)\).
**Lemma 4.7**.: _The triangulated functor \(\widetilde{\mathcal{T}}_{s}\,\widehat{\star}\,(-):\mathsf{LM}(\mathfrak{h},W) \to\mathsf{LM}(\mathfrak{h},W)\) is t-exact with respect to the perverse t-structure._
Proof.: By [1, Lemma 7.5 and Remark 7.6], the nonpositive and nonnegative parts of the t-structure are generated under extensions by appropriate shifts of standard and costandard objects, respectively. Thus, the claim follows from Lemma 4.6.
We set \(\mathcal{T}_{1}:=\mathsf{For}_{\mathsf{LM}}^{\mathsf{BE}}(B_{\varnothing})\) and, given an expression \(\underline{w}=(s_{1},\ldots,s_{r})\), we set
\[\mathcal{T}_{\underline{w}}:=\widetilde{\mathcal{T}}_{s_{1}}\,\widehat{\star} \,\cdots\,\widehat{\star}\,\widetilde{\mathcal{T}}_{s_{r}}\,\widehat{\star} \,\mathcal{T}_{1}\in\mathsf{LM}(\mathfrak{h},W).\]
Let \(\mathsf{Tilt}_{\mathsf{LM}}^{\oplus}(\mathfrak{h},W)\) be the full subcategory of \(\mathsf{LM}(\mathfrak{h},W)\) whose objects are the direct sums of objects of the form \(\mathcal{T}_{\underline{w}}\langle n\rangle\) with \(\underline{w}\in\operatorname{Exp}(W)\) and \(n\in\mathbb{Z}\). We define the left-monodromic tilting category \(\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\) as its karoubian envelope. (Note that \(\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\) is a full subcategory of \(\mathsf{LM}(\mathfrak{h},W)\) as the latter has a bounded t-structure and hence is karoubian by the main result of [1].) This definition is justified by the following result.
**Proposition 4.8**.: _The functor \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{LM}}\) induces an equivalence of additive categories_
\[\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\stackrel{{\sim}}{{ \longrightarrow}}\mathsf{Tilt}_{\mathsf{RE}}(\mathfrak{h},W).\]
_As a consequence, the natural functors_
\[K^{\mathrm{b}}\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\to D^{\mathrm{b}} \mathsf{P}_{\mathsf{LM}}(\mathfrak{h},W)\to\mathsf{LM}(\mathfrak{h},W) \tag{4.9}\]
_are equivalences of triangulated categories._
Proof.: For the first claim, the proof of [1, Proposition 10.5.1] applies in the present setting, using Lemma 4.6 instead of [1, Lemma 10.5.3]. Then, the fact that the functors in (4.9) are equivalences follows from the similar property for the functors in (4.4).
_Remark 4.9_.: Standard arguments show that the obvious functor
\[K^{\mathrm{b}}\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\to K^{\mathrm{b}} \mathsf{Tilt}_{\mathsf{LM}}^{\oplus}(\mathfrak{h},W)\]
is an equivalence of triangulated categories. Hence we also obtain an equivalence of categories
\[K^{\mathrm{b}}\mathsf{Tilt}_{\mathsf{LM}}^{\oplus}(\mathfrak{h},W)\xrightarrow{ \sim}\mathsf{LM}(\mathfrak{h},W), \tag{4.10}\]
as in [1, Lemma 2.4].
Using Proposition 4.8 we can transfer to \(\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\) the usual properties satisfied by the tilting objects of a graded highest weight category, cf. [1, SS9.5], and obtain the following statement.
**Corollary 4.10**.: _The category \(\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\) is Krull-Schmidt. For any \(w\in W\), there exists a unique (up to isomorphism) indecomposable object \(\mathcal{T}_{w}\) characterized by the following properties:_
1. _for any reduced expression_ \(\underline{w}\) _expressing_ \(w\)_,_ \(\mathcal{T}_{w}\) _occurs as a direct summand of_ \(\mathcal{T}_{\underline{w}}\) _with multiplicity_ \(1\)_;_
2. \(\mathcal{T}_{w}\) _does not occur as a direct summand of any object_ \(\mathcal{T}_{\underline{v}}\langle n\rangle\) _with_ \(\underline{v}\) _an expression such that_ \(\ell(\underline{v})<\ell(w)\) _and_ \(n\in\mathbb{Z}\)_._
_Moreover, the assignment \((w,n)\mapsto\mathcal{T}_{w}\langle n\rangle\) induces a bijection between \(W\times\mathbb{Z}\) and the set of isomorphism classes of indecomposable objects in \(\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\).
### Free-monodromic tilting category
Given an expression \(\underline{w}=(s_{1},\ldots,s_{r})\), we set
\[\widetilde{\mathcal{T}}_{\underline{w}}=\widetilde{\mathcal{T}}_{s_{1}} \widehat{\star}\,\ldots\widehat{\star}\,\widetilde{\mathcal{T}}_{s_{r}}\in \mathsf{FM}(\mathfrak{h},W).\]
Note that by (4.6) we have \(\mathcal{T}_{\underline{w}}=\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}( \widetilde{\mathcal{T}}_{\underline{w}})\).
Proposition 4.8 implies that for any expressions \(\underline{v}\), \(\underline{w}\) and \(i,j\in\mathbb{Z}\) we have
\[\mathrm{Hom}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{T}_{\underline{v}}, \mathcal{T}_{\underline{w}}[i]\langle j\rangle)=0\quad\text{unless }i=0\]
(see [1, Corollary 10.6.2]). The above equality and [1, Lemma 5.2.3] imply the following statement.
**Proposition 4.11**.: _For any expressions \(\underline{v}\), \(\underline{w}\) and any \(i,j\in\mathbb{Z}\), we have_
\[\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{\underline{v}},\widetilde{\mathcal{T}}_{\underline{w}})^{i}_{j}=0\quad\text{unless}\quad i=0.\]
_Moreover \(\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{\underline{v}},\widetilde{\mathcal{T}}_{\underline{w}})^{0}_{\bullet}\) is graded free as a right \(R^{\vee}\)-module, and the morphism_
\[\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{\underline{v}},\widetilde{\mathcal{T}}_{\underline{w}})^{0}_{\bullet}\otimes_{R^{\vee}}\Bbbk\to\mathbb{H}\mathrm{om}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{T}_{\underline{v}},\mathcal{T}_{\underline{w}})^{0}_{\bullet}\]
_induced by the functor \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\) is an isomorphism. _
Proof.: The vanishing statement follows from the equality displayed above. Then [1, Lemma 5.2.3] implies that \(\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{\underline{v}},\widetilde{\mathcal{T}}_{\underline{w}})^{0}_{\bullet}\) is graded free as a right \(R^{\vee}\)-module, and that the morphism
\[\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{\underline{v}},\widetilde{\mathcal{T}}_{\underline{w}})^{0}_{\bullet}\otimes_{R^{\vee}}\Bbbk\to\mathbb{H}\mathrm{om}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{T}_{\underline{v}},\mathcal{T}_{\underline{w}})^{0}_{\bullet}\]
induced by \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\) is an isomorphism.
We define the free-monodromic tilting category \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) as the full subcategory of \(\mathsf{FM}(\mathfrak{h},W)\) whose objects are those of the form \(\widetilde{\mathcal{T}}_{\underline{w}}\langle n\rangle\) with \(\underline{w}\in\mathrm{Exp}(W)\) and \(n\in\mathbb{Z}\). We denote by \(\mathscr{T}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)\) the additive hull of \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\), and by \(\mathscr{T}(\mathfrak{h},W)\) the karoubian envelope of \(\mathscr{T}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)\). By construction, the convolution operation \(\widehat{\star}\,\) restricts to an operation
\[\widehat{\star}\,:\,\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\times\mathsf{ Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\to\mathsf{Tilt}_{\mathsf{LM}}( \mathfrak{h},W). \tag{4.11}\]
The functor \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}:\,\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\to\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\) extends to \(\mathscr{T}(\mathfrak{h},W)\) by the universal property of the karoubian envelope. By a minor abuse of notation we also denote this extension by \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\). With this definition, Proposition 4.11 extends to all objects in \(\mathscr{T}(\mathfrak{h},W)\), as follows.
**Proposition 4.12**.: _Let \(X,Y\) be objects in \(\mathscr{T}(\mathfrak{h},W)\). Then the graded vector space \(\bigoplus_{n}\mathrm{Hom}_{\mathscr{T}(\mathfrak{h},W)}(X,Y\langle n\rangle)\) is graded free as a right \(R^{\vee}\)-module, and the morphism_
\[\left(\bigoplus_{n\in\mathbb{Z}}\mathrm{Hom}_{\mathscr{T}(\mathfrak{h},W)}(X, Y\langle n\rangle)\right)\otimes_{R^{\vee}}\Bbbk\to\mathbb{H}\mathrm{om}_{ \mathsf{LM}(\mathfrak{h},W)}(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(X), \mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(Y))_{\bullet}^{0}\]
_induced by the functor \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\) is an isomorphism._
Proof.: By construction of the Karoubian envelope, \(\bigoplus_{n}\mathrm{Hom}_{\mathscr{T}(\mathfrak{h},W)}(X,Y\langle n\rangle)\) identifies with a direct summand of a similar space of morphisms in \(\mathscr{T}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)\). Hence this space is a projective graded \(R^{\vee}\)-module by Proposition 4.11 and thus is graded-free. The fact that \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\) induces an isomorphism also follows from Proposition 4.11.
The classification of the indecomposable tilting objects in \(\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\) can be "upgraded" to \(\mathscr{T}(\mathfrak{h},W)\) using Proposition 4.12, as in [1, Theorem 10.7.1].
**Theorem 4.13**.: _The category \(\mathscr{T}(\mathfrak{h},W)\) is Krull-Schmidt. For any \(w\in W\), there exists a unique (up to isomorphism) indecomposable object \(\widetilde{\mathcal{T}}_{w}\) such that_
\[\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widetilde{\mathcal{T}}_{w})= \mathcal{T}_{w}.\]
_In addition, \(\widetilde{\mathcal{T}}_{w}\) is characterized by the following properties:_
1. _for any reduced expression_ \(\underline{w}\) _expressing_ \(w\)_,_ \(\widetilde{\mathcal{T}}_{w}\) _occurs as a direct summand of_ \(\widetilde{\mathcal{T}}_{\underline{w}}\) _with multiplicity_ \(1\)_;_
2. \(\widetilde{\mathcal{T}}_{w}\) _does not occur as a direct summand of any object_ \(\widetilde{\mathcal{T}}_{\underline{v}}\langle n\rangle\) _with_ \(\underline{v}\) _an expression such that_ \(\ell(\underline{v})<\ell(w)\) _and_ \(n\in\mathbb{Z}\)_._
_Moreover, the assignment \((w,n)\mapsto\widetilde{\mathcal{T}}_{w}\langle n\rangle\) induces a bijection between \(W\times\mathbb{Z}\) and the set of isomorphism classes of indecomposable objects in \(\mathscr{T}(\mathfrak{h},W)\). _
In case \(w=1\) we have \(\widetilde{\mathcal{T}}_{1}=\widetilde{\mathcal{T}}_{\varnothing}\), and if \(\underline{w}=s\in S\) then \(\widetilde{\mathcal{T}}_{s}\) is the object denoted in this way above. In these cases the object \(\widetilde{\mathcal{T}}_{w}\) is canonical; in general, it is only defined up to isomorphism.
### The bifunctoriality assumption
As explained in SS4.5, it is not known whether the operation \(\widehat{\star}\,\) is a bifunctor on the subcategory of convolutive objects in \(\mathsf{FM}(\mathfrak{h},W)\). As explained in [1, SS7.7], it is expected that this condition holds at least on the subcategory \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\), which boils down to the fact that for
\(f:\mathcal{F}\to\mathcal{G}\), \(g:\mathcal{G}\to\mathcal{H}\), \(h:\mathcal{F}^{\prime}\to\mathcal{G}^{\prime}\) and \(k:\mathcal{G}^{\prime}\to\mathcal{H}^{\prime}\) morphisms between objects in \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) we have
\[(g\circ f)\,\widehat{\star}\,(k\circ h)=(g\,\widehat{\star}\,k)\circ(f\, \widehat{\star}\,h). \tag{4.12}\]
Below we will assume that this property holds for our given realization, which implies that \(\widehat{\star}\) induces a monoidal structure on the category \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) (and then on \(\mathscr{T}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)\) and on \(\mathscr{T}(\mathfrak{h},W)\)).
_Remark 4.14_.:
1. In case our realization is a Cartan realization of a crystallographic Coxeter group, the main result of [1, Chap. 11] states that the condition above is satisfied.
2. Using the results of SS4.8, the methods of [1, Chap. 11] also apply for a general realization of a general Coxeter group, provided for any pair \(s,t\in S\) of distinct simple reflections generating a finite subgroup of \(W\), the conditions in [1, Chap. 8] are satisfied by \(\mathfrak{h}_{|\langle s,t\rangle}\). This case covers at least, for any Coxeter system, the geometric realization and the Soergel realizations (see SS2.1), provided the condition (2) of SS2.1 is satisfied in case \((W,S)\) admits a parabolic subsystem of type \(\mathsf{H}_{3}\).
3. Matthew Hogancamp and Shotaro Makisumi have announced a proof of this condition in full generality (provided the assumptions (1)-(2)-(3) of SS2.1 are satisfied). Unfortunately, no written account of their proof is available at this point.
4. Using Proposition 4.11 one obtains that, if the condition above is satisfied, the operation (4.11) is a bifunctor, and defines on \(\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\) a structure of module category for the monoidal category \((\mathscr{T}(\mathfrak{h},W),\,\widehat{\star}\,)\).
## 5. A functor \(\mathbb{V}\)
In this section we assume that the conditions of SS2.1, SS2.3 and SS4.9 are satisfied. In addition, we assume that \(W\) is finite.
### Statement
Recall the dual realization \(\mathfrak{h}^{*}\) of \((W,S)\) considered in SS2.3. Our aim is to prove the following statement.
**Theorem 5.1**.: _There exists a canonical equivalence of monoidal additive categories_
\[\mathbb{V}:\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\xrightarrow{\sim} \mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\]
_which satisfies \(\mathbb{V}\circ\langle 1\rangle=\langle 1\rangle\circ\mathbb{V}\) and \(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})\simeq B_{\underline{w}}^{\wedge}\) for any expression \(\underline{w}\)._
We construct the functor \(\mathbb{V}\) and prove the theorem in SS5.4 and SS5.5. Before that we need some preliminaries.
### The big tilting object
Let \(w_{0}\) be the longest element in \(W\), and consider the corresponding indecomposable tilting object \(\mathcal{T}_{w_{0}}\) in \(\mathsf{Tilt}_{\mathsf{LM}}(\mathfrak{h},W)\), see Corollary 4.10. We set
\[\mathcal{P}:=\mathcal{T}_{w_{0}}\langle-\ell(w_{0})\rangle.\]
By [1, Theorem 10.3], \(\mathcal{P}\) is the projective cover of the simple object \(\mathcal{T}_{1}=\mathcal{L}_{1}\), and [1, (10.5)] implies that
\[\dim_{\Bbbk}\operatorname{Hom}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{P}, \nabla_{\!\!w}\langle m\rangle)=\begin{cases}1&\text{if }m=-\ell(w);\\ 0&\text{otherwise.}\end{cases} \tag{5.1}\]
In particular, we have
\[\mathbb{H}\mathrm{om}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{P},\mathcal{T}_{1})^{ i}_{j}=\begin{cases}\Bbbk&\text{if $i=j=0$;}\\ 0&\text{otherwise.}\end{cases} \tag{5.2}\]
We fix an indecomposable tilting object \(\widetilde{\mathcal{T}}_{w_{0}}\) in \(\mathscr{T}(\mathfrak{h},W)\) such that \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widetilde{\mathcal{T}}_{w_{0}})= \mathcal{T}_{w_{0}}\), see Theorem 4.13, and set
\[\widetilde{\mathcal{P}}:=\widetilde{\mathcal{T}}_{w_{0}}\langle-\ell(w_{0})\rangle. \tag{5.3}\]
Proposition 4.12 and (5.2) imply that we have an isomorphism of graded \(R^{\vee}\)-modules
\[\bigoplus_{n\in\mathbb{Z}}\mathrm{Hom}_{\mathscr{T}(\mathfrak{h},W)}( \widetilde{\mathcal{P}},\widetilde{\mathcal{T}}_{1}\langle n\rangle)\simeq R ^{\vee},\]
hence in particular that
\[\mathrm{dim}_{\Bbbk}(\mathrm{Hom}_{\mathscr{T}(\mathfrak{h},W)}(\widetilde{ \mathcal{P}},\widetilde{\mathcal{T}}_{1}))=1.\]
We fix from now on a nonzero morphism
\[\xi:\widetilde{\mathcal{P}}\to\widetilde{\mathcal{T}}_{1}, \tag{5.4}\]
which is automatically a generator of \(\bigoplus_{n}\mathrm{Hom}_{\mathscr{T}(\mathfrak{h},W)}(\widetilde{\mathcal{P }},\widetilde{\mathcal{T}}_{1}\langle n\rangle)\) as a right \(R^{\vee}\)-module. We also set \(\xi^{\prime}=\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\xi)\), a generator of \(\mathrm{Hom}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{P},\mathcal{T}_{1})\).
The objects \(\widetilde{\mathcal{P}}\) and \(\mathcal{P}\) are studied in [1, SSSS3.1-3.4] for Cartan realizations of (finite) crystallographic Coxeter groups. As in SS4.8, these results hold in our present setting, and their proofs can be copied from [1], replacing the references to [1] by references to [1]. Below we state the results we will use, and give sketches of proofs.
**Lemma 5.2**.: _In the abelian category \(\mathsf{P}_{\mathsf{LM}}(\mathfrak{h},W)\) we have_
\[[\mathcal{P}:\mathcal{L}_{1}\langle m\rangle]=0\]
_unless \(m\leq 0\), and moreover \([\mathcal{P}:\mathcal{L}_{1}]=1\). In particular, \(\mathrm{End}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{P})=\Bbbk\cdot\mathrm{id}\)._
Proof.: The proof is similar to that of [1, Lemma 3.1]. Namely, as \(\mathcal{P}\) is tilting it admits a standard filtration. Using the reciprocity formula (see e.g. [1, (10.1)]), for \(w\in W\) and \(n\in\mathbb{Z}\) we have
\[(\mathcal{P}:\Delta_{w}\langle n\rangle)=[\nabla_{w}\langle n\rangle: \mathcal{L}_{1}],\]
which is equal to \(1\) if \(n=-\ell(w)\) and to \(0\) otherwise by Proposition 4.5. Using again Proposition 4.5 we deduce that
\[[\mathcal{P}:\mathcal{L}_{1}\langle n\rangle]=\#\{w\in W\mid n=-2\ell(w)\},\]
which implies the statement.
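For instance, if \(W=\{1,s\}\) with \(s\in S\) (so that \(\ell(1)=0\) and \(\ell(s)=1\)), the formula above gives \([\mathcal{P}:\mathcal{L}_{1}]=[\mathcal{P}:\mathcal{L}_{1}\langle-2\rangle]=1\) and \([\mathcal{P}:\mathcal{L}_{1}\langle n\rangle]=0\) for all other \(n\in\mathbb{Z}\).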
Let \(s\in S\) and \(\widehat{\epsilon}_{s}:\widetilde{\mathcal{T}}_{s}\to\widetilde{\mathcal{T}} _{1}\langle 1\rangle\) be the morphism defined in [1, SS5.3.4]. As in [1, SS3.3], there exists a unique morphism
\[\zeta^{\prime}_{s}:\mathcal{P}\to\mathcal{T}_{s}\langle-1\rangle\]
such that \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widehat{\epsilon}_{s}\langle-1\rangle)\circ\zeta^{\prime}_{s}=\xi^{\prime}\). In turn, as in [1, Lemma 3.5], there exists a unique morphism
\[\zeta_{s}:\widetilde{\mathcal{P}}\to\widetilde{\mathcal{T}}_{s}\langle-1\rangle \tag{5.5}\]
such that \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\zeta_{s})=\zeta_{s}^{\prime}\), and this morphism satisfies
\[\widehat{\epsilon}_{s}\langle-1\rangle\circ\zeta_{s}=\xi. \tag{5.6}\]
Moreover, there is a unique isomorphism of graded \(R^{\vee}\)-bimodules
\[\gamma_{s}^{\prime}:R^{\vee}\otimes_{(R^{\vee})^{s}}R^{\vee}\langle 1\rangle \xrightarrow{\sim}\bigoplus_{m}\operatorname{Hom}_{\mathscr{T}(\mathfrak{h},W )}^{\bullet}(\widetilde{\mathcal{P}},\widetilde{\mathcal{T}}_{s}\langle-m\rangle) \tag{5.7}\]
sending \(u_{s}=1\otimes 1\) to \(\zeta_{s}\).
The following statement is the analogue in our setting of [1, Proposition 3.6], for which the same proof applies.
**Proposition 5.3**.: _The object \(\widetilde{\mathcal{P}}\) admits a canonical coalgebra structure in the monoidal category \(\mathscr{T}(\mathfrak{h},W)\), with counit \(\xi:\widetilde{\mathcal{P}}\to\widetilde{\mathcal{T}}_{1}\) and the comultiplication morphism \(\delta:\widetilde{\mathcal{P}}\to\widetilde{\mathcal{P}}\,\widehat{\star}\,\widetilde{\mathcal{P}}\) characterized as the unique morphism such that \((\xi\,\widehat{\star}\,\xi)\circ\delta=\xi\). _
### Localization
We now recall the localization procedure from [1, SS11.1]. (Once again, in [1], this construction is considered only for Cartan realizations of crystallographic Coxeter groups, but it now makes sense in our present setting.)
Recall the graded ring \(Q^{\vee}\) from SS2.2. If \(\mathcal{F},\mathcal{G}\) are objects in \(\mathsf{FM}(\mathfrak{h},W)\) we set
\[\mathbb{H}\mathrm{om}_{\mathrm{loc}}(\mathcal{F},\mathcal{G}):=\mathbb{H} \mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G})\otimes_{R^ {\vee}}Q^{\vee},\]
where we consider the _right_ action of \(R^{\vee}\) on \(\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\mathcal{F},\mathcal{G})\) from SS4.5. Let
\[\mathsf{Loc}(\mathfrak{h},W)\]
be the category whose objects are the same as those of \(\mathsf{FM}(\mathfrak{h},W)\), and such that the space of morphisms from \(\mathcal{F}\) to \(\mathcal{G}\) consists of the elements of bi-degree \((0,0)\) in \(\mathbb{H}\mathrm{om}_{\mathrm{loc}}(\mathcal{F},\mathcal{G})\). By construction there exists a canonical functor
\[\mathsf{Loc}:\mathsf{FM}(\mathfrak{h},W)\to\mathsf{Loc}(\mathfrak{h},W).\]
Below we will need to consider also the karoubian envelope \(\mathsf{FM}^{\mathrm{Kar}}(\mathfrak{h},W)\) of the category \(\mathsf{FM}(\mathfrak{h},W)\). Using the fact that the right action of \(R^{\vee}\) on morphism spaces is compatible with composition, one sees that for any \(\mathcal{F},\mathcal{G}\) in \(\mathsf{FM}^{\mathrm{Kar}}(\mathfrak{h},W)\) we have a canonical (right) action of \(R^{\vee}\) on \(\bigoplus_{n,m}\operatorname{Hom}_{\mathsf{FM}^{\mathrm{Kar}}(\mathfrak{h},W)} (\mathcal{F},\mathcal{G}\langle-m\rangle[n])\), so that one can consider the category \(\mathsf{Loc}^{\prime}(\mathfrak{h},W)\) obtained from \(\mathsf{FM}^{\mathrm{Kar}}(\mathfrak{h},W)\) by the same procedure as \(\mathsf{Loc}(\mathfrak{h},W)\) is obtained from \(\mathsf{FM}(\mathfrak{h},W)\). Then there exists a canonical fully faithful functor \(\mathsf{Loc}(\mathfrak{h},W)\to\mathsf{Loc}^{\prime}(\mathfrak{h},W)\).
Recall from [1, SS7.4] that for any \(s\in S\) we have a certain convolutive object \(\widetilde{\nabla}_{s}\) in \(\mathsf{FM}(\mathfrak{h},W)\). Then, for an expression \(\underline{w}=(s_{1},\cdots,s_{n})\) we can consider the object
\[\widetilde{\nabla}_{\underline{w}}=\widetilde{\nabla}_{s_{1}}\widehat{\star} \cdots\widehat{\star}\,\widetilde{\nabla}_{s_{n}}.\]
When \(\underline{w}=\varnothing\) we set \(\widetilde{\nabla}_{\varnothing}:=\widetilde{\mathcal{T}}_{\varnothing}\). Using [1, Lemma 7.4.2 and Corollary 11.3.2], one sees that we have isomorphisms of graded \(Q^{\vee}\)-modules
\[\mathbb{H}\mathrm{om}_{\mathrm{loc}}(\widetilde{\nabla}_{\underline{x}}, \widetilde{\nabla}_{\underline{y}})\cong\begin{cases}Q^{\vee}&\text{if }\pi( \underline{x})=\pi(\underline{y});\\ 0&\text{otherwise}\end{cases} \tag{5.8}\]
where \(\pi\) is as in SS2.1.
Consider the morphism
\[\phi_{s}:\widetilde{\mathcal{T}}_{s}\to\widetilde{\nabla}_{\varnothing}\langle 1 \rangle\oplus\widetilde{\nabla}_{s} \tag{5.9}\]
in \(\mathsf{FM}(\mathfrak{h},W)\) denoted \(\varkappa_{s}^{2}\) in [1, Proposition 11.2.1]. Given an expression \(\underline{w}=(s_{1},\cdots,s_{n})\), we set
\[\phi_{\underline{w}}:=\phi_{s_{1}}\,\widehat{\star}\,\cdots\,\widehat{\star}\, \phi_{s_{n}}:\widetilde{\mathcal{T}}_{\underline{w}}\to(\widetilde{\nabla}_{ \varnothing}\langle 1\rangle\oplus\widetilde{\nabla}_{s_{1}})\,\widehat{\star}\, \cdots\,\widehat{\star}\,(\widetilde{\nabla}_{\varnothing}\langle 1\rangle \oplus\widetilde{\nabla}_{s_{n}}). \tag{5.10}\]
With this definition, it is clear that for any expressions \(\underline{v},\underline{w}\) we have
\[\phi_{\underline{v}}\,\widehat{\star}\,\phi_{\underline{w}}=\phi_{\underline {vw}}. \tag{5.11}\]
The following statement is our version of [1, Corollary 11.2.2], which follows from the same arguments.
**Lemma 5.4**.: _For any expression \(\underline{w}\), the morphism \(\mathsf{Loc}(\phi_{\underline{w}})\) is an isomorphism. _
### Construction of the functor \(\mathbb{V}\)
Recall the functor \((-)^{\wedge}\) considered in SS2.2. For \(\mathcal{F}\in\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) we set
\[\mathbb{V}(\mathcal{F}):=\left(\bigoplus_{n\in\mathbb{Z}}\mathrm{Hom}_{ \mathscr{T}(\mathfrak{h},W)}(\widetilde{\mathcal{P}},\mathcal{F}\langle n \rangle)\right)^{\wedge}.\]
We observe that \(\mathbb{V}\circ\langle 1\rangle=\langle 1\rangle\circ\mathbb{V}\) by definition and (2.1). This defines a functor from \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) to \(R^{\wedge}\)-\(\mathrm{Mod}^{\mathbb{Z}}\)-\(R^{\wedge}\). Our goal in this subsection is to show that this functor factors through a monoidal functor from \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) to \(\mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\).
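For instance, combining this definition with the isomorphism \(\bigoplus_{n}\mathrm{Hom}_{\mathscr{T}(\mathfrak{h},W)}(\widetilde{\mathcal{P}},\widetilde{\mathcal{T}}_{1}\langle n\rangle)\simeq R^{\vee}\) obtained in SS5.2, and with the conventions of SS2.2, we obtain an identification
\[\mathbb{V}(\widetilde{\mathcal{T}}_{1})\simeq(R^{\vee})^{\wedge}=R^{\wedge},\]
which is used e.g. in SS7.3.1 below.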
If \(\underline{w}=(s_{1},\cdots,s_{n})\) is an expression, we denote by \(S_{\underline{w}}\) the multiset of subexpressions of \(\underline{w}\), i.e. expressions which can be obtained from \(\underline{w}\) by removing some letters. (The multiplicities come from the fact that the same expression can sometimes be obtained by removing letters from \(\underline{w}\) in several ways.) Then we have
\[(\widetilde{\nabla}_{\varnothing}\langle 1\rangle\oplus\widetilde{\nabla}_{s_{1}})\, \widehat{\star}\,\cdots\,\widehat{\star}\,(\widetilde{\nabla}_{\varnothing} \langle 1\rangle\oplus\widetilde{\nabla}_{s_{n}})\cong\bigoplus_{\underline{v} \in S_{\underline{w}}}\widetilde{\nabla}_{\underline{v}}\langle\ell( \underline{w})-\ell(\underline{v})\rangle.\]
For \(u\in W\), we set
\[\widetilde{\mathcal{T}}_{\underline{w}}^{u}:=\bigoplus_{\begin{subarray}{c} \underline{v}\in S_{\underline{w}}\\ \pi(\underline{v})=u\end{subarray}}\widetilde{\nabla}_{\underline{v}}\langle \ell(\underline{w})-\ell(\underline{v})\rangle.\]
Then \(\phi_{\underline{w}}\) defines a morphism
\[\widetilde{\mathcal{T}}_{\underline{w}}\to\bigoplus_{u\in W}\widetilde{ \mathcal{T}}_{\underline{w}}^{u},\]
and if \(\underline{v}\) and \(\underline{w}\) are expressions, for \(x,y\in W\) we have
\[\mathbb{H}\mathrm{om}_{\mathrm{loc}}(\widetilde{\mathcal{T}}_{\underline{v}} ^{x},\widetilde{\mathcal{T}}_{\underline{w}}^{y})=0\quad\text{unless }x=y \tag{5.12}\]
by (5.8).
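As an illustration (a direct unwinding of the definitions, not needed in what follows), if \(s,t\in S\) are distinct simple reflections and \(\underline{w}=(s,t)\), then \(S_{\underline{w}}\) consists of \((s,t)\), \((s)\), \((t)\) and \(\varnothing\), each with multiplicity \(1\), so that
\[\widetilde{\mathcal{T}}_{(s,t)}^{st}=\widetilde{\nabla}_{(s,t)},\qquad\widetilde{\mathcal{T}}_{(s,t)}^{s}=\widetilde{\nabla}_{(s)}\langle 1\rangle,\qquad\widetilde{\mathcal{T}}_{(s,t)}^{t}=\widetilde{\nabla}_{(t)}\langle 1\rangle,\qquad\widetilde{\mathcal{T}}_{(s,t)}^{1}=\widetilde{\nabla}_{\varnothing}\langle 2\rangle,\]
and \(\widetilde{\mathcal{T}}_{(s,t)}^{u}=0\) for all other \(u\in W\).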
If \(\underline{w}\) is an expression and \(u\in W\), we consider the \((R^{\wedge},Q^{\wedge})\)-bimodule
\[\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})_{Q^{\wedge}}^{u}:=\left(\bigoplus_{n\in\mathbb{Z}}\mathrm{Hom}_{\mathsf{FM}^{\mathrm{Kar}}(\mathfrak{h},W)}(\widetilde{\mathcal{P}},\widetilde{\mathcal{T}}_{\underline{w}}^{u}\langle n\rangle)\right)^{\wedge}\otimes_{R^{\wedge}}Q^{\wedge},\]
and the morphism of graded \((R^{\wedge},Q^{\wedge})\)-bimodules
\[\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})}:\mathbb{V}( \widetilde{\mathcal{T}}_{\underline{w}})\otimes_{R^{\wedge}}Q^{\wedge}\to \bigoplus_{u\in W}\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})_{Q^{ \wedge}}^{u}\]
induced by the assignment \(g\mapsto\phi_{\underline{w}}\circ g\) for \(g\in\mathrm{Hom}_{\mathscr{T}(\mathfrak{h},W)}(\widetilde{\mathcal{P}}, \widetilde{\mathcal{T}}_{\underline{w}}\langle n\rangle)\). It follows from Lemma 5.4 that \(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})}\) is an isomorphism.
Recall the definition of the category \(\mathscr{C}(\mathfrak{h}^{*},W)\) in SS3.2.
**Lemma 5.5**.:
1. _For any expression_ \(\underline{w}\)_, the triple_ \[\left(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}}),(\mathbb{V}( \widetilde{\mathcal{T}}_{\underline{w}})^{u}_{Q^{\wedge}})_{u\in W},\xi_{ \mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})}\right)\] _is an object of_ \(\mathscr{C}(\mathfrak{h}^{*},W)\)_._
2. _For any expressions_ \(\underline{v},\underline{w}\) _and any_ \(\varphi\in\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{\underline{v}},\widetilde{\mathcal{T}}_{\underline{w}})\)_,_ \(\mathbb{V}(\varphi)\) _is a morphism in_ \(\mathscr{C}(\mathfrak{h}^{*},W)\) _from the triple_ \((\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}}),(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})^{u}_{Q^{\wedge}})_{u\in W},\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})})\) _to the triple_ \((\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}}),(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})^{u}_{Q^{\wedge}})_{u\in W},\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})})\)_._
Proof.: (1) Let \(\underline{w}\) be an expression, and let \(x\in R^{\vee}\). For any expression \(\underline{y}\) and any \(g\in\mathbb{H}\mathrm{om}_{\mathsf{FM}^{\mathrm{Kar}}(\mathfrak{h},W)}(\widetilde{\mathcal{P}},\widetilde{\nabla}_{\underline{y}})\) we have
\[x\,\widehat{\star}\,g=g\circ\mu_{\widetilde{\nabla}_{\underline{y}}}(x)=g\,\widehat{\star}\,\pi(\underline{y})^{-1}(x)\]
by [1, Lemma 7.4.1]. Therefore the left and right \(R^{\wedge}\)-action on \(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})^{u}_{Q^{\wedge}}\) satisfy the compatibility property in the definition of \(\mathscr{C}(\mathfrak{h}^{*},W)\) for all \(u\in W\). Hence our triple is an object of \(\mathscr{C}(\mathfrak{h}^{*},W)\), as desired.
(2) Fix \(\underline{v},\underline{w}\) and \(\varphi\) as in the statement. What we have to prove is that for any \(u\in W\) and any \(\widetilde{f}\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})^{u}_{Q^{\wedge}}\) the element
\[h:=\left(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})}\circ(\mathbb{V}(\varphi)\otimes_{R^{\wedge}}1)\circ(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})})^{-1}\right)(\widetilde{f})\]
belongs to \(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})^{u}_{Q^{\wedge}}\).
First, let us assume that \(\widetilde{f}=\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})}(f)\) for some \(f\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})\). In this case, \(h=\phi_{\underline{w}}\circ\varphi\circ f\) in \(\mathsf{Loc}^{\prime}(\mathfrak{h},W)\). Since \(\phi_{\underline{v}}\) is an isomorphism in this category, we have
\[h=(\phi_{\underline{w}}\circ\varphi\circ\phi_{\underline{v}}^{-1})\circ(\phi_{\underline{v}}\circ f)=(\phi_{\underline{w}}\circ\varphi\circ\phi_{\underline{v}}^{-1})\circ\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})}(f)\]
and hence \(h\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})^{u}_{Q^{\wedge}}\) by our assumption on \(\widetilde{f}\) and (5.12), see Figure 1.
Now we consider the general case. There exist \(a\in R^{\wedge}\) and \(f\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})\) such that \(\widetilde{f}=\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})}(f\frac{1}{a})\). Then \(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})}(f)=\widetilde{f}\cdot a\) belongs to \(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})^{u}_{Q^{\wedge}}\) since \(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})}\) is a morphism of right \(Q^{\wedge}\)-modules, hence \(ha\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})^{u}_{Q^{\wedge}}\) by the case treated above, which implies that \(h\) also belongs to \(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})^{u}_{Q^{\wedge}}\).
Lemma 5.5 shows that the functor \(\mathbb{V}\) factors through a functor
\[\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\to\mathscr{C}(\mathfrak{h}^{*},W),\]
which will be denoted similarly. We will next show that this functor can be endowed with a monoidal structure. For that, we introduce the morphism
\[\gamma_{\varnothing}:B^{\wedge}_{\varnothing}=R^{\wedge}\to\mathbb{V}( \widetilde{\mathcal{T}}_{\varnothing})=\mathbb{V}(\widetilde{\mathcal{T}}_{ \underline{1}}) \tag{5.13}\]
given by the assignment \(x\mapsto\xi\,\widehat{\star}\,x\) (where \(\xi\) is as in (5.4)), and the bifunctorial morphism
\[\beta_{\mathcal{F},\mathcal{G}}:\mathbb{V}(\mathcal{F})\otimes_{R^{\wedge}} \mathbb{V}(\mathcal{G})\to\mathbb{V}(\mathcal{F}\,\widehat{\star}\,\mathcal{G})\]
defined by \(\beta_{\mathcal{F},\mathcal{G}}(f\otimes g)=(f\,\widehat{\star}\,g)\circ\delta \in\mathbb{V}(\mathcal{F}\,\widehat{\star}\,\mathcal{G})_{m+n}\) for all \(f\in\mathbb{V}(\mathcal{F})_{m}\) and \(g\in\mathbb{V}(\mathcal{G})_{n}\) (where \(\delta\) is as in Proposition 5.3).
**Proposition 5.6**.: _The triple \((\mathbb{V},\beta,\gamma)\) is a monoidal functor from \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) to \(\mathscr{C}(\mathfrak{h}^{*},W)\)._
Proof.: We first observe that the proof of [1, Proposition 3.7] shows that \((\mathbb{V},\beta,\gamma)\) is a monoidal functor from \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) to \(R^{\wedge}\)-\(\mathrm{Mod}^{\mathbb{Z}}\)-\(R^{\wedge}\). In fact, all the ingredients of this proof have been repeated above, except for [1, Lemma 2.6], which can be proved by the same considerations using Proposition 4.5 instead of [1, Lemma 4.9].
It is clear also that \(\gamma\) is a morphism in \(\mathscr{C}(\mathfrak{h}^{*},W)\), so all that remains to be justified is that \(\beta_{\mathcal{F},\mathcal{G}}\) is a morphism in \(\mathscr{C}(\mathfrak{h}^{*},W)\) for any \(\mathcal{F},\mathcal{G}\in\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\), i.e. that for any expressions \(\underline{v},\underline{w}\) the morphism
\[\beta_{\underline{v},\underline{w}}:=\beta_{\widetilde{\mathcal{T}}_{ \underline{v}},\widetilde{\mathcal{T}}_{\underline{w}}}:\mathbb{V}( \widetilde{\mathcal{T}}_{\underline{v}})\otimes_{R^{\wedge}}\mathbb{V}( \widetilde{\mathcal{T}}_{\underline{w}})\to\mathbb{V}(\widetilde{\mathcal{T}} _{\underline{v}}\,\widehat{\star}\,\widetilde{\mathcal{T}}_{\underline{w}})\]
is a morphism in \(\mathscr{C}(\mathfrak{h}^{*},W)\). Let \(x,y\in W\), \(\widetilde{f}\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})^{x}_{Q^{\wedge}}\) and \(\widetilde{g}\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})^{y}_{Q^{\wedge}}\). What we have to prove is that the element
\[h:=\left(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}\underline{w}})}\circ(\beta_{\underline{v},\underline{w}}\otimes_{R^{\wedge}}1)\circ(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})}^{-1}\otimes_{Q^{\wedge}}\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})}^{-1})\right)(\widetilde{f}\otimes_{Q^{\wedge}}\widetilde{g})\]
belongs to \(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}\underline{w}})^{xy}_{Q^{\wedge}}\). As in the proof of Lemma 5.5(2), we can assume that there exist \(f\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})\) and \(g\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})\) such that \(\widetilde{f}=\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})}(f)\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})^{x}_{Q^{\wedge}}\) and \(\widetilde{g}=\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})}(g)\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})^{y}_{Q^{\wedge}}\). Thus, in \(\mathsf{Loc}^{\prime}(\mathfrak{h},W)\) we have
\[h=\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}\underline{w}})}\left((\beta_{\underline{v},\underline{w}}\otimes_{R^{\wedge}}1)((f\otimes_{R^{\wedge}}1)\otimes_{Q^{\wedge}}(g\otimes_{R^{\wedge}}1))\right)\stackrel{{(a)}}{{=}}\phi_{\underline{v}\underline{w}}\circ(f\,\widehat{\star}\,g)\circ\delta\] \[\stackrel{{(b)}}{{=}}(\phi_{\underline{v}}\,\widehat{\star}\,\phi_{\underline{w}})\circ(f\,\widehat{\star}\,g)\circ\delta\] \[\stackrel{{(c)}}{{=}}\bigl((\phi_{\underline{v}}\circ f)\,\widehat{\star}\,(\phi_{\underline{w}}\circ g)\bigr)\circ\delta\] \[\stackrel{{(d)}}{{=}}(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})}(f)\,\widehat{\star}\,\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})}(g))\circ\delta.\]
Here:
* \((a)\) and \((d)\) follow from the definitions of \(\xi_{\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})}\) and \(\beta\);
* \((b)\) follows from (5.11);
* \((c)\) follows from the interchange law, see SS4.9.
Therefore \(h\in\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}\underline{w}})^{xy}_{Q^{\wedge}}\) by our assumptions and (5.12), see Figure 2.
Recall the morphism \(\gamma^{\prime}_{s}\) defined in (5.7).
**Lemma 5.7**.: _For any \(s\in S\), the morphism \(\gamma_{s}:=(\gamma^{\prime}_{s})^{\wedge}\) defines an isomorphism \(B^{\wedge}_{s}\xrightarrow{\sim}\mathbb{V}(\widetilde{\mathcal{T}}_{s})\) in \(\mathscr{C}(\mathfrak{h}^{*},W)\)._
Proof.: We already know that \(\gamma_{s}\) is an isomorphism of graded \(R^{\wedge}\)-bimodules, so we only have to show that it defines a morphism from \(B^{\wedge}_{s}\) to \(\mathbb{V}(\widetilde{\mathcal{T}}_{s})\) in \(\mathscr{C}(\mathfrak{h}^{*},W)\). However, the bimodules \((B^{\wedge}_{s})^{u}_{Q^{\wedge}}\) and \((\mathbb{V}(\widetilde{\mathcal{T}}_{s}))^{u}_{Q^{\wedge}}\) vanish unless \(u\in\{1,s\}\), and identify with the subset of elements \(m\) which satisfy \(m\cdot x=u(x)\cdot m\) for any \(x\in R^{\wedge}\) if \(u\in\{1,s\}\) (see the comments at the beginning of [A1, SS2.4]), so that the forgetful functor induces an isomorphism
\[\operatorname{Hom}_{\mathscr{C}(\mathfrak{h}^{*},W)}(B^{\wedge}_{s},\mathbb{V}(\widetilde{\mathcal{T}}_{s}))\xrightarrow{\sim}\operatorname{Hom}_{R^{\wedge}\text{-}\operatorname{Mod}^{\mathbb{Z}}\text{-}R^{\wedge}}(B^{\wedge}_{s},\mathbb{V}(\widetilde{\mathcal{T}}_{s})).\]
Hence the desired property is automatically satisfied.
Using Proposition 5.6 and Lemma 5.7 we obtain, for any nonempty expression \(\underline{w}\), a canonical isomorphism
\[\gamma_{\underline{w}}:B^{\wedge}_{\underline{w}}\xrightarrow{\sim}\mathbb{V }(\widetilde{\mathcal{T}}_{\underline{w}}) \tag{5.14}\]
in \(\mathscr{C}(\mathfrak{h}^{*},W)\). (For the empty expression, such an isomorphism was already constructed in (5.13).)
We have therefore proved that the functor \(\mathbb{V}\) defines a monoidal functor
\[\mathbb{V}:\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\to\mathscr{A}_{\mathrm{ BS}}(\mathfrak{h}^{*},W)\]
such that \(\mathbb{V}\circ\langle 1\rangle=\langle 1\rangle\circ\mathbb{V}\), and canonical isomorphisms \(\gamma_{\underline{w}}:B^{\wedge}_{\underline{w}}\xrightarrow{\sim}\mathbb{V }(\widetilde{\mathcal{T}}_{\underline{w}})\) for all expressions \(\underline{w}\). To finish the proof of Theorem 5.1, it only remains to prove that this functor is fully faithful, which will be done in the next subsection.
### Invertibility of \(\mathbb{V}\)
We start by comparing the graded ranks of morphisms in the source and target categories of \(\mathbb{V}\).
**Lemma 5.8**.: _For any expressions \(\underline{v},\underline{w}\) we have_
\[\operatorname{grk}_{R^{\wedge}}\mathbb{H}\mathrm{om}_{\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{\underline{v}},\widetilde{\mathcal{T}}_{\underline{w}})^{\wedge}=\operatorname{grk}_{R^{\wedge}}\mathrm{Hom}^{\bullet}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)}(B^{\wedge}_{\underline{v}},B^{\wedge}_{\underline{w}}).\]
Proof.: As in [AMRW2, SS2.7], we consider the Hecke algebra \(\mathcal{H}_{(W,S)}\) of \((W,S)\), its "standard" basis \((H_{w}:w\in W)\) (as a \(\mathbb{Z}[v,v^{-1}]\)-module) and the \(\mathbb{Z}[v,v^{-1}]\)-bilinear pairing \(\langle-,-\rangle\) on \(\mathcal{H}_{(W,S)}\) which satisfies \(\langle H_{x},H_{y}\rangle=\delta_{x,y}\), for \(x,y\in W\). For \(s\in S\) we set \(\underline{H}_{s}=H_{s}+v\), and if \(\underline{w}=(s_{1},\cdots,s_{n})\) is an expression we set
\[\underline{H}_{\underline{w}}=\underline{H}_{s_{1}}\cdots\underline{H}_{s_{n}}.\]
The expansion of this element in the basis \((H_{w}:w\in W)\) will be denoted
\[\underline{H}_{\underline{w}}=\sum_{x\in W}p_{\underline{w}}^{x}(v)H_{x}.\]
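As a sanity check, assume (for this illustration only; the normalization is not recalled above) that the quadratic relation of \(\mathcal{H}_{(W,S)}\) reads \(H_{s}^{2}=(v^{-1}-v)H_{s}+1\). Then for a single \(s\in S\) one computes
\[\underline{H}_{s}\underline{H}_{s}=(H_{s}+v)^{2}=(v^{-1}+v)H_{s}+(1+v^{2})=(v+v^{-1})\underline{H}_{s},\]
so that \(p_{(s,s)}^{s}(v)=v+v^{-1}\) and \(p_{(s,s)}^{1}(v)=1+v^{2}\).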
By [1, Lemma 2.8] (which holds in our setting, with the same proof), for any expressions \(\underline{v},\underline{w}\) we have
\[\langle\underline{H}_{\underline{v}},\underline{H}_{\underline{w}}\rangle= \operatorname{grk}_{\Bbbk}\mathbb{H}\mathrm{om}_{\mathtt{LM}(\mathfrak{h},W)}( \mathcal{T}_{\underline{v}},\mathcal{T}_{\underline{w}})=\operatorname{grk}_{ R^{\vee}}\mathbb{H}\mathrm{om}_{\mathtt{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{ \underline{v}},\widetilde{\mathcal{T}}_{\underline{w}}) \tag{5.15}\]
where the second equality is due to Proposition 4.11. On the other hand, by [1, Corollary 4.7] we have
\[\overline{\langle\underline{H}_{\underline{v}},\underline{H}_{\underline{w}}\rangle}=\overline{\sum_{x\in W}p_{\underline{v}}^{x}(v)p_{\underline{w}}^{x}(v)}=\operatorname{grk}_{R^{\wedge}}\mathrm{Hom}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)}^{\bullet}(B_{\underline{v}}^{\wedge},B_{\underline{w}}^{\wedge}).\]
This proves the lemma, in view of (2.2).
Let us now denote by \(R^{\vee}\)-\(\operatorname{Mod}^{\mathbb{Z}}\), resp. \(R^{\wedge}\)-\(\operatorname{Mod}^{\mathbb{Z}}\), the category of graded left \(R^{\vee}\)-modules, resp. \(R^{\wedge}\)-modules. Then as for bimodules we have a natural equivalence of categories
\[(-)^{\wedge}:R^{\vee}\text{-}\operatorname{Mod}^{\mathbb{Z}}\to R^{\wedge} \text{-}\operatorname{Mod}^{\mathbb{Z}}\]
which replaces the grading by the opposite grading. We consider the functor
\[\mathbb{V}^{\prime}:\mathsf{Tilt}_{\mathtt{LM}}(\mathfrak{h},W)\to R^{\wedge} \text{-}\operatorname{Mod}^{\mathbb{Z}} \tag{5.16}\]
defined by
\[\mathbb{V}^{\prime}(\mathcal{F})=\mathbb{H}\mathrm{om}_{\mathtt{LM}(\mathfrak{ h},W)}(\mathcal{P},\mathcal{F})^{\wedge}.\]
Then by Proposition 4.12 we have a commutative diagram
The following lemma is an analogue of [1, Lemma 3.9], which follows from the same arguments using Proposition 4.5 instead of [1, Lemma 4.9].
**Lemma 5.9**.: _The functor \(\mathbb{V}^{\prime}\) is faithful. _
We are now ready to complete the proof of Theorem 5.1.
Proof of Theorem 5.1.: As explained at the end of SS5.4, it only remains to prove that \(\mathbb{V}\) is fully faithful. First we show that \(\mathbb{V}\) is faithful. Let \(\underline{w}\) and \(\underline{v}\) be expressions. Fix a finite family of \(n\) homogeneous generators of the \(R^{\vee}\)-bimodule \(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})\); by Proposition 4.12 this also provides a family of homogeneous generators of the left \(R^{\vee}\)-module \(\mathbb{V}^{\prime}(\mathcal{T}_{\underline{v}})\). Consider the commutative diagram
where:
* the left horizontal arrows are induced by the functors \(\mathbb{V}\) and \(\mathbb{V}^{\prime}\);
* the right horizontal arrows are induced by our choice of generators of \(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}})\) and \(\mathbb{V}^{\prime}(\mathcal{T}_{\underline{v}})\);
* the vertical arrows are induced by the functors \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\) and \((-)\otimes_{R^{\wedge}}\Bbbk\).
Proposition 4.12 shows that in the leftmost column and in the rightmost column, the \(\Bbbk\)-vector space on the bottom line is obtained from the free right \(R^{\vee}\)-module on the upper line by applying the functor \((-)\otimes_{R^{\wedge}}\Bbbk\). The lower composition is injective by Lemma 5.9 and the fact that our family generates \(\mathbb{V}^{\prime}(\mathcal{T}_{\underline{v}})\). By Lemma 2.2, it follows that the upper composition is also injective, hence so is the upper left morphism, showing that \(\mathbb{V}\) is faithful.
Now, for any \(m\in\mathbb{Z}\), the functor \(\mathbb{V}\) induces a morphism between the homogeneous components of degree \(-m\)
\[\left(\mathbb{H}\mathrm{om}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{ T}}_{\underline{v}},\widetilde{\mathcal{T}}_{\underline{w}})^{\wedge} \right)_{m}\quad\text{and}\quad\mathrm{Hom}_{\mathscr{A}_{\mathrm{BS}}( \mathfrak{h}^{*},W)}(\mathbb{V}(\widetilde{\mathcal{T}}_{\underline{v}}), \mathbb{V}(\widetilde{\mathcal{T}}_{\underline{w}})\langle m\rangle)\]
which is injective as \(\mathbb{V}\) is faithful. By Lemma 5.8, these vector spaces have the same finite dimension. Therefore this morphism is an isomorphism, proving that \(\mathbb{V}\) is fully faithful.
_Remark 5.10_.: Our proof of full faithfulness above is less direct than that of the corresponding claim in [1]. This is due to the fact that there is no theory of "Soergel modules" in the setting of [1].
## 6. Construction of the \(2m_{st}\)-valent morphisms
From now on we drop the assumption that \(W\) is finite. We therefore consider an arbitrary Coxeter system \((W,S)\) and a realization \(\mathfrak{h}\) of \((W,S)\) which satisfies the conditions of SS2.1, SS2.3 and SS4.9.
### Overview
Given a pair \(s,t\) of distinct simple reflections such that the product \(st\) has finite order \(m_{s,t}\), we will denote by \(w(s,t)\) the word \((s,t,\cdots)\) of length \(m_{s,t}\) where the letters alternate between \(s\) and \(t\). Note that the relation \(\pi(w(s,t))=\pi(w(t,s))\) in \(W\) is precisely the braid relation associated with \(s\) and \(t\). Our goal in this section is to construct, for any such pair \(s,t\), a canonical morphism
\[f_{s,t}:\widetilde{\mathcal{T}}_{w(s,t)}\to\widetilde{\mathcal{T}}_{w(t,s)}\]
which will eventually be the image under Koszul duality of the \(2m_{s,t}\)-valent morphism associated with \((s,t)\) in the Hecke category.
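For example, if \(m_{s,t}=3\) then \(w(s,t)=(s,t,s)\) and \(w(t,s)=(t,s,t)\), the equality \(\pi(w(s,t))=sts=tst=\pi(w(t,s))\) is the usual braid relation, and \(f_{s,t}\) will be a morphism \(\widetilde{\mathcal{T}}_{(s,t,s)}\to\widetilde{\mathcal{T}}_{(t,s,t)}\).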
We will proceed as follows. By Abe's theory there exists a unique morphism
\[\varphi_{s,t}:B^{\wedge}_{w(s,t)}\to B^{\wedge}_{w(t,s)}\]
in \(\mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\) such that
\[\varphi_{s,t}(u_{w(s,t)})=u_{w(t,s)}, \tag{6.1}\]
see [1, Theorem 3.9]. (More specifically, this reference states the existence of such a morphism. Unicity follows from [1, Theorem 4.6] and the computations in [1, SS4.1].) Now, choose a subset \(S^{\prime}\subset S\) which contains \(s\) and \(t\) and generates a finite subgroup \(W^{\prime}\) of \(W\). Choose also the data that allow one to define a functor \(\mathbb{V}_{W^{\prime}}\) as in SS5.4 and its monoidal structure, for the Coxeter system \((W^{\prime},S^{\prime})\) and its realization \(\mathfrak{h}_{|W^{\prime}}\) (see (5.3)-(5.4)). Then the category \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h}_{|W^{\prime}},W^{\prime})\) identifies in the natural way with a full subcategory of \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\), in such a way that the objects
\(\widetilde{\mathcal{T}}_{w(s,t)}\) and \(\widetilde{\mathcal{T}}_{w(t,s)}\) in these two categories coincide. By full faithfulness of \(\mathbb{V}_{W^{\prime}}\) (see Theorem 5.1) there exists a unique morphism \(f_{s,t}\) as above such that
\[\mathbb{V}_{W^{\prime}}(f_{s,t})=\gamma_{w(t,s)}\circ\varphi_{s,t}\circ(\gamma_ {w(s,t)})^{-1}, \tag{6.2}\]
which provides the desired morphism.
Obviously there is a problem with this definition, since it might depend on the extra data we have introduced: the subset \(S^{\prime}\), and the data involved in the definition of \(\mathbb{V}_{W^{\prime}}\). Our goal is exactly to prove that, in fact, it is not the case. Note that we could have solved the problem of the dependence on the choice of \(S^{\prime}\) by setting \(S^{\prime}=\{s,t\}\); however, later we will need to use that (6.2) holds also for some other choices of \(S^{\prime}\).
### Proof of independence
Our goal is therefore to prove the following claim.
**Proposition 6.1**.: _For any pair \((s,t)\) of simple reflections generating a finite subgroup of \(W\), there exists a morphism_
\[f_{s,t}:\widetilde{\mathcal{T}}_{w(s,t)}\to\widetilde{\mathcal{T}}_{w(t,s)}\]
_which satisfies the following property. For any subset \(S^{\prime}\subset S\) containing \(s\) and \(t\) and generating a finite subgroup \(W^{\prime}\) of \(W\), and for any choices (5.3)-(5.4) allowing one to define a monoidal functor \(\mathbb{V}_{W^{\prime}}\) as in SS5.4 for the Coxeter system \((W^{\prime},S^{\prime})\) and its realization \(\mathfrak{h}_{|W^{\prime}}\), the equality (6.2) holds._
The proof of Proposition 6.1 will use the following preliminary result. Recall, for any \(s\in S\), the morphism
\[\widehat{\epsilon}_{s}:\widetilde{\mathcal{T}}_{s}\to\widetilde{\mathcal{T}} _{1}\langle 1\rangle \tag{6.3}\]
in \(\mathscr{T}_{\rm BS}(\mathfrak{h},W)\) constructed in [1, SS5.3.4]. For an expression \(\underline{w}=(s_{1},\cdots,s_{r})\) we set
\[\widehat{\epsilon}_{\underline{w}}:=\widehat{\epsilon}_{s_{1}}\,\widehat{ \star}\,\cdots\widehat{\star}\widehat{\epsilon}_{s_{r}}:\widetilde{\mathcal{T }}_{\underline{w}}\to\widetilde{\mathcal{T}}_{1}\langle\ell(\underline{w})\rangle. \tag{6.4}\]
**Lemma 6.2**.: _Let \((s,t)\) be a pair of simple reflections generating a finite subgroup of \(W\). The \(\Bbbk\)-vector space_
\[\operatorname{Hom}_{\mathtt{LM}(\mathfrak{h},W)}(\mathcal{T}_{w(s,t)}, \mathcal{T}_{1}\langle\ell(w(s,t))\rangle)\]
_is \(1\)-dimensional, and is spanned by \(\mathsf{For}_{\mathtt{LM}}^{\mathtt{FM}}(\widehat{\epsilon}_{w(s,t)})\). Moreover, for any subset \(S^{\prime}\subset S\) generating a finite subgroup \(W^{\prime}\) of \(W\), and for any data (5.3)-(5.4) allowing one to define a monoidal functor \(\mathbb{V}_{W^{\prime}}\) as in SS5.4, if \(f_{s,t}\) is the unique morphism which satisfies (6.2) we have_
\[\mathsf{For}_{\mathtt{LM}}^{\mathtt{FM}}(\widehat{\epsilon}_{w(s,t)})=\mathsf{ For}_{\mathtt{LM}}^{\mathtt{FM}}(\widehat{\epsilon}_{w(t,s)}\circ f_{s,t}).\]
Proof.: Keeping the notation introduced in the proof of Lemma 5.8, by (5.15) we have
\[p^{1}_{w(s,t)}(v)=\langle\underline{H}_{w(s,t)},\underline{H}_{1}\rangle=\sum _{n\in\mathbb{Z}}\dim_{\Bbbk}\left(\operatorname{Hom}_{\mathtt{LM}(\mathfrak{ h},W)}(\mathcal{T}_{w(s,t)},\mathcal{T}_{1}\langle n\rangle)\right)v^{n}.\]
(In Section 5 it is assumed that \(W\) is finite, but this condition is not required for this specific statement to hold.) On the other hand, using the formula in [1, Lemma 2.7] one sees that the highest monomial appearing in \(p^{1}_{w(s,t)}(v)\) is \(v^{\ell(w(s,t))}\), and that its coefficient is \(1\); we deduce that \(\dim_{\Bbbk}\operatorname{Hom}_{\mathtt{LM}(\mathfrak{h},W)}(\mathcal{T}_{w(s,t)},\mathcal{T}_{1}\langle\ell(w(s,t))\rangle)=1\). The argument in the proof of [1, Lemma 4.4] shows that \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widehat{\epsilon}_{w(s,t)})\neq 0\); this morphism is therefore a generator of \(\mathrm{Hom}_{\mathsf{LM}(\mathfrak{h},W)}(\mathcal{T}_{w(s,t)},\mathcal{T}_{1}\langle\ell(w(s,t))\rangle)\).
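(For instance, if \(m_{s,t}=2\) then \(\underline{H}_{w(s,t)}=\underline{H}_{s}\underline{H}_{t}=H_{st}+vH_{s}+vH_{t}+v^{2}\), so that \(p^{1}_{w(s,t)}(v)=v^{2}=v^{\ell(w(s,t))}\), in agreement with the above.)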
Let us now fix data as in the third sentence of the lemma, and consider the corresponding morphism \(f_{s,t}\). What we have shown above implies that there exists \(a\in\Bbbk\) such that
\[\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widehat{\epsilon}_{w(t,s)}\circ f_{s,t})=a\cdot\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widehat{\epsilon}_{w(s,t) }), \tag{6.5}\]
and what we have to prove is that \(a=1\). For this, we will describe some morphisms in different ways and then compare them.
We first compute the morphism
\[\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)}\circ f_{s,t})\left( \gamma_{w(s,t)}(u_{w(s,t)})\right)\in\mathbb{V}_{W^{\prime}}(\widetilde{ \mathcal{T}}_{1})=\mathrm{Hom}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{ \mathcal{P}}_{W^{\prime}},\widetilde{\mathcal{T}}_{1}),\]
where \(\widetilde{\mathcal{P}}_{W^{\prime}}\) is the object (5.3) we have chosen and, for any \(\underline{w}\in\mathrm{Exp}(W^{\prime})\), \(\gamma_{\underline{w}}\) is the isomorphism (5.14) obtained from our choice of morphism (5.4). Explicitly, we note that
\[\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)}\circ f_{s,t })\circ\gamma_{w(s,t)}=\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)}) \circ\mathbb{V}_{W^{\prime}}(f_{s,t})\circ\gamma_{w(s,t)}\\ =\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)})\circ\gamma _{w(t,s)}\circ\varphi_{s,t}\]
by (6.2). Now we have
\[(\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)})\circ\gamma_{w(t,s)} \circ\varphi_{s,t})(u_{w(s,t)})=(\widehat{\epsilon}_{w(t,s)}\langle-\ell(w(t, s))\rangle)\circ(\gamma_{w(t,s)}(u_{w(t,s)})).\]
By definition, the morphism
\[\gamma_{w(t,s)}(u_{w(t,s)}):\widetilde{\mathcal{P}}_{W^{\prime}}\to \widetilde{\mathcal{T}}_{w(t,s)}\]
is the morphism
\[(\zeta_{t}\,\widehat{\star}\,\zeta_{s}\,\widehat{\star}\,\cdots)\circ\delta ^{\ell(w(s,t))-1}\]
where we use the notation from (5.5) and where, for any \(n\geq 1\), the morphism
\[\delta^{n}:\widetilde{\mathcal{P}}_{W^{\prime}}\to\underbrace{\widetilde{ \mathcal{P}}_{W^{\prime}}\,\widehat{\star}\,\cdots\,\widehat{\star}\, \widetilde{\mathcal{P}}_{W^{\prime}}}_{n+1\text{ times}}\]
is the \(n\)-th comultiplication morphism for \(\widetilde{\mathcal{P}}_{W^{\prime}}\), see Proposition 5.3. By definition of \(\widehat{\epsilon}_{w(t,s)}\) (see (6.4)), we deduce that
\[(\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)})\circ\gamma _{w(t,s)}\circ\varphi_{s,t})(u_{w(s,t)})\\ =(\widehat{\epsilon}_{t}\langle-1\rangle\,\widehat{\star}\, \widehat{\epsilon}_{s}\langle-1\rangle\,\widehat{\star}\,\cdots)\circ(\zeta_ {t}\,\widehat{\star}\,\zeta_{s}\,\widehat{\star}\,\cdots)\circ\delta^{\ell(w(s,t))-1}\\ =((\widehat{\epsilon}_{t}\circ\zeta_{t}\langle-1\rangle)\, \widehat{\star}\,(\widehat{\epsilon}_{s}\circ\zeta_{s}\langle-1\rangle)\, \widehat{\star}\,\cdots)\circ\delta^{\ell(w(s,t))-1}\]
by the exchange law (4.12). Using (5.6) and the axioms of a coalgebra, we finally deduce that
\[(\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)})\circ\gamma_{w(t,s)} \circ\varphi_{s,t})(u_{w(s,t)})=\xi,\]
hence that
\[\left(\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)}\circ f_{s,t})\circ \gamma_{w(s,t)}\right)(u_{w(s,t)})=\xi.\]
Similar considerations show that \(\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(s,t)})\left(\gamma_{w(s,t)}(u_{w(s,t)})\right)=\xi\), which finally shows that
\[\xi=\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(t,s)}\circ f_{s,t})\left(\gamma_{w(s,t)}(u_{w(s,t)})\right)=\mathbb{V}_{W^{\prime}}(\widehat{\epsilon}_{w(s,t)})\left(\gamma_{w(s,t)}(u_{w(s,t)})\right). \tag{6.6}\]
Using (6.5), we deduce that
\[\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\xi)=\mathsf{For}_{\mathsf{LM}}^{ \mathsf{FM}}\left(\widehat{\epsilon}_{w(t,s)}\circ f_{s,t}\circ(\gamma_{w(s,t)}( u_{w(s,t)}))\right)\]
\[=\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\left(\widehat{\epsilon}_{w(t,s)} \circ f_{s,t}\right)\circ\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\left(\gamma_{w (s,t)}(u_{w(s,t)})\right)\] \[=a\cdot\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\left(\widehat{ \epsilon}_{w(s,t)}\right)\circ\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\left( \gamma_{w(s,t)}(u_{w(s,t)})\right)\] \[=a\cdot\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}\left(\widehat{ \epsilon}_{w(s,t)}\circ\gamma_{w(s,t)}(u_{w(s,t)})\right)=a\cdot\mathsf{For}_ {\mathsf{LM}}^{\mathsf{FM}}(\xi).\]
Since \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\xi)\neq 0\) (see the comments following (5.4)) this implies that \(a=1\), which finishes the proof.
_Remark 6.3_.: The equalities in (6.6) and the fact that \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\xi)\neq 0\) also imply that \(\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widehat{\epsilon}_{w(s,t)})\neq 0\).
Proof of Proposition 6.1.: What we have to prove is that if \(f_{s,t}\) and \(f_{s,t}^{\prime}\) are two morphisms constructed as in Lemma 6.2 then \(f_{s,t}=f_{s,t}^{\prime}\). Now the \(\Bbbk\)-vector space \(\operatorname{Hom}_{\mathsf{FM}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{w(s,t)},\widetilde{\mathcal{T}}_{w(t,s)})\) is \(1\)-dimensional, e.g. because it identifies with the space \(\operatorname{Hom}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)}(B_{w(s,t)}^{\wedge},B_{w(t,s)}^{\wedge})\), which is \(1\)-dimensional as explained in SS6.1. This argument also shows that \(f_{s,t}\) and \(f_{s,t}^{\prime}\) are nonzero; hence there exists \(a\in\Bbbk\) such that \(f_{s,t}^{\prime}=a\cdot f_{s,t}\). Using Lemma 6.2 we then obtain that
\[\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widehat{\epsilon}_{w(s,t)})= \mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widehat{\epsilon}_{w(t,s)}\circ f _{s,t}^{\prime})=a\cdot\mathsf{For}_{\mathsf{LM}}^{\mathsf{FM}}(\widehat{ \epsilon}_{w(t,s)}\circ f_{s,t})=a\cdot\mathsf{For}_{\mathsf{LM}}^{\mathsf{ FM}}(\widehat{\epsilon}_{w(s,t)}).\]
In view of Remark 6.3 this implies that \(a=1\), which finishes the proof.
## 7. Koszul duality
We continue with the setting of Section 6.
### Monoidal Koszul duality
The first formulation of our Koszul duality, which generalizes [1, Theorem 5.1], is the following.
**Theorem 7.1**.: _There exists an equivalence of monoidal categories_
\[\Phi:\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\xrightarrow{\sim}\mathscr{ T}_{\mathrm{BS}}(\mathfrak{h},W)\]
_which satisfies \(\Phi\circ(1)=\langle 1\rangle\circ\Phi\) and \(\Phi(B_{\underline{w}}^{\wedge})=\widetilde{\mathcal{T}}_{\underline{w}}\) for any \(\underline{w}\in\mathrm{Exp}(W)\)._
The monoidal functor \(\Phi\) is constructed in SS7.2-7.3. By definition it satisfies \(\Phi\circ(1)=\langle 1\rangle\circ\Phi\) and \(\Phi(B_{\underline{w}}^{\wedge})=\widetilde{\mathcal{T}}_{\underline{w}}\) for any \(\underline{w}\in\mathrm{Exp}(W)\); in particular, it induces a bijection between the sets of objects in \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\) and in \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\). To prove that this functor is an equivalence, it therefore suffices to prove that it is fully faithful, which will be done in SS7.5.
_Remark 7.2_.: In the case when \(W\) is finite, Theorem 7.1 can be deduced from the combination of Theorem 5.1 and Theorem 3.3 (applied to \(\mathfrak{h}^{*}\)). The main point of the proof given below is that it applies also to infinite Coxeter groups.
### Construction of the functor \(\Phi\)
In view of the definition of the category \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\), to define a monoidal functor from \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\) to \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) which satisfies \(\Phi\circ(1)=\langle 1\rangle\circ\Phi\) it suffices to define the image of each object \(B_{s}^{\wedge}\) (\(s\in S\)), of each generating morphism, and then to check that the images of these morphisms satisfy the defining relations of \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\) (in \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\)), see SS3.1. In this subsection we explain how to define the images of the generating objects and morphisms, and in SS7.3 we will show that these images satisfy the required relations.
First, our functor \(\Phi\) will send \(B_{s}^{\wedge}\) to \(\widetilde{\mathcal{T}}_{s}\), for any \(s\in S\). By monoidality, for any expression \(\underline{w}\), the image of \(B_{\underline{w}}^{\wedge}\) will then be \(\widetilde{\mathcal{T}}_{\underline{w}}\).
#### 7.2.1. Polynomials
As we noted in SS4.5, we have a graded algebra morphism \(\mu_{\widetilde{\mathcal{T}}_{1}}:R^{\vee}\to\mathbb{E}\mathrm{nd}_{\mathsf{FM}( \mathfrak{h},W)}(\widetilde{\mathcal{T}}_{1})\). Thus, for \(x\in R_{m}^{\wedge}\) we can set
\[\Phi\bigl(\text{the polynomial }x\bigr):=\mu_{\widetilde{\mathcal{T}}_{1}}(x):\widetilde{\mathcal{T}}_{1}\to\widetilde{\mathcal{T}}_{1}\langle m\rangle.\]
#### 7.2.2. Dot morphisms
Let \(s\in S\). Recall that we have considered a certain morphism \(\widehat{\epsilon}_{s}\) in (6.3). Consider also the morphism
\[\widehat{\eta}_{s}:\widetilde{\mathcal{T}}_{1}\langle-1\rangle\to\widetilde{ \mathcal{T}}_{s}\]
defined in [1, SS5.3.4]. We set
\[\Phi\bigl(\text{the lower dot for }s\bigr):=\widehat{\eta}_{s}\quad\text{and}\quad\Phi\bigl(\text{the upper dot for }s\bigr):=\widehat{\epsilon}_{s}.\]
#### 7.2.3. Trivalent vertices
Let again \(s\in S\). We note that the proof of [1, Lemma 4.2] is completely diagrammatic, hence that it also applies in our present context. It follows that there exists a unique morphism
\[\widehat{b}_{1}:\widetilde{\mathcal{T}}_{s}\to\widetilde{\mathcal{T}}_{s} \,\widehat{\star}\,\widetilde{\mathcal{T}}_{s}\langle-1\rangle,\quad\text{ resp.}\quad\widehat{b}_{2}:\widetilde{\mathcal{T}}_{s}\,\widehat{\star}\, \widetilde{\mathcal{T}}_{s}\to\widetilde{\mathcal{T}}_{s}\langle-1\rangle,\]
which satisfies
\[(\mathrm{id}_{\widetilde{\mathcal{T}}_{s}}\,\,\widehat{\star}\,\widehat{ \epsilon}_{s})\circ\widehat{b}_{1}=\mathrm{id}_{\widetilde{\mathcal{T}}_{s}},\quad\text{resp.}\quad\widehat{b}_{2}\circ(\mathrm{id}_{\widetilde{\mathcal{ T}}_{s}}\,\,\widehat{\star}\,\widehat{\eta}_{s})=\mathrm{id}_{\widetilde{ \mathcal{T}}_{s}}, \tag{7.1}\]
for any \(s\in S\). We set
\[\Phi\left(\text{the trivalent vertex }B_{s}^{\wedge}\to B_{(s,s)}^{\wedge}(-1)\right)=\widehat{b}_{1}\quad\text{and}\quad\Phi\left(\text{the trivalent vertex }B_{(s,s)}^{\wedge}\to B_{s}^{\wedge}(-1)\right)=\widehat{b}_{2}.\]
#### 7.2.4. \(2m_{st}\)-valent vertices
Let \(s,t\in S\) be distinct simple reflections generating a finite subgroup of \(W\). We set
\[\Phi\left(\text{the $2m_{st}$-valent vertex}\right)=f_{s,t},\]
where the right-hand side is as in Proposition 6.1.
### The functor \(\Phi\) is well defined
To complete the construction of \(\Phi\) we now need to check that the morphisms considered in SS7.2 satisfy the defining relations of the category \(\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\). Our proof of this fact follows the same strategy as in [1, SS4.3].
Let us fix an arbitrary relation to be checked. There exists a subset \(S^{\prime}\subset S\) generating a finite subgroup \(W^{\prime}\) of \(W\) such that this relation involves only words in \(S^{\prime}\). (The subset \(S^{\prime}\) has cardinality at most \(3\), but this will not be important for our purposes.) Replacing \((W,S)\) by \((W^{\prime},S^{\prime})\), we can therefore assume that \(W\) is finite. Under this assumption we can consider a functor \(\mathbb{V}\) as in Section 5. Since this functor is fully faithful, the relation under consideration holds in \(\mathscr{T}_{\mathrm{BS}}(\mathfrak{h},W)\) if and only if it holds after applying \(\mathbb{V}\). To check this we will first compute the images under \(\mathbb{V}\circ\Phi\) of our generating morphisms, see SS7.3.1-7.3.5. These computations will show that the isomorphisms \(\gamma_{\underline{w}}\) (see (5.14)) define an isomorphism of monoidal functors between \(\mathbb{V}\circ\Phi\) and the functor \(\mathcal{F}:\mathscr{D}_{\mathrm{BS}}(\mathfrak{h}^{*},W)\to\mathscr{A}_{ \mathrm{BS}}(\mathfrak{h}^{*},W)\) considered
in [A2, SS3.5]. The fact that the relation under consideration holds will then follow from the fact that \(\mathcal{F}\) is indeed well defined, see [A2, Lemma 3.14].
#### 7.3.1. Polynomials
If \(x\in R_{m}^{\wedge}\), then the morphism
\[\mathbb{V}\circ\Phi\left(\text{the polynomial }x\right):\mathbb{V}(\widetilde{\mathcal{T}}_{1})\rightarrow\mathbb{V}(\widetilde{\mathcal{T}}_{1}\langle m\rangle)\]
is given by
\[f\mapsto\mu_{\widetilde{\mathcal{T}}_{1}}(x)\circ f=f\circ\mu_{\widetilde{ \mathcal{F}}}(x),\]
i.e. by the action of \(x\) on the \(R^{\wedge}\)-module \(\mathbb{V}(\widetilde{\mathcal{T}}_{1})=R^{\wedge}\).
#### 7.3.2. The upper dot
By definition we have that
\[\mathbb{V}\circ\Phi\left(\text{the upper dot coloured }s\right)=\mathbb{V}(\widehat{\epsilon}_{s}):\mathbb{V}(\widetilde{\mathcal{T}}_{s})\rightarrow\mathbb{V}(\widetilde{\mathcal{T}}_{1}\langle 1\rangle).\]
Recall the identification \(\gamma_{s}:B_{s}^{\wedge}\xrightarrow{\sim}\mathbb{V}(\widetilde{\mathcal{T}} _{s})\) in (5.7), which satisfies \(\gamma_{s}(u_{s})=\zeta_{s}\). The morphism
\[\mathbb{V}(\widehat{\epsilon}_{s})\circ\gamma_{s}:B_{s}^{\wedge}\to \mathbb{V}(\widetilde{\mathcal{T}}_{1}\langle 1\rangle)=R^{\wedge}(1)\]
is a morphism of \(R^{\wedge}\)-modules, which sends the generator \(u_{s}\) to
\[\widehat{\epsilon}_{s}\langle-1\rangle\circ\zeta_{s}=\xi,\]
see (5.6), i.e. to \(\gamma_{\varnothing}(u_{\varnothing})\). It therefore coincides with the "multiplication" morphism
\[m_{s}:B_{s}^{\wedge}\to R^{\wedge}(1)\]
defined by \(a\otimes b\mapsto ab\), see [A1, SS3.3].
#### 7.3.3. The lower dot
We now analyze
\[\mathbb{V}\circ\Phi\left(\text{the lower dot coloured }s\right)=\mathbb{V}(\widehat{\eta}_{s}):\mathbb{V}(\widetilde{\mathcal{T}}_{1}\langle-1\rangle)\rightarrow\mathbb{V}(\widetilde{\mathcal{T}}_{s}).\]
By [AMRW1, Lemma 5.3.2 (1)] we have \(\widehat{\epsilon}_{s}\circ\widehat{\eta}_{s}=\mu_{\widetilde{\mathcal{T}}_{ 1}}(\alpha_{s}^{\vee})\), and hence
\[\mathbb{V}(\widehat{\epsilon}_{s})\circ\mathbb{V}(\widehat{\eta}_{s})=\alpha _{s}^{\vee}\cdot\mathrm{id}:\mathbb{V}(\widetilde{\mathcal{T}}_{1}) \rightarrow\mathbb{V}(\widetilde{\mathcal{T}}_{1}).\]
The morphism \(\beta_{s}^{\prime}:=\gamma_{s}^{-1}\circ\mathbb{V}(\widehat{\eta}_{s})\circ \gamma_{\varnothing}\) therefore satisfies \(m_{s}\circ\beta_{s}^{\prime}=\alpha_{s}^{\vee}\cdot\mathrm{id}_{R^{\wedge}}\). Now the \(\Bbbk\)-vector space \(\mathrm{Hom}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)}(R^{\wedge},B_{s} ^{\wedge}(1))\) is 1-dimensional, and generated by the morphism \(\beta_{s}\) such that \(\beta_{s}(1)=\delta_{s}\otimes 1-1\otimes s(\delta_{s})\) where \(\delta_{s}\in V\) is an element which satisfies \(\langle\alpha_{s},\delta_{s}\rangle=1\), cf. (3.3). This generator is the unique vector such that \(m_{s}\circ\beta_{s}=\alpha_{s}^{\vee}\cdot\mathrm{id}_{R^{\wedge}}\), so that \(\beta_{s}=\beta_{s}^{\prime}\).
#### 7.3.4. Trivalent vertices
Let \(s\in S\). The spaces \(\mathrm{Hom}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)}(B_{s}^{\wedge},B _{(s,s)}^{\wedge}(-1))\) and \(\mathrm{Hom}_{\mathscr{A}_{\mathrm{BS}}(\mathfrak{h}^{*},W)}(B_{(s,s)}^{\wedge },B_{s}^{\wedge}(-1))\) are 1-dimensional by [A1, Corollary 4.7] and an easy computation in the Hecke algebra. As explained e.g. in [A2, SS3.5], we can take as generators of these spaces the morphisms
\[t_{1}:f\otimes g\mapsto f\otimes 1\otimes g\quad\text{and}\quad t_{2}:f \otimes g\otimes h\mapsto f\partial_{s}(g)\otimes h,\]
where we use the identification
\[B_{(s,s)}^{\wedge}=B_{s}^{\wedge}\otimes_{R^{\wedge}}B_{s}^{\wedge}=R^{\wedge}\otimes_{(R^{\wedge})^{s}}R^{\wedge}\otimes_{(R^{\wedge})^{s}}R^{\wedge}\langle-2\rangle,\]
and \(\partial_{s}\) is the Demazure operator associated with \(s\), see [EW2, SS3.3].
We have
\[(\operatorname{id}_{B_{s}^{\wedge}}\otimes m_{s}(-1))\circ t_{1}=\operatorname{id} _{B_{s}^{\wedge}}\quad\text{and}\quad t_{2}(1)\circ(\operatorname{id}_{B_{s}^{ \wedge}}\otimes\beta_{s})=\operatorname{id}_{B_{s}^{\wedge}}.\]
On the other hand, by definition (see (7.1)), we have
\[(\operatorname{id}_{\mathbb{V}(\widetilde{\mathcal{T}}_{s})}\otimes\mathbb{V} (\widehat{\epsilon}_{s})(-1))\circ\mathbb{V}(\widehat{b}_{1})=\operatorname{ id}_{\mathbb{V}(\widetilde{\mathcal{T}}_{s})}\quad\text{and}\quad\mathbb{V}( \widehat{b}_{2})(1)\circ(\operatorname{id}_{\mathbb{V}(\widetilde{\mathcal{T}}_ {s})}\otimes\mathbb{V}(\widehat{\eta}_{s}))=\operatorname{id}_{\mathbb{V}( \widetilde{\mathcal{T}}_{s})}.\]
Using the descriptions in SSSS7.3.2-7.3.3, we deduce that
\[\mathbb{V}\circ\Phi\left(\text{the trivalent vertex }B_{s}^{\wedge}\to B_{(s,s)}^{\wedge}(-1)\right)=\gamma_{(s,s)}\circ t_{1}\circ\gamma_{s}^{-1}\]

and

\[\mathbb{V}\circ\Phi\left(\text{the trivalent vertex }B_{(s,s)}^{\wedge}\to B_{s}^{\wedge}(-1)\right)=\gamma_{s}\circ t_{2}\circ(\gamma_{(s,s)})^{-1}.\]
#### 7.3.5. \(2m_{st}\)-valent vertices
Let \(s,t\in S\) be distinct simple reflections generating a finite subgroup of \(W\). By the very definition of \(f_{s,t}\) (see Lemma 6.1) we have
\[\mathbb{V}\circ\Phi\left(\text{the $2m_{st}$-valent vertex}\right)=\mathbb{V}(f_{s,t})=\gamma_{w(t,s)}\circ\varphi_{s,t}\circ(\gamma_{w(s,t)})^{-1}.\]
### Triangulated Koszul duality
We have now constructed a monoidal functor \(\Phi\) as in Theorem 7.1. We will still denote by \(\Phi\) the induced functor from \(\mathscr{D}^{\oplus}_{\operatorname{BS}}(\mathfrak{h}^{*},W)\) to \(\mathscr{T}^{\oplus}_{\operatorname{BS}}(\mathfrak{h},W)\). In order to prove that this functor is an equivalence, we will first study a variant of this functor obtained by "killing the right action of \(R^{\wedge}\)." Namely, recall the constructions of SS4.4. By definition of \(\underline{\mathscr{D}}^{\oplus}_{\operatorname{BS}}(\mathfrak{h}^{*},W)\), there exists a unique additive functor
\[\underline{\Phi}:\underline{\mathscr{D}}^{\oplus}_{\operatorname{BS}}( \mathfrak{h}^{*},W)\to\operatorname{Tilt}^{\oplus}_{\operatorname{LM}}( \mathfrak{h},W)\]
such that
\[\mathsf{For}^{\mathsf{FM}}_{\operatorname{LM}}\circ\Phi=\underline{\Phi}\circ \mathsf{For}^{\mathsf{BE}}_{\operatorname{LE}}.\]
Recall also the equivalence (4.10), which we will here denote by \(\imath\). Finally, recall that we have an action of the monoidal category \((\mathscr{T}_{\operatorname{BS}}(\mathfrak{h},W),\,\widehat{\star}\,)\) on the category \(\operatorname{Tilt}_{\operatorname{LM}}(\mathfrak{h},W)\), see Remark 4.14(4).
The following statement is an analogue of [1, Theorem 5.2 & Proposition 5.5].
**Theorem 7.3**.: _The functor_
\[\varkappa:=\imath\circ K^{\operatorname{b}}(\underline{\Phi}):\operatorname{ LE}(\mathfrak{h}^{*},W)\to\operatorname{LM}(\mathfrak{h},W)\]
_is an equivalence of triangulated categories. It satisfies \(\varkappa\circ(1)=\langle 1\rangle\circ\varkappa\), and for any \(\underline{v}\in\operatorname{Exp}(W)\) and any \(w\in W\) we have_
\[\varkappa(\underline{B}^{\wedge}_{\underline{v}})\cong\mathcal{T}_{\underline {v}},\quad\varkappa(\underline{\Delta}^{\wedge}_{w})\simeq\Delta_{w},\quad \varkappa(\underline{\nabla}^{\wedge}_{w})\simeq\nabla_{w}.\]
_Finally, for any \(\mathcal{F}\in\mathscr{D}^{\oplus}_{\operatorname{BS}}(\mathfrak{h}^{*},W)\) and \(\mathcal{G}\in\operatorname{LE}(\mathfrak{h}^{*},W)\) we have a canonical isomorphism_
\[\varkappa(\mathcal{F}\mathop{\star}\mathcal{G})\cong\imath\left(\Phi( \mathcal{F})\,\widehat{\star}\,\imath^{-1}(\mathcal{G})\right). \tag{7.2}\]
Proof.: The facts that \(\varkappa\circ(1)=\langle 1\rangle\circ\varkappa\) and that \(\varkappa(\underline{B}_{\underline{v}}^{\wedge})\cong\mathcal{T}_{\underline{v}}\) for any expression \(\underline{v}\) are true by construction. The isomorphism (7.2) is also clear by construction and monoidality of \(\Phi\). Next we will prove that \(\varkappa(\underline{\Delta}_{w}^{\wedge})\simeq\Delta_{w}\) for any \(w\in W\), following the strategy of [1, SS5.2]. The proof that \(\varkappa(\underline{\nabla}_{w}^{\wedge})\simeq\nabla_{w}\) is similar, and left to the reader.
First, for \(s\in S\) we consider the functor
\[C_{s}^{\prime}:=\imath\circ K^{\mathrm{b}}(\widetilde{\mathcal{T}}_{s}\widehat {\star}(-))\circ\imath^{-1}:\mathsf{LM}(\mathfrak{h},W)\to\mathsf{LM}( \mathfrak{h},W).\]
The same construction can be carried out with \(\widetilde{\mathcal{T}}_{1}\) instead of \(\widetilde{\mathcal{T}}_{s}\), providing a functor isomorphic to the identity. These constructions are functorial and we have a morphism of functors \(\widetilde{\epsilon}_{s}:C_{s}^{\prime}\to\mathrm{id}\langle 1\rangle\) induced by the morphism \(\widehat{\epsilon}_{s}\) of (6.3). As in [1, Lemma 5.3] one sees that the morphism \(\widetilde{\epsilon}_{s}(\underline{\Delta}_{w}):C_{s}^{\prime}(\underline{\Delta}_{w})\to\underline{\Delta}_{w}\langle 1\rangle\) is nonzero for all \(w\in W\). On the other hand, recall the functor considered in Lemma 4.7. As in [1], there exists an isomorphism of functors
\[C_{s}^{\prime}\cong\widetilde{\mathcal{T}}_{s}\widehat{\star}\left(-\right). \tag{7.3}\]
Now we prove the desired claim by induction on \(\ell(w)\). For \(w=1\), this claim is true because \(\underline{\Delta}_{1}^{\wedge}=\underline{B}_{1}^{\wedge}\) and \(\underline{\Delta}_{1}=\mathcal{T}_{1}\) by definition. Let \(w\in W\) and \(s\in S\) such that \(sw<w\), and assume the claim is known for \(sw\). By the explicit description of \(\Delta_{s}^{\wedge}\) in (4.1), we have a distinguished triangle
\[\Delta_{s}^{\wedge}\to B_{s}^{\wedge}\to\Delta_{1}^{\wedge}(1)\xrightarrow{[1]}\]
in \(\mathsf{BE}(\mathfrak{h},W)\) where the second morphism is given by the upper dot. Tensoring with \(\Delta_{sw}^{\wedge}\) on the right and using [1, Proposition 6.11], we obtain a distinguished triangle
\[\Delta_{w}^{\wedge}\to B_{s}^{\wedge}\star\Delta_{sw}^{\wedge}\to\Delta_{sw}^ {\wedge}(1)\xrightarrow{[1]},\]
where the second morphism is the upper dot tensored by \(\mathrm{id}_{\Delta_{sw}^{\wedge}}\). Applying \(\varkappa\circ\mathsf{For}_{\mathsf{LE}}^{\mathsf{BE}}\) and using (7.2) and (7.3), we deduce a distinguished triangle
\[\varkappa(\underline{\Delta}_{w}^{\wedge})\to\widetilde{\mathcal{T}}_{s} \widehat{\star}\varkappa(\underline{\Delta}_{sw}^{\wedge})\to\varkappa( \underline{\Delta}_{sw}^{\wedge})\langle 1\rangle\xrightarrow{[1]},\]
where the second morphism is induced by \(\widetilde{\epsilon}_{s}\) via the identification (7.3). Using the inductive hypothesis, we can rewrite this triangle as
\[\varkappa(\underline{\Delta}_{w}^{\wedge})\to\widetilde{\mathcal{T}}_{s} \widehat{\star}\Delta_{sw}\xrightarrow{f}\Delta_{sw}\langle 1\rangle \xrightarrow{[1]}\]
with \(f\neq 0\). We compare this triangle with the triangle
\[\Delta_{w}\to\widetilde{\mathcal{T}}_{s}\widehat{\star}\Delta_{sw}\xrightarrow {g}\Delta_{sw}\langle 1\rangle\xrightarrow{[1]} \tag{7.4}\]
provided by Lemma 4.6(1). Here we also have \(g\neq 0\). We will prove below that
\[\dim_{\Bbbk}\operatorname{Hom}_{\mathsf{LM}(\mathfrak{h},W)}\left(\widetilde{\mathcal{T}}_{s}\widehat{\star}\mathbf{\Delta}_{sw},\mathbf{\Delta}_{sw}\langle 1\rangle\right)=1; \tag{7.5}\]
this will imply that \(f\) and \(g\) are multiples of each other, hence that \(\varkappa(\underline{\Delta}_{w}^{\wedge})\cong\mathbf{\Delta}_{w}\), which will finish the proof.
In order to prove (7.5), we observe that
\[\dim_{\Bbbk}\operatorname{Hom}_{\mathsf{LM}(\mathfrak{h},W)}\left( \underline{\Delta}_{sw}\langle 1\rangle,\underline{\Delta}_{sw}\langle 1 \rangle\right)=1\quad\text{and}\\ \operatorname{Hom}_{\mathsf{LM}(\mathfrak{h},W)}\left(\underline{ \Delta}_{w},\underline{\Delta}_{sw}\langle 1\rangle\right)=0=\operatorname{Hom}_{\mathsf{LM}( \mathfrak{h},W)}\left(\underline{\Delta}_{w}[1],\underline{\Delta}_{sw} \langle 1\rangle\right),\]
where the first two equalities follow from the properties of the standard objects in a highest weight category, and the third one is a consequence of the axioms of a
t-structure as the standard objects belong to the heart. Therefore, we obtain (7.5) by applying \(\operatorname{Hom}_{\operatorname{\mathsf{LM}}(\mathfrak{h},W)}\left(-,\mathbf{\Delta}_{sw}\langle 1\rangle\right)\) to the triangle (7.4).
Finally, we prove that \(\varkappa\) is an equivalence of categories. For that, we first notice that, for any \(v,w\in W\) and \(n,m\in W\), \(\varkappa\) induces an isomorphism
\[\operatorname{Hom}_{\mathsf{LE}(\mathfrak{h}^{*},W)}(\underline{\Delta}_{w}^{\wedge},\underline{\nabla}_{v}^{\wedge}(n)[m])\xrightarrow{\sim}\operatorname{Hom}_{\operatorname{\mathsf{LM}}(\mathfrak{h},W)}(\mathbf{\Delta}_{w},\mathbf{\nabla}_{v}(n)[m]).\]
In fact these spaces are zero except when \(v=w\) and \(n=m=0\), in which case they are 1-dimensional, cf. [1]. To prove the claim it therefore suffices to prove that if \(g:\underline{\Delta}_{w}^{\wedge}\to\underline{\nabla}_{w}^{\wedge}\) is a nonzero morphism then \(\varkappa(g)\neq 0\). However, it follows from Lemma 4.1 that the cone of \(g\) belongs to the triangulated subcategory of \(\mathsf{LE}(\mathfrak{h}^{*},W)\) generated by the objects \(\underline{\Delta}_{v}^{\wedge}\langle n\rangle\) with \(v\in W\) satisfying \(v<w\) and \(n\in\mathbb{Z}\), hence the cone of \(\varkappa(g)\) belongs to the triangulated subcategory of \(\operatorname{\mathsf{LM}}(\mathfrak{h},W)\) generated by the objects \(\mathbf{\Delta}_{v}\langle n\rangle\) with \(v\in W\) satisfying \(v<w\) and \(n\in\mathbb{Z}\). Since \(\mathbf{\nabla}_{w}\oplus\mathbf{\Delta}_{w}[1]\) does not belong to this subcategory (again, by the recollement formalism), it follows that \(\varkappa(g)\neq 0\).
Now, recall that the objects \((\underline{\Delta}_{w}^{\wedge}\langle n\rangle:w\in W,\,n\in\mathbb{Z})\) generate the triangulated category \(\mathsf{LE}(\mathfrak{h}^{*},W)\), and similarly for the objects \((\underline{\nabla}_{w}^{\wedge}\langle n\rangle:w\in W,\,n\in\mathbb{Z})\), see e.g. [1]. In view of [1, Lemma 5.6] (a version of Beilinson's lemma), the property checked above therefore implies that \(\varkappa\) is fully faithful. Since its essential image contains the objects \((\underline{\Delta}_{w}\langle n\rangle:w\in W,\,n\in\mathbb{Z})\), which generate \(\operatorname{\mathsf{LM}}(\mathfrak{h},W)\) as a triangulated category, this functor is also essentially surjective, which finishes the proof.
### Full faithfulness of \(\Phi\)
As explained in SS7.1, to complete the proof of Theorem 7.1 we only have to prove that \(\Phi\) is fully faithful, i.e. that for any expressions \(\underline{v},\underline{w}\) this functor induces an isomorphism of graded \(R^{\wedge}\)-modules
\[\bigoplus_{n\in\mathbb{Z}}\operatorname{Hom}_{\mathscr{D}_{\operatorname{BS} }(\mathfrak{h}^{*},W)}(B_{\underline{v}}^{\wedge},B_{\underline{w}}^{\wedge}(n ))\xrightarrow{\sim}\bigoplus_{n\in\mathbb{Z}}\operatorname{Hom}_{\mathscr{T }_{\operatorname{BS}}(\mathfrak{h},W)}(\widetilde{\mathcal{T}}_{\underline{v} },\widetilde{\mathcal{T}}_{\underline{w}}\langle n\rangle).\]
Now, both sides are graded free of finite rank over \(R^{\wedge}\) by [1, Corollary 6.14] and Proposition 4.11. To prove the claim, using Lemma 2.2 it therefore suffices to prove that the morphism obtained after applying \((-)\otimes_{R^{\wedge}}\Bbbk\) is an isomorphism. However, by Proposition 4.11 the latter morphism identifies with the morphism
\[\bigoplus_{n\in\mathbb{Z}}\operatorname{Hom}_{\underline{\mathscr{D}}_{ \operatorname{BS}}(\mathfrak{h}^{*},W)}(\underline{B}_{\underline{v}}^{ \wedge},\underline{B}_{\underline{w}}^{\wedge}(n))\to\bigoplus_{n\in\mathbb{Z }}\operatorname{Hom}_{\operatorname{Tilt}_{\operatorname{\mathsf{LM}}}( \mathfrak{h},W)}(\mathcal{T}_{\underline{v}},\mathcal{T}_{\underline{w}} \langle n\rangle)\]
induced by \(\underline{\Phi}\); hence it is an isomorphism by Theorem 7.3.
### Self-duality
We now prove an analogue of [1, Theorem 5.7]. (This version is called "self duality" because the right equivariant and left equivariant categories are canonically equivalent, as explained in SS4.4.) Recall the indecomposable objects \(\overline{B}_{w}\in\overline{\mathscr{D}}_{\operatorname{BS}}(\mathfrak{h},W)\) and \(\underline{B}_{w}^{\wedge}\in\mathscr{D}_{\operatorname{BS}}(\mathfrak{h}^{*},W)\) defined in SS3.1 (for \(w\in W\)), and the indecomposable objects \(\overline{\mathcal{T}}_{w}\in\operatorname{Tilt}_{\operatorname{\mathsf{RE}}}(\mathfrak{h},W)\) and \(\underline{\mathcal{T}}_{w}^{\wedge}\in\operatorname{Tilt}_{\operatorname{\mathsf{LE}}}(\mathfrak{h}^{*},W)\) defined in SSSS4.3-4.4 (for \(w\in W\)).
**Theorem 7.4**.: _There exists an equivalence of triangulated categories_
\[\kappa:\operatorname{\mathsf{RE}}(\mathfrak{h},W)\xrightarrow{\sim} \mathsf{LE}(\mathfrak{h}^{*},W)\]
_which satisfies \(\kappa\circ\langle 1\rangle=(1)\circ\kappa\), and such that_
\[\kappa(\overline{\Delta}_{w})\simeq\underline{\Delta}_{w}^{\wedge},\quad \kappa(\overline{\nabla}_{w})\simeq\underline{\nabla}_{w}^{\wedge},\quad\kappa( \overline{B}_{w})\simeq\underline{\mathcal{T}}_{w}^{\wedge},\quad\kappa( \overline{\mathcal{T}}_{w})\simeq\underline{B}_{w}^{\wedge}\]
_for any \(w\in W\)._
Proof.: We define \(\kappa\) as the inverse of the composition of equivalences \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{LM}}\circ\varkappa\), see (4.7) and Theorem 7.3. By Theorem 7.3 this functor satisfies \(\kappa(\overline{\Delta}_{w})\simeq\underline{\Delta}_{w}^{\wedge}\) and \(\kappa(\overline{\nabla}_{w})\simeq\underline{\nabla}_{w}^{\wedge}\) for any \(w\in W\). The fact that \(\kappa(\overline{\mathcal{T}}_{w})\simeq\underline{B}_{w}^{\wedge}\) for \(w\in W\) is also an immediate consequence of the properties of \(\varkappa\). Finally, the fact that \(\kappa(\overline{B}_{w})\simeq\underline{\mathcal{T}}_{w}^{\wedge}\) for any \(w\in W\) can be deduced exactly as in the proof of [1, Theorem 5.7].
### Application to the combinatorics of indecomposable tilting objects
There are two families of Laurent polynomials parametrized by pairs of elements of \(W\) that one can attach to the realization \(\mathfrak{h}\). First, consider the split Grothendieck ring \([\mathscr{D}(\mathfrak{h},W)]_{\oplus}\) of the monoidal category \(\mathscr{D}(\mathfrak{h},W)\), which we consider as a \(\mathbb{Z}[v,v^{-1}]\)-algebra with \(v\) acting via the morphism induced by (1). Recall that, using the notation from the proof of Lemma 5.8 (but allowing now \(W\) not to be finite), there exists a unique \(\mathbb{Z}[v,v^{-1}]\)-algebra isomorphism
\[\eta:\mathcal{H}_{(W,S)}\xrightarrow{\sim}[\mathscr{D}(\mathfrak{h},W)]_{\oplus}\]
which sends \(H_{s}+v\) to \([B_{s}]\) for any \(s\in S\), see [1, Corollary 6.27]. For \(w\in W\) we set
\[\underline{H}_{w}^{\mathfrak{h}}=\eta^{-1}([B_{w}]).\]
Then \((\underline{H}_{w}^{\mathfrak{h}}:w\in W)\) is a basis of \(\mathcal{H}_{(W,S)}\), which we call the "canonical basis" attached to \(\mathfrak{h}\). We can obtain a first family of Laurent polynomials \((h_{y,w}^{\mathfrak{h}}:y,w\in W)\) as the coefficients of the expansion of the elements of this basis in the standard basis \((H_{y}:y\in W)\); namely for \(w\in W\) we have
\[\underline{H}_{w}^{\mathfrak{h}}=\sum_{y\in W}h_{y,w}^{\mathfrak{h}}\cdot H_{ y}.\]
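For instance, since \(\eta(H_{s}+v)=[B_{s}]\), for any \(s\in S\) we have \(\underline{H}_{s}^{\mathfrak{h}}=H_{s}+vH_{1}\), so that \(h^{\mathfrak{h}}_{s,s}=1\), \(h^{\mathfrak{h}}_{1,s}=v\) and \(h^{\mathfrak{h}}_{y,s}=0\) for \(y\notin\{1,s\}\), independently of the realization \(\mathfrak{h}\).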
(The "\(p\)-canonical" basis studied in [1] is an example of this construction.)
On the other hand, consider the objects \((\overline{\mathcal{T}}_{w}:w\in W)\) in \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\). Given \(y\in W\) and \(n\in\mathbb{Z}\), we will denote by
\[(\overline{\mathcal{T}}_{w}:\overline{\Delta}_{y}\langle n\rangle)\]
the number of times \(\overline{\Delta}_{y}\langle n\rangle\) appears in a standard filtration of \(\overline{\mathcal{T}}_{w}\). (It is a standard fact that this quantity does not depend on the choice of filtration.) We can then consider, for \(y,w\in W\), the Laurent polynomial
\[t_{y,w}^{\mathfrak{h}}:=\sum_{n\in\mathbb{Z}}(\overline{\mathcal{T}}_{w}: \overline{\Delta}_{y}\langle n\rangle)\cdot v^{n}.\]
The following statement shows that these families are exchanged by passing from \(\mathfrak{h}\) to \(\mathfrak{h}^{*}\).
**Corollary 7.5**.: _For any \(y,w\in W\) we have \(h_{y,w}^{\mathfrak{h}}=t_{y,w}^{\mathfrak{h}^{*}}\)._
Proof.: As explained in [1, SS6.6], the natural functor \(\mathscr{D}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)\to\mathsf{BE}(\mathfrak{h},W)\) induces an isomorphism
\[[\mathscr{D}_{\mathrm{BS}}^{\oplus}(\mathfrak{h},W)]_{\oplus}\xrightarrow{ \sim}[\mathsf{BE}(\mathfrak{h},W)]_{\Delta}\]
where the right-hand side is the Grothendieck group of the triangulated category \(\mathsf{BE}(\mathfrak{h},W)\), and the composition of \(\eta\) with this isomorphism sends \(H_{w}\) to \([\Delta_{w}]\) for any \(w\in W\). The functor \(\mathsf{For}_{\mathsf{RE}}^{\mathsf{BE}}\) also induces an isomorphism
\[[\mathsf{BE}(\mathfrak{h},W)]_{\Delta}\xrightarrow{\sim}[\mathsf{RE}( \mathfrak{h},W)]_{\Delta},\]
e.g. because it is t-exact and induces an isomorphism between the sets of isomorphism classes of simple objects in the heart of the perverse t-structure on both sides. We deduce an isomorphism
\[\mathcal{H}_{(W,S)}\xrightarrow{\sim}[\mathsf{RE}(\mathfrak{h},W)]_{\Delta}\]
sending \(H_{w}\) to \([\overline{\Delta}_{w}]\) for any \(w\in W\). In view of the description of morphism spaces between standard and costandard objects (see [ARV, (9.2)]), the inverse isomorphism sends the class of an object \(M\) to
\[\sum_{w\in W}\sum_{n,m\in\mathbb{Z}}(-1)^{n}\dim_{\Bbbk}\operatorname{Hom}_{ \mathsf{RE}(\mathfrak{h},W)}(M,\overline{\nabla}_{w}(m)[n])\cdot v^{m}H_{w}.\]
In particular, for \(y,w\in W\) we have
\[h^{\mathfrak{h}}_{y,w}(v)=\sum_{n,m\in\mathbb{Z}}(-1)^{n}\dim_{\Bbbk} \operatorname{Hom}_{\mathsf{RE}(\mathfrak{h},W)}(\overline{B}_{w},\overline{ \nabla}_{y}(m)[n])\cdot v^{m}.\]
On the other hand, by Theorem 7.4, for any \(y,w\in W\) and \(n,m\in\mathbb{Z}\) we have
\[\operatorname{Hom}_{\mathsf{RE}(\mathfrak{h},W)}(\overline{B}_{w},\overline{ \nabla}_{y}(m)[n])\cong\operatorname{Hom}_{\mathsf{LE}(\mathfrak{h}^{*},W)}( \underline{\mathcal{T}}^{\wedge}_{w},\underline{\nabla}^{\wedge}_{y}\langle m \rangle[n]).\]
Hence this \(\Bbbk\)-vector space vanishes unless \(n=0\), and in this case its dimension is \((\underline{\mathcal{T}}^{\wedge}_{w}:\underline{\Delta}^{\wedge}_{y}\langle m\rangle)\). We deduce that

\[h^{\mathfrak{h}}_{y,w}(v)=\sum_{m\in\mathbb{Z}}(\underline{\mathcal{T}}^{\wedge}_{w}:\underline{\Delta}^{\wedge}_{y}\langle m\rangle)\cdot v^{m},\]
which finishes the proof.
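For instance, combining Corollary 7.5 with the computation \(\underline{H}_{s}^{\mathfrak{h}}=H_{s}+vH_{1}\) recorded after the expansion formula above gives \(t^{\mathfrak{h}^{*}}_{s,s}=1\) and \(t^{\mathfrak{h}^{*}}_{1,s}=v\); in other words, in the categories attached to \(\mathfrak{h}^{*}\) the tilting object \(\overline{\mathcal{T}}_{s}\) admits a standard filtration with subquotients \(\overline{\Delta}_{s}\) and \(\overline{\Delta}_{1}\langle 1\rangle\), each appearing once.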
_Example 7.6_.: Consider the realization of Example 2.1(4). In this case, the main result of [EW1] says that for any \(y,w\in W\) the polynomial \(h^{\mathfrak{h}}_{y,w}\) is up to some factor the Kazhdan-Lusztig polynomial [KL] attached to \((y,w)\); in the notation of [S2] we have \(h^{\mathfrak{h}}_{y,w}=h_{y,w}\). Since the dual of a Soergel realization is again a Soergel realization (see Example 2.3(2)), we deduce that for any Soergel realization we have \(t^{\mathfrak{h}}_{y,w}=h_{y,w}\).
_Remark 7.7_.: Continue with the setting of Example 7.6. By [ARV, Proposition 8.13], for any \(w\in W\) we have \(\overline{B}_{w}=\overline{\mathscr{L}}_{w}\). As a consequence of Theorem 7.4 we therefore have
\[\operatorname{Hom}_{\mathsf{RE}(\mathfrak{h},W)}(\overline{\mathscr{L}}_{w}, \overline{\mathscr{L}}_{w}\langle n\rangle[m])=0 \tag{7.6}\]
unless \(n=-m\). Assume now that \(W\) is finite. Then the category \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) has enough projective objects; if for \(w\in W\) we denote by \(\overline{\mathscr{P}}_{w}\) the projective cover of \(\overline{\mathscr{L}}_{w}\), then setting
\[\overline{\mathscr{P}}:=\bigoplus_{w\in W}\overline{\mathscr{P}}_{w},\quad A:= \bigoplus_{n\in\mathbb{Z}}\operatorname{Hom}(\overline{\mathscr{P}},\overline {\mathscr{P}}\langle n\rangle)\]
we obtain a graded \(\Bbbk\)-algebra \(A\) and an equivalence of categories between \(\mathsf{P}_{\mathsf{RE}}(\mathfrak{h},W)\) and the category of finite-dimensional graded \(A\)-modules. Using (7.6) and the techniques of [R1, SS9.2] one can prove that \(A\) is a Koszul ring in the sense of [BGS].
In case \(W\) is a finite dihedral group and \(\mathfrak{h}\) is the geometric realization, one sees using Theorem 7.4 and Ringel duality (see [ARV, Proposition 10.2]) that \(A\) is isomorphic to the graded ring studied in [Sa].
|
2301.06433 | Wobble control of a pendulum actuated spherical robot | Spherical robots can conduct surveillance in hostile, cluttered environments
without being damaged, as their protective shell can safely house sensors such
as cameras. However, lateral oscillations, also known as wobble, occur when
these sphere-shaped robots operate at low speeds, leading to shaky camera
feedback. These oscillations in a pendulum-actuated spherical robot are caused
by the coupling between the forward and steering motions due to nonholonomic
constraints. Designing a controller to limit wobbling in these robots is
challenging due to their underactuated nature. We propose a model-based
controller to navigate a pendulum-actuated spherical robot using wobble-free
turning maneuvers consisting of circular arcs and straight lines. The model is
developed using Lagrange-D'Alembert equations and accounts for the coupled
forward and steering motions. The model is further analyzed to derive
expressions for radius of curvature, precession rate, wobble amplitude, and
wobble frequency during circular motions. Finally, we design an input-output
feedback linearization-based controller to control the robot's heading
direction and wobble. Overall, the proposed controller enables a teleoperator
to command a specific forward velocity and pendulum angle as per the desired
turning radius while limiting the robot's lateral oscillations to enhance the
quality of camera feedback. | Animesh Singhal, Sahil Modi, Abhishek Gupta, Leena Vachhani | 2023-01-16T13:48:49Z | http://arxiv.org/abs/2301.06433v1 | # Wobble control of a pendulum actuated spherical robot
###### Abstract
Spherical robots can conduct surveillance in hostile, cluttered environments without being damaged, as their protective shell can safely house sensors such as cameras. However, lateral oscillations, also known as wobble, occur when these sphere-shaped robots operate at low speeds, leading to shaky camera feedback. These oscillations in a pendulum-actuated spherical robot are caused by the coupling between the forward and steering motions due to nonholonomic constraints. Designing a controller to limit wobbling in these robots is challenging due to their underactuated nature. We propose a model-based controller to navigate a pendulum-actuated spherical robot using wobble-free turning maneuvers consisting of circular arcs and straight lines. The model is developed using Lagrange-D'Alembert equations and accounts for the coupled forward and steering motions. The model is further analyzed to derive expressions for radius of curvature, precession rate, wobble amplitude, and wobble frequency during circular motions. Finally, we design an input-output feedback linearization-based controller to control the robot's heading direction and wobble. Overall, the proposed controller enables a teleoperator to command a specific forward velocity and pendulum angle as per the desired turning radius while limiting the robot's lateral oscillations to enhance the quality of camera feedback.
## Nomenclature
\(T_{s}\): Rolling torque that rotates the hull with respect to the yoke resulting in a forward motion of the robot
\(T_{p}\): Pendulum torque that rotates the pendulum with respect to the yoke resulting in sideways motion of the robot
\(r_{p}\): Distance between the pendulum's centre of mass and hull's geometric centre
\(r_{h}\): Radius of the sphere
\(r_{y}\): Distance between the yoke's centre of mass and hull's geometric centre
\(\mathbf{G}\): Global inertial frame fixed to the ground
\(\mathbf{Y}\): Frame attached to the yoke
\(\mathbf{P}\): Frame attached to the pendulum
\(\mathbf{H}\): Frame attached to the hull
\(\phi\): Heading angle of the robot measured with respect to the \(X\)-axis of global frame \(\mathbf{G}\). This angle characterises the precession of the robot.
\(\theta\): Lean angle of the robot perpendicular to the heading direction. This angle characterises the lateral oscillations or the wobbling of the robot

\(\psi\): Forward spin angle of the robot responsible for moving it forward or backwards. This angle characterises the forward rolling of the robot.

\(\beta\): Pendulum angle relative to the yoke

\({}^{\mathbf{A}}\mathbf{R_{B}}\): Rotation matrix that maps vectors from frame **B** to frame **A**

\({}^{\mathbf{A}}\vec{\omega}_{\mathbf{A}}\): The angular velocity vector of frame **A** expressed in frame **A**

\({}^{\mathbf{A}}\vec{r}_{\mathbf{BP}}\): Position vector of a point **P** starting from the origin of coordinate frame **B**, expressed in frame **A**

\(\mathbf{Y}_{c}\): Position vector corresponding to the center of mass of yoke

\(\mathbf{H}_{c}\): Position vector corresponding to the center of mass of hull

\(\mathbf{P}_{c}\): Position vector corresponding to the pendulum's centre of mass

\(\mathbf{P}_{o}\): Position vector corresponding to the origin of pendulum frame

\(X,Z\): X and Z co-ordinates of the hull centre along the global frame \(\mathbf{G}(OXYZ)\)

\({}^{\mathbf{A}}\vec{v}_{\mathbf{P}}\): Linear velocity vector of point **P** expressed in frame **A**

\(K\): Kinetic energy of the system

\(V\): Potential energy of the system

\(m_{B}\): Mass of a body B

\({}^{\mathbf{B}}I_{\mathbf{B}}\): The mass moment of inertia matrix of a body B calculated in its own frame of reference

\(\lambda_{i}\): Lagrange multipliers

\(L\): Lagrangian

\(\mathbf{x}\): State vector representing robot's motion

\(u\): Control input

\(q\): Generalized coordinates

\(Q\): Generalized forces

\(A\): Amplitude of lateral oscillations

\(\omega\): Frequency of lateral oscillations

\(\rho\): Radius of curvature for a spherical robot moving in a circular trajectory with pure rolling

\(y\): Control output

\(K_{p,\psi}\): Speed control gain

\(K_{p,\beta}\): Proportional component of pendulum control gain

\(K_{d,\beta}\): Derivative component of pendulum control gain

\(K_{p,\theta}\): Wobble control gain

\(T_{p,\theta}\): Wobble control component of pendulum torque

\(T_{p,\beta}\): Pendulum control component of pendulum torque

\(\dot{\theta}_{des}\): Desired value of the rate of change of lean angle

\(\dot{\psi}_{des}\): Desired value of forward speed

\(\beta_{des}\): Desired value of the pendulum's angle

\(\gamma,\delta\): Coefficients of a linear combination

\(\dot{\theta}_{des}\): The value of the angle …
## 1 Introduction
Spherical robots can shield all electro-mechanical components within their spherical shell from harsh environments and external impacts. Due to their ball shape, spherical robots have the remarkable ability to rebound from collisions with obstacles and avoid becoming wedged in corners. These robots cannot topple over as they can regain their equilibrium after a disturbance due to their spherical shape and heavy pendulum. This ability is beneficial in situations such as falling from low heights or being struck during an operation. Hence, spherical robots are suitable for a wide range of applications, including surveillance, reconnaissance, hazardous environment assessment, search and rescue, and planetary exploration [1, 2, 3, 4]. The driving mechanisms determine the dexterity of a spherical robot based on its capability to roll in multiple directions. From the wheel drive [5] to the pendulum actuated drives [6, 7, 8, 9], a variety of such mechanisms have been developed [10, 11, 12, 13, 14]. The majority of active drive designs are based on three fundamental physics principles [15] for propelling a spherical robot: barycenter offset, outer-shell deformation, and conservation of angular momentum.
Several modeling approaches for representing the dynamics of spherical robots have been proposed in the literature. First-order mathematical models of spherical robots are based on the rolling constraint principle and conservation of angular momentum [16, 17]. The dynamics of a sphere rolling on a smooth surface have been modeled using the Lagrangian method and Euler angles [18, 19]. Other studies [20] have used the Gibbs-Appell equation, Kane equation, and Boltzmann-Hamel equation to model spherical robots.
This work discusses a spherical robot with a yoke-pendulum design, as depicted in figure 1. The robot comprises a spherical shell known as a hull, a platform known as a yoke, and a pendulum. This spherical robot's yoke is an internal platform that can accommodate cameras, various sensors, and the necessary electronics. Two motors drive the robot to generate a forward and steering motion. One of the motors rotates the hull at the required speed, allowing the robot to move forward and backward. The second motor rotates the pendulum with respect to the yoke. This motion perturbs the robot's center of mass to provide steering motion. The pendulum can only swing in the lower hemisphere because the upper hemisphere of the robot contains numerous components, including sensors mounted on the yoke.
Several studies have investigated the dynamics of the sphere with pendulum-based actuation and no-slip conditions. Euler Lagrangian-based dynamic models of just the decoupled forward-driving motion [14, 21] exclude the turning motion. Circular motion trajectories can be simulated with a model based on gyroscopic precession [22]. Other attempts have been made to simultaneously model the forward and steering motions of spherical robots using Euler-Lagrange-based decoupled modeling [23, 24]. However, these models have ignored the dynamic interaction between rotations along the lateral and longitudinal axes. In most of these works, mathematical modeling is often followed by a demonstration of straight lines and circular motion executed by spherical bots in simulation [16, 23, 25]. Because of the decoupled modeling approach, the models presented in these works give a bird's-eye view of the motion and do not capture the wobbly nature of the robot observed in practice. As a result, it is worthwhile to investigate the small
Figure 1: The spherical robot
amplitude lateral oscillations that spherical robots exhibit when moving at low speeds along simple paths such as a straight line or a circle.
Various experimental results have highlighted the wobbly nature of pendulum-based spherical robots for different pendulum angles [20, 26, 27]. Froberg and Smolic [28] note the robot's wobbly behavior and suggest using a PID regulator to reverse the robot's incorrect tipping with the pendulum. Schroll [29] modeled the wobbly behavior of a spherical robot driven by a two-degrees-of-freedom pendulum that controls both steering and forward motion. This work demonstrates that when a forward-moving spherical robot is steered by tilting the pendulum at a fixed angle, it moves in a wobbly circle with lateral oscillations and a radius of curvature that oscillates as the robot wobbles into and out of its curved trajectory. Such oscillations have been generally neglected in the literature due to their relatively smaller amplitude. These oscillations become crucial when a robot houses a camera/sensors to capture its surroundings. Specifically, we are interested in the robot's lateral oscillations or sideways fluctuations (perpendicular to the heading direction) as it leads to deviations in the robot's trajectory and shaky video feedback from the mounted camera on the robot's yoke.
The difficulty of a spherical robot's path planning and feedback control problems comes from its non-holonomic, underactuated, and non-chained properties [30, 31, 32]. Several innovative attempts have been made in this regard. A Recurrent Neural Network controller [33] that accounts for unknown uncertainties and control input saturation has been designed to control the motion of a spherical robot. Another controller has been designed based on Lyapunov's direct method and a neurodynamic technique [34] for the two-state trajectory tracking problem of a spherical robot. An alternative controller design for tracking trajectories is based on the back-stepping technique [35]. Hogan and Forbes [36] used two control inputs to drive a pendulum-actuated spherical robot and control its heading and forward speed. However, this work does not focus on limiting the robot's lateral oscillations.
Stabilizing the lateral oscillations or wobbling of spherical robots helps obtain sharper camera feedback and subsequently enables greater navigational autonomy [37] through sensor feedback. Since spherical robots are typically underactuated, the robot's controller can only control a limited number of outputs. Different combinations of control outputs have been used in the past to reduce oscillations when controlling such robots. One of the works [38] chooses the control outputs in such a way that the robot advances with a constant speed and minimal pendulum oscillations. Despite this controller design, the robot still exhibits small amplitude oscillations, and the pendulum's motion within the spherical shell is unrestricted, which is impractical due to space constraints. Other attempts control the forward speed and the lean angle, which is responsible for lateral oscillations, using sliding mode controllers [20, 39]. However, the limitation of sliding mode control is its tendency to oscillate due to switching around the sliding surface and its sensitivity to controller parameters. Other studies have attempted to stabilize the lean angle by proposing various model-based controller approaches, such as proportional-integral (PI) and Linear Quadratic Regulator (LQR) controllers. The effectiveness of these model-based controller designs is contingent upon the precision of the employed model. The use of a decoupled model [24, 40] and external noise to model the lateral oscillations [41] has led to inaccurate modeling of the lateral oscillations. Therefore, it is essential to properly model these oscillations and choose a set of control objectives that can control the speed and direction of the robot's motion while limiting wobbling and maintaining a stable pendulum swing within the available space.
This work makes the following contributions, which are organized into separate sections:
* We model the wobble present in the motion of a spherical robot by developing equations that account for the coupling of its forward and steering motions. Section 2 describes the method for modeling the underactuated system using Euler angles and the Lagrange-D'Alembert equations and takes the non-holonomic constraints into account.
* We provide mathematical formulation for the wobble amplitude, wobble frequency, radius of curvature, and the precession rate associated with the robot's circular motion. Section 3 derives these formulations by simplifying the robot dynamics. These formulations illustrate the relationship between the aforementioned quantities (characterizing the robot's circular motion) and the robot's speed and pendulum angle. It is observed that the radius of curvature of the robot can be controlled indirectly by adjusting the robot's speed and pendulum angle. The section concludes by comparing the system response generated by the original model with that generated by the simplified model equations, demonstrating their similarity at different speeds.
* We propose a feedback linearization-based controller design for controlling a pendulum-actuated spherical robot's heading direction via a turning maneuver while limiting its wobble and maintaining a stable and constrained pendulum motion. The control set-points are determined by calculating the desired robot speed and pendulum angle based on the required turning radius of the turning maneuver. Section 4 elaborates on the proposed control strategy.
## 2 Modeling and Dynamics
This section discusses the modeling of the pendulum-actuated spherical robot described in this work. The robot's pendulum is mounted on the yoke at the geometric center of the hull through a motor that provides a torque \(T_{p}\). The yoke and the hull are connected through a motor mounted on the yoke. This motor rotates the hull by providing a torque \(T_{s}\). The center of mass of the hull and the yoke are at the geometric center of the hull, while the pendulum's center of mass is at a distance \(r_{p}\) from the geometric center. We assume that the robot rolls without slipping and use the Lagrange D'Alembert formulation to determine the equations of motion for this system.
### Reference frames and Euler Angles
This work is based on four reference frames as shown in figure 2: a global inertial frame fixed to the ground (\(\mathbf{G}\)), and the three frames attached to the yoke (\(\mathbf{Y}\)), pendulum (\(\mathbf{P}\)) and the hull (\(\mathbf{H}\)) respectively with their origins at the geometric center of the hull. The yoke frame \(\mathbf{Y}\) is defined to meet the following constraints: (1) The \(z\)-axis of the yoke is always aligned with the \(z\)-axis of the hull, and (2) The \(x\)-axis of the yoke frame lies in the global \(XZ\)-plane, i.e., it always remains parallel to the ground.
The orientation of the hull at any given instant is characterized by three \(YXZ\) Euler angles \(\phi\), \(\theta\), and \(\psi\). The orientation of the pendulum requires another angle \(\beta\) due to an additional degree of freedom. The transformation between frames happens as follows :
* Intermediate Frame (\(\mathbf{I}\)): \(\mathbf{G}\) is rotated along its \(Y\)-axis by angle \(\phi\) to obtain \(\mathbf{I}\) as shown in Figure 3a.
* Yoke frame (\(\mathbf{Y}\)): \(\mathbf{I}\) is rotated along its local \(x\)-axis \(x_{i}\) by angle \(\theta\) to obtain \(\mathbf{Y}\) as shown in Figure 3b.
* Pendulum frame (\(\mathbf{P}\)): \(\mathbf{Y}\) is rotated along its local \(x\)-axis \(x_{y}\) by angle \(\beta\) to obtain \(\mathbf{P}\) as shown in Figure 3c. Note that the angle made by the pendulum with the vertical axis \(Y\) is (\(\beta+\theta\)).
* Hull frame (\(\mathbf{H}\)): \(\mathbf{Y}\) is rotated along its \(z\)-axis \(z_{y}\) by angle \(\psi\) to obtain \(\mathbf{H}\) as shown in Figure 3d.
### Robot kinematics
Rotation matrices mapping vectors from frames \(\mathbf{Y}\), \(\mathbf{P}\) and \(\mathbf{H}\) to \(\mathbf{G}\) are given by, \({}^{\mathbf{G}}\mathbf{R}_{\mathbf{Y}}=R_{y}(\phi)R_{x}(\theta)\), \({}^{\mathbf{G}}\mathbf{R}_{\mathbf{P}}=R_{y}(\phi)R_{x}(\theta)R_{x}(\beta)\) and \({}^{\mathbf{G}}\mathbf{R}_{\mathbf{H}}=R_{y}(\phi)R_{x}(\theta)R_{z}(\psi)\).
The angular velocities of different frames are obtained to be:
\[\mathbf{\dot{\mathbf{\omega_{Y}}}}=\left[\dot{\theta}\qquad\dot{\phi}\,\cos( \theta)\qquad-\dot{\phi}\,\sin(\theta)\right]^{\intercal} \tag{1}\]
\[\mathbf{P}\mathbf{\dot{\omega_{P}}}=\left[\dot{\beta}+\dot{\theta}\qquad\dot{ \phi}\,\cos(\beta+\theta)\qquad-\dot{\phi}\,\sin(\beta+\theta)\right]^{\intercal} \tag{2}\]
Figure 2: Frames and Euler angles (YXZ)
\[\vec{\mathbf{H}_{\omega_{H}}}=\begin{bmatrix}\dot{\theta}\,\cos(\psi)\,+\,\dot{ \phi}\,\cos(\theta)\,\sin(\psi)\\ \dot{\phi}\,\cos(\theta)\,\cos(\theta)\,-\,\dot{\theta}\,\sin(\psi)\\ \dot{\psi}\,-\,\dot{\phi}\,\sin(\theta)\end{bmatrix} \tag{3}\]
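As a quick symbolic sanity check of these expressions, the hull's body-frame angular velocity can be recovered from \({}^{\mathbf{G}}\mathbf{R}_{\mathbf{H}}\) through the identity \([{}^{\mathbf{H}}\vec{\omega}_{\mathbf{H}}]_{\times}={}^{\mathbf{G}}\mathbf{R}_{\mathbf{H}}^{\intercal}\,{}^{\mathbf{G}}\dot{\mathbf{R}}_{\mathbf{H}}\). The short SymPy sketch below assumes the standard right-handed elementary rotation matrices \(R_{x}\), \(R_{y}\), \(R_{z}\); it is an independent check, not code from the paper.

```python
import sympy as sp

t = sp.symbols('t')
phi, theta, psi = (sp.Function(name)(t) for name in ('phi', 'theta', 'psi'))

# Standard right-handed elementary rotations (assumed convention).
def Rx(a): return sp.Matrix([[1, 0, 0], [0, sp.cos(a), -sp.sin(a)], [0, sp.sin(a), sp.cos(a)]])
def Ry(a): return sp.Matrix([[sp.cos(a), 0, sp.sin(a)], [0, 1, 0], [-sp.sin(a), 0, sp.cos(a)]])
def Rz(a): return sp.Matrix([[sp.cos(a), -sp.sin(a), 0], [sp.sin(a), sp.cos(a), 0], [0, 0, 1]])

R = Ry(phi) * Rx(theta) * Rz(psi)            # G_R_H for the YXZ sequence
omega_hat = sp.simplify(R.T * R.diff(t))     # skew-symmetric matrix of the hull angular velocity
omega_H = sp.Matrix([omega_hat[2, 1], omega_hat[0, 2], omega_hat[1, 0]])
print(sp.trigsimp(omega_H))                  # should match Eq. (3) up to trigonometric simplification
```

Replacing \(R\) by \(R_{y}(\phi)R_{x}(\theta)\) or by \(R_{y}(\phi)R_{x}(\theta)R_{x}(\beta)\) in the same script similarly reproduces (1) and (2).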
The position vector corresponding to the center of mass of the yoke (\(\mathbf{Y}_{c}\)) and the hull (\(\mathbf{H}_{c}\)) as well as origin of pendulum frame (\(\mathbf{P}_{o}\)) are
\[{}^{\mathbf{G}}\vec{r}_{\mathbf{Y}_{c}}={}^{\mathbf{G}}\vec{r}_{\mathbf{H}_{c}}={}^{\mathbf{G}}\vec{r}_{\mathbf{P}_{o}}=[X\quad r_{h}\quad Z]^{\intercal} \tag{4}\]
where \(X\) and \(Z\) are the coordinates of the hull center along the global frame \(\mathbf{G}(OXYZ)\) and \(r_{h}\) is the radius of the sphere. Note that the origins of frames \(\mathbf{Y},\mathbf{P},\mathbf{H}\) coincide at the center of the robot. The position vector corresponding to the Pendulum's center of mass is given by:
\[{}^{\mathbf{G}}\vec{r}_{\mathbf{P}_{c}}={}^{\mathbf{G}}\vec{r}_{\mathbf{P}_{o}\mathbf{P}_{c}}+{}^{\mathbf{G}}\vec{r}_{\mathbf{P}_{o}} \tag{5}\]

\[{}^{\mathbf{G}}\vec{r}_{\mathbf{P}_{o}\mathbf{P}_{c}}={}^{\mathbf{G}}\mathbf{R}_{\mathbf{P}}\,{}^{\mathbf{P}}\vec{r}_{\mathbf{P}_{o}\mathbf{P}_{c}}\text{ and }{}^{\mathbf{P}}\vec{r}_{\mathbf{P}_{o}\mathbf{P}_{c}}=[0\quad-r_{p}\quad 0]^{\intercal} \tag{6}\]
where \(\mathbf{P}_{c}\) stands for pendulum's centre of mass, and \(r_{p}\) is the distance between \(\mathbf{P}_{c}\) and \(\mathbf{P}_{o}\).
Differentiating the position vectors, the linear velocities of the center of mass of the hull (\(\mathbf{H}_{c}\)), yoke (\(\mathbf{Y}_{c}\)) and the pendulum (\(\mathbf{P}_{c}\)) are obtained as:
\[{}^{\mathbf{G}}\vec{v}_{\mathbf{H}_{c}}={}^{\mathbf{G}}\vec{v}_{\mathbf{Y}_{c}}=\begin{bmatrix}\dot{X}&0&\dot{Z}\end{bmatrix}^{\intercal} \tag{7}\]

\[{}^{\mathbf{G}}\vec{v}_{\mathbf{P}_{c}}=\begin{bmatrix}\dot{X}\\ 0\\ \dot{Z}\end{bmatrix}+{}^{\mathbf{G}}\vec{\omega}_{\mathbf{P}}\times\left({}^{\mathbf{G}}\mathbf{R}_{\mathbf{P}}\begin{bmatrix}0\\ -r_{p}\\ 0\end{bmatrix}\right) \tag{8}\]
Figure 3: Steps for obtaining the frames
The Kinetic Energy of the system can then be written as:
\[K=\frac{1}{2}\left(m_{h}\|{}^{\mathbf{G}}\vec{v}_{\mathbf{H}_{c}}\|^{2}+m_{y}\|{}^{\mathbf{G}}\vec{v}_{\mathbf{Y}_{c}}\|^{2}+m_{p}\|{}^{\mathbf{G}}\vec{v}_{\mathbf{P}_{c}}\|^{2}\right)+\\ \frac{1}{2}\left({}^{\mathbf{H}}\vec{\omega}_{\mathbf{H}}^{\,\intercal}\,{}^{\mathbf{H}}I_{\mathbf{H}}\,{}^{\mathbf{H}}\vec{\omega}_{\mathbf{H}}+{}^{\mathbf{Y}}\vec{\omega}_{\mathbf{Y}}^{\,\intercal}\,{}^{\mathbf{Y}}I_{\mathbf{Y}}\,{}^{\mathbf{Y}}\vec{\omega}_{\mathbf{Y}}+{}^{\mathbf{P}}\vec{\omega}_{\mathbf{P}}^{\,\intercal}\,{}^{\mathbf{P}}I_{\mathbf{P}}\,{}^{\mathbf{P}}\vec{\omega}_{\mathbf{P}}\right) \tag{9}\]
where \(m_{J}\) denotes the mass of body \(\mathbf{J}\) and \({}^{\mathbf{J}}I_{\mathbf{J}}\) denotes the mass moment of inertia matrix of body \(\mathbf{J}\) calculated in its own frame of reference. Here, \({}^{\mathbf{H}}I_{\mathbf{H}}=diag(I_{h},I_{h},I_{h})\), \({}^{\mathbf{Y}}I_{\mathbf{Y}}=diag(I_{y},2I_{y},I_{y})\) and \({}^{\mathbf{P}}I_{\mathbf{P}}=diag(I_{p},0,I_{p})\), where \(I_{h}=\frac{2}{3}m_{h}r_{h}^{2}\), \(I_{y}=\frac{1}{4}m_{y}r_{h}^{2}\), \(I_{p}=\frac{1}{3}m_{p}r_{p}^{2}\).
The Potential Energy of the system is given by
\[V=m_{p}g\left({}^{\mathbf{G}}\vec{r}_{\mathbf{P}_{c}}\cdot[0\;1\;0]^{\intercal}\right)+m_{y}g\left({}^{\mathbf{G}}\vec{r}_{\mathbf{Y}_{c}}\cdot[0\;1\;0]^{\intercal}\right) \tag{10}\]
where the datum point for potential energy is chosen as the geometric centre of the sphere.
### Non-holonomic Constraints
The constraint of rolling without slipping is non-holonomic and is given by
\[{}^{\mathbf{G}}\vec{v}_{\mathbf{H}_{c}}={}^{\mathbf{G}}\vec{\omega}_{\mathbf{H}}\times{}^{\mathbf{G}}\vec{r}_{\mathbf{H}} \tag{11}\]

Here \({}^{\mathbf{G}}\vec{r}_{\mathbf{H}}=[0\quad r_{h}\quad 0]^{\intercal}\) denotes the vector (written in the global frame) from the stationary point on the hull in contact with the ground to the sphere's center. Equation (11) is simplified further as
\[\dot{X}=r_{h}(\dot{\theta}\sin(\phi)-\dot{\psi}\cos(\phi)\cos( \theta)) \tag{12}\]
\[\dot{Z}=r_{h}(\dot{\theta}\cos(\phi)+\dot{\psi}\sin(\phi)\cos( \theta)) \tag{13}\]
These constraint equations are written in the form
\[a_{1}^{\intercal}\cdot\dot{q}=0\;,\;a_{2}^{\intercal}\cdot\dot{q}=0;\text{ where generalized coordinates }q=[X\quad Z\quad\phi\quad\theta\quad\psi\quad\beta]^{\intercal} \tag{14}\]
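Explicitly (each constraint vector being determined only up to an overall scaling), (12) and (13) give

\[a_{1}=\begin{bmatrix}1&0&0&-r_{h}\sin(\phi)&r_{h}\cos(\phi)\cos(\theta)&0\end{bmatrix}^{\intercal},\qquad a_{2}=\begin{bmatrix}0&1&0&-r_{h}\cos(\phi)&-r_{h}\sin(\phi)\cos(\theta)&0\end{bmatrix}^{\intercal}.\]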
Next, the Lagrangian model is formulated utilising the deduced constraints.
### Lagrange D'Alembert equations
The Lagrange D'Alembert equations for non-holonomic systems are given by
\[\frac{d}{dt}(\frac{\partial L}{\partial\dot{q}})-\frac{\partial L }{\partial q}=Q+\lambda_{1}a_{1}+\lambda_{2}a_{2} \tag{15}\]
where the generalized forces \(Q\) = \([0\;0\;0\;0\;T_{s}\;T_{p}]^{\intercal}\). \(T_{s}\) is the torque applied for forward motion, and \(T_{p}\) is the torque applied on the pendulum. \(\lambda_{1}\) and \(\lambda_{2}\) are Lagrange multipliers and \(a_{1}\) and \(a_{2}\) are obtained by simplifying equation (14).
Equations (12), (13) and (15) render the dynamic model of the robot and are simplified to represent the system in control affine form as \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})+\mathbf{G}(\mathbf{x})\mathbf{u}\). Here, \(\mathbf{x}\) is the state vector given by \(\left[\phi\quad\theta\quad\psi\quad\beta\quad X\quad Z\quad\dot{\phi}\quad \dot{\theta}\quad\dot{\psi}\quad\dot{\beta}\quad\dot{X}\quad\dot{Z}\right]^{\intercal}\), \(\mathbf{f}\) is a smooth vector field, \(\mathbf{G}\) is a \(12\times 2\) matrix whose columns are smooth vector fields \(\mathbf{G}_{i,j}\) and \(\mathbf{u}\) is the control input vector given by \([T_{s}\quad T_{p}]^{\intercal}\).
The model upon simplification is obtained as:
\[\begin{bmatrix}\dot{\phi}\\ \dot{\theta}\\ \dot{\psi}\\ \dot{\beta}\\ \dot{X}\\ \dot{Z}\\ \ddot{\phi}\\ \ddot{\theta}\\ \ddot{\psi}\\ \ddot{\beta}\\ \ddot{X}\\ \ddot{Z}\end{bmatrix}=\begin{bmatrix}\dot{\phi}\\ \dot{\theta}\\ \dot{\psi}\\ \dot{\beta}\\ \dot{X}\\ \dot{Z}\\ f_{7}(\mathbf{x})\\ f_{8}(\mathbf{x})\\ f_{9}(\mathbf{x})\\ f_{10}(\mathbf{x})\\ f_{11}(\mathbf{x})\\ f_{12}(\mathbf{x})\end{bmatrix}+\begin{bmatrix}0&0\\ 0&0\\ 0&0\\ 0&0\\ 0&0\\ 0&0\\ G_{7,1}(\mathbf{x})&0\\ 0&G_{8,2}(\mathbf{x})\\ G_{9,1}(\mathbf{x})&0\\ 0&G_{10,2}(\mathbf{x})\\ G_{11,1}(\mathbf{x})&G_{11,2}(\mathbf{x})\\ G_{12,1}(\mathbf{x})&G_{12,2}(\mathbf{x})\end{bmatrix}\begin{bmatrix}T_{s}\\ T_{p}\end{bmatrix} \tag{16}\]
Note that this system is under-actuated due to just two control knobs \(T_{s}\) and \(T_{p}\). As a result, only a limited number of outputs can be controlled.
### System response for a circular motion
The dynamics of the spherical robot described in section 2.4 are simulated using the ODE15s solver built into MATLAB. We simulate the steady-state behavior of the robot's circular motion. This configuration is attained when the pendulum's angle and robot's forward speed are held constant, and the remaining state variables are initialized as zero.
We conduct our system analysis for relatively smaller pendulum angles so that the robot does not lean excessively during steering motion. A steep lean angle during the steering motion would laterally tilt the camera mounted on the robot. Oscillations in this tilt would cause the camera's feedback to be shaky. To circumvent this problem, we focus our analysis on a pendulum's swing of about 0\({}^{\circ}\) to 15\({}^{\circ}\) since this is a relatively small inclination. Within this range, simulations are roughly divided into two groups: low pendulum angles of 5\({}^{\circ}\) and high pendulum angles of 15\({}^{\circ}\).
We further categorize operation speeds as fast or slow. The system response is depicted in Figure 4, in which the robot's center of mass follows wobbly circular paths while moving forward at varying speeds and maintaining a pendulum angle (\(\beta\)) of 15\({}^{\circ}\). According to this figure, the robot's wobble is minimal at 10 rad/s, and the robot follows a nearly smooth circular trajectory; thus, for our simulations, we consider this a high speed. Similarly, we consider 1 rad/s to be a low speed because the effect of wobbling caused by center of mass perturbation is clearly visible in this case.
We further investigate the system response for the rate of change of heading angle \(\dot{\phi}\), rate of change of lean angle \(\dot{\theta}\), forward speed \(\dot{\psi}\), and lean angle \(\theta\) by simulating the steady-state circular motion of the robot with the following four configurations of pendulum angle \(\beta\) and forward rolling speed \(|\dot{\psi}|\): the four combinations of a low (\(5^{\circ}\)) or high (\(15^{\circ}\)) pendulum angle with a low (1 rad/s) or high (10 rad/s) forward speed.
The following observations can be made:
Figure 4: Path followed by robot’s COM while moving forward at different speeds \(|\dot{\psi}|\) with \(\beta=15^{\circ}\)
* Lean angle \(\theta\) is very small in magnitude (refer to panels (d) of the system-response figures)
* Magnitudes of \(\dot{\phi}\) and \(\dot{\theta}\) are small compared to \(\dot{\psi}\) (refer to figures 5, 7, 8)
* \(\dot{\psi}\) remains approximately constant (refer to panels (c) of the system-response figures)
## 3 Steady state analysis of wobbly circular motion
This section simplifies and linearizes the dynamics of the circular steady-state motion whose configuration was discussed in section 2.5. This leads to the development of expressions for wobble amplitude, wobble frequency, precession rate, and radius of curvature. Then, the system response of the original system is compared to the system response of the simplified system.
### Model simplification
The 3D mathematical model of the robot derived in section 2 is highly complex and nonlinear. The constituent equations of the model can be simplified by taking the following approximations:
* Pendulum angle \(\beta\) is held constant, i.e. \(\dot{\beta}\approx 0\), \(\ddot{\beta}\approx 0\)
* Centre of mass of yoke is assumed to be at the hull centre, i.e. \(r_{y}=0\).
Using these approximations, equations (12), (13) and (15) reduce to the following three equations:
\[\bigg{[}I_{h}+\frac{I_{p}}{2}+\frac{3I_{y}}{2}+\frac{I_{y}\cos(2 \theta)}{2}-\frac{I_{p}\cos(2\beta+2\theta)}{2}+\frac{m_{p}r_{p}^{2}}{2}(1- \cos(2\beta+2\theta))\bigg{]}\ddot{\phi}-\bigg{[}I_{h}\sin(\theta)\\ -\frac{m_{p}r_{p}r_{h}}{2}(\sin(\beta)+\sin(\beta+2\theta))\bigg{]} \ddot{\psi}-\bigg{[}I_{h}\cos(\theta)-\frac{m_{p}r_{p}r_{h}}{2}(\cos(\beta+2 \theta)-\cos(\beta))\bigg{]}\dot{\psi}\dot{\theta}\\ -\bigg{[}I_{y}\sin(2\theta)-I_{p}\sin(2\beta+2\theta)-m_{p}r_{p} ^{2}\sin(2\beta+2\theta)+m_{p}r_{p}r_{h}\sin(\beta+\theta)\bigg{]}\dot{\phi} \dot{\theta}=0 \tag{17}\]
\[\bigg{[}(I_{p}+I_{h}+I_{y}+m_{p}r_{p}^{2}+(m_{p}+m_{y}+m_{h})r_{h }^{2}-2m_{p}r_{p}r_{h}\cos(\beta+\theta)\bigg{]}\ddot{\theta}+\bigg{[}m_{p}r_{ p}r_{h}\sin(\beta+\theta)\bigg{]}\dot{\theta}^{2}\\ +\bigg{[}I_{h}\cos(\theta)+(m_{p}+m_{y}+m_{h})r_{h}^{2}\cos(\theta )-\frac{m_{p}r_{p}r_{h}}{2}(\cos(\beta+2\theta)+\cos(\beta))\bigg{]}\dot{\phi} \dot{\psi}+m_{p}r_{p}g\sin(\beta+\theta)\\ +\bigg{[}\frac{I_{y}\sin(2\theta)}{2}-\frac{I_{p}\sin(2\beta+2 \theta)}{2}-\frac{m_{p}r_{p}^{2}\sin(2\beta+2\theta)}{2}+m_{p}r_{p}r_{h}\sin( \beta+\theta)\bigg{]}\dot{\phi}^{2}=0 \tag{18}\]
\[\bigg{[}I_{h}+(m_{p}+m_{y}+m_{h})r_{h}^{2}\cos^{2}(\theta)\bigg{]} \ddot{\psi}-\bigg{[}\frac{(m_{p}+m_{y}+m_{h})r_{h}^{2}}{2}\sin(2\theta)\bigg{]} \dot{\psi}\dot{\theta}\\ -\bigg{[}I_{h}\cos(\theta)+(m_{p}+m_{y}+m_{h})r_{h}^{2}\cos( \theta)-2m_{p}r_{p}r_{h}\cos(\beta)\cos^{2}(\theta)+2m_{p}r_{p}r_{h}\sin(\beta )\cos(\theta)\sin(\theta)\bigg{]}\dot{\phi}\dot{\theta}\\ -\bigg{[}I_{h}\sin(\theta)-m_{p}r_{p}r_{h}\sin(\beta)\cos^{2}( \theta)-m_{p}r_{p}r_{h}\cos(\beta)\cos(\theta)\sin(\theta)\bigg{]}\ddot{\phi} =0 \tag{19}\]
Equations (17), (18) and (19) constitute the mathematical model of the system at all pendulum angles and all forward speeds. These equations can be further simplified by taking the following approximations:
* Pendulum angle \(\beta\) is small in magnitude (\(\sin\beta\approx\beta\) and \(\cos\beta\approx 1\)).
* Lean angle \(\theta\) is small in magnitude (\(\sin\theta\approx\theta\) and \(\cos\theta\approx 1\)).
* Forward speed \(\dot{\psi}\) is constant.
* \(\dot{\psi}\) is larger than \(\dot{\phi}\) and \(\dot{\theta}\). Hence \(\dot{\phi}\dot{\theta}\), \(\dot{\phi}^{2}\) and \(\dot{\theta}^{2}\) can be neglected in comparison with the terms containing \(\dot{\psi}\).
These approximations are used to further simplify equations (17), (18), and (19) to give:
\[\ddot{\phi}=\frac{I_{h}}{I_{h}+2I_{y}}\dot{\psi}\dot{\theta} \tag{20}\]
\[\bigg{[}I_{p}+I_{h}+I_{y}+m_{p}r_{p}^{2}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-2m_{p}r_{ p}r_{h}\bigg{]}\ddot{\theta}\\ +\bigg{[}I_{h}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-m_{p}r_{p}r_{h}\bigg{]} \dot{\phi}\dot{\psi}+m_{p}r_{p}g(\beta+\theta)=0 \tag{21}\]
\[\ddot{\psi}=0 \tag{22}\]
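To make the simplified model concrete, the short script below (an illustrative sketch; the masses, radii and inertias are placeholder values, not parameters of the robot described in this work) numerically integrates equations (20)-(22) at a constant pendulum angle and forward speed and reports the resulting lean-angle oscillation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative only, not taken from this study)
m_p, m_y, m_h = 2.0, 1.0, 3.0      # pendulum, yoke, hull masses [kg]
r_p, r_h = 0.10, 0.15              # pendulum offset and hull radius [m]
I_p, I_y, I_h = 0.02, 0.03, 0.05   # moments of inertia [kg m^2]
g = 9.81
beta = np.deg2rad(15.0)            # constant pendulum angle
psi_dot = 1.0                      # constant forward speed [rad/s], eq. (22)

M_theta = I_p + I_h + I_y + m_p*r_p**2 + (m_p + m_y + m_h)*r_h**2 - 2*m_p*r_p*r_h
C_phi = I_h + (m_p + m_y + m_h)*r_h**2 - m_p*r_p*r_h

def rhs(t, x):
    # State x = [theta, theta_dot, phi_dot]; psi_dot stays constant by eq. (22).
    theta, theta_dot, phi_dot = x
    phi_ddot = I_h / (I_h + 2*I_y) * psi_dot * theta_dot                        # eq. (20)
    theta_ddot = -(C_phi*phi_dot*psi_dot + m_p*r_p*g*(beta + theta)) / M_theta  # eq. (21)
    return [theta_dot, theta_ddot, phi_ddot]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0, 0.0], max_step=1e-2)
print("peak-to-peak lean angle [deg]:", np.rad2deg(sol.y[0].max() - sol.y[0].min()))
```

With such placeholder values the lean angle oscillates about a non-zero mean, which is exactly the wobble analysed in the next subsection.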
### Formulation of circular motion characteristics
A spherical robot's circular motion characteristics include precession rate, wobble amplitude, wobble frequency, and radius of curvature. The expressions for these quantities can now be obtained using the simplified model.
**Precession rate**: Using constant \(\dot{\psi}\), equation (20) can be integrated to obtain \(\dot{\phi}\).
\[\dot{\phi}=\frac{I_{h}}{I_{h}+2I_{y}}\dot{\psi}\theta+c \tag{23}\]
where \(c\) is a constant of integration. We assume that the robot starts with \(\theta=0\) and \(\dot{\phi}=0\), which gives \(c=0\).
**Wobbling**: From equations (21) and (23), we get
\[\bigg{[}I_{p}+I_{h}+I_{y}+m_{p}r_{p}^{2}+(m_{p}+m_{y}+m_{h})r_{h}^ {2}-2m_{p}r_{p}r_{h}\bigg{]}\ddot{\theta}\\ +\bigg{[}\frac{I_{h}(I_{h}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-m_{p}r_{ p}r_{h})}{I_{h}+2I_{y}}\dot{\psi}^{2}+m_{p}r_{p}g\bigg{]}\theta=-m_{p}r_{p}g\beta \tag{24}\]
Equation (24) is a standard second-order equation, the solution of which is given by:
\[\theta=A(1-\cos(\omega t)) \tag{25}\]
where \(A\) is the amplitude of oscillations and \(\omega\) is the frequency of oscillations given by:
\[A=\frac{-m_{p}r_{p}g\beta}{\frac{I_{h}(I_{h}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-m_{ p}r_{p}r_{h})}{I_{h}+2I_{y}}\dot{\psi}^{2}+m_{p}r_{p}g} \tag{26}\]
\[\omega=\sqrt{\frac{m_{p}r_{p}g+\frac{I_{h}(I_{h}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-m_{p}r_{p}r_{h})}{I_{h}+2I_{y}}\dot{\psi}^{2}}{I_{p}+I_{h}+I_{y}+m_{p}r_{p}^{2}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-2m_{p}r_{p}r_{h}}} \tag{27}\]
**Radius of curvature**: For a sphere moving in a circular trajectory with pure rolling, the radius of curvature \(\rho\) can be approximated as
\[Time=\frac{2\pi}{\dot{\phi}_{mean}}=\frac{2\pi\rho}{\dot{\psi}r_{h}}\implies \rho=\frac{\dot{\psi}r_{h}}{\dot{\phi}_{mean}} \tag{28}\]
\[\rho=\frac{-r_{h}\bigg{(}[I_{h}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-m_{p}r_{p}r_{h}]I_ {h}\dot{\psi}^{2}+[I_{h}+2I_{y}]m_{p}r_{p}g\bigg{)}}{m_{p}r_{p}gI_{h}\beta} \tag{29}\]
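The characteristics derived above are easy to evaluate numerically. The helper below is a small sketch (the symbols follow equations (23), (26), (27) and (28)-(29); the parameter values in the example call are placeholders, not values from this work) that returns the wobble amplitude, wobble frequency, mean precession rate and radius of curvature for a given \(\beta\) and \(\dot{\psi}\).

```python
import numpy as np

def circular_motion_characteristics(beta, psi_dot, m_p, m_y, m_h, r_p, r_h,
                                    I_p, I_y, I_h, g=9.81):
    """Evaluate eqs. (26), (27), (23) and (28)-(29) for constant beta and psi_dot."""
    m_tot = m_p + m_y + m_h
    # Effective inertia multiplying theta_ddot in eq. (24)
    M_theta = I_p + I_h + I_y + m_p*r_p**2 + m_tot*r_h**2 - 2*m_p*r_p*r_h
    # Effective stiffness multiplying theta in eq. (24)
    K_theta = I_h*(I_h + m_tot*r_h**2 - m_p*r_p*r_h)/(I_h + 2*I_y)*psi_dot**2 + m_p*r_p*g

    A = -m_p*r_p*g*beta / K_theta                 # wobble amplitude, eq. (26)
    omega = np.sqrt(K_theta / M_theta)            # wobble frequency, eq. (27)
    phi_dot_mean = I_h/(I_h + 2*I_y)*psi_dot*A    # mean precession rate, eq. (23) with theta_mean = A
    rho = psi_dot*r_h/phi_dot_mean                # radius of curvature, eq. (28), equivalent to eq. (29)
    return A, omega, phi_dot_mean, rho

# Example call with placeholder parameters
A, omega, phi_dot_mean, rho = circular_motion_characteristics(
    beta=np.deg2rad(5.0), psi_dot=1.0,
    m_p=2.0, m_y=1.0, m_h=3.0, r_p=0.10, r_h=0.15,
    I_p=0.02, I_y=0.03, I_h=0.05)
print(np.rad2deg(A), omega, phi_dot_mean, rho)
```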
The dynamics of the circular steady-state motion can be further simplified in two regimes of forward speed: low forward speeds and high forward speeds.
#### 3.2.1 Low forward speeds
The ratio of \(r_{h}\dot{\psi}^{2}\) to \(g\) is relatively small at low forward speeds. Consequently, we can neglect the terms containing \(r_{h}\dot{\psi}^{2}\). This approximation further simplifies the equations (26), (27), and (29) to yield the following expressions:
**Wobbling**: The expressions for wobble amplitude and frequency become:
\[A=-\beta \tag{30}\]
\[\omega=\sqrt{\frac{m_{p}r_{p}g}{I_{p}+I_{h}+I_{y}+m_{p}r_{p}^{2}+(m_{p}+m_{y} +m_{h})r_{h}^{2}-2m_{p}r_{p}r_{h}}} \tag{31}\]
**Radius of curvature**:
\[\rho=\frac{-r_{h}(I_{h}+2I_{y})}{I_{h}\beta} \tag{32}\]
#### 3.2.2 High forward speeds
At high forward speeds, the ratio of \(g\) to \(r_{h}\dot{\psi}^{2}\) is relatively small. Consequently, we can neglect the terms containing \(g\). This approximation further simplifies the equations (26), (27), and (29) to yield the following expressions:
**Wobbling**: The expressions for wobble amplitude and frequency become:
\[A=-\frac{(I_{h}+2I_{y})(m_{p}r_{p}g)\beta}{I_{h}(I_{h}+(m_{p}+m_{y}+m_{h})r_{h} ^{2}-m_{p}r_{p}r_{h})\dot{\psi}^{2}} \tag{33}\]
\[\omega=\dot{\psi}\sqrt{\frac{I_{h}(I_{h}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-m_{p}r_{ p}r_{h})}{(I_{h}+2I_{y})(I_{p}+I_{h}+I_{y}+m_{p}r_{p}^{2}+(m_{p}+m_{y}+m_{h})r_{h} ^{2}-2m_{p}r_{p}r_{h})}} \tag{34}\]
**Radius of curvature**:
\[\rho=-\bigg{(}\frac{I_{h}+(m_{p}+m_{y}+m_{h})r_{h}^{2}-m_{p}r_{p}r_{h}}{m_{p} r_{p}g\beta}\bigg{)}r_{h}\dot{\psi}^{2} \tag{35}\]
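As a quick consistency check, the limiting expressions can be compared numerically against the full formulas (26), (27) and (29); the sketch below (again with placeholder parameter values) evaluates the low-speed limits (30)-(32) and the high-speed limits (33)-(35).

```python
import numpy as np

params = dict(m_p=2.0, m_y=1.0, m_h=3.0, r_p=0.10, r_h=0.15,
              I_p=0.02, I_y=0.03, I_h=0.05, g=9.81)
beta = np.deg2rad(5.0)

def limiting_characteristics(beta, psi_dot, m_p, m_y, m_h, r_p, r_h, I_p, I_y, I_h, g):
    """Return (A, omega, rho) in the low-speed and high-speed limits."""
    m_tot = m_p + m_y + m_h
    M_theta = I_p + I_h + I_y + m_p*r_p**2 + m_tot*r_h**2 - 2*m_p*r_p*r_h
    C = I_h*(I_h + m_tot*r_h**2 - m_p*r_p*r_h)/(I_h + 2*I_y)
    low = (-beta,                                         # eq. (30)
           np.sqrt(m_p*r_p*g/M_theta),                    # eq. (31)
           -r_h*(I_h + 2*I_y)/(I_h*beta))                 # eq. (32)
    high = (-(I_h + 2*I_y)*m_p*r_p*g*beta
            / (I_h*(I_h + m_tot*r_h**2 - m_p*r_p*r_h)*psi_dot**2),   # eq. (33)
            psi_dot*np.sqrt(C/M_theta),                               # eq. (34)
            -(I_h + m_tot*r_h**2 - m_p*r_p*r_h)*r_h*psi_dot**2
            / (m_p*r_p*g*beta))                                       # eq. (35)
    return low, high

for psi_dot in (1.0, 10.0):
    print(psi_dot, limiting_characteristics(beta, psi_dot, **params))
```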
### Analysis of circular motion characteristics
In this section, we analyze the expressions for wobble amplitude and frequency, precession rate, and radius of curvature to comprehend their nature, dependence on parameters such as \(\beta\) and \(\dot{\psi}\), and compare them to the system response derived from the original model.
**Wobbling**: The spherical robot's wobbling or lateral oscillations are characterized by the lean angle \(\theta\). Based on the expression (25), \(\theta\) is sinusoidal with oscillations centered away from the origin. The mean value of \(\theta\) equals the amplitude of oscillations. This suggests that the magnitude of \(\theta\) oscillations around the mean position increases as the mean value of \(\theta\) moves further from the origin.
The relationship between wobble frequency and forward speed at a constant pendulum angle of \(5^{\circ}\) is depicted in figure 8(a). The wobble frequency is found to be nearly constant at low forward speeds, a behavior confirmed by the equation (31). From equation (34) and figure 8(a), it can be seen that at high forward speeds, the wobble frequency increases approximately linearly with forward speed. The figure also illustrates the similarity between the system response for wobble frequency generated by the original model and the simplified model equation (27) at different speeds.
The relationship between wobble frequency and pendulum angle for a range of constant forward speeds is shown graphically in figure 9(a), which is based on equation (27). It can be inferred from the figure that for any constant forward speed, the wobble frequency is independent of the pendulum angle. We can also observe that the wobble frequency is greater at higher speeds for any given pendulum angle.
The relationship between wobble amplitude and forward speed for a constant pendulum angle of \(5^{\circ}\) is illustrated in figure 8(b). This figure and equation (26) demonstrate that the amplitude of oscillations in \(\theta\) varies inversely with \(\dot{\psi}^{2}\). Figure 8(b) also demonstrates that lateral oscillations are significantly reduced at high speeds, a behavior confirmed by the equation (33). The figure also illustrates the similarity between the system response for the amplitude of lateral oscillations generated by the original model and the simplified model equation (26) at different speeds.
The relationship between wobble amplitude and pendulum angle for a range of constant forward speeds is shown graphically in figure 9(b), which is based on equation (26). For constant forward speed, the wobble amplitude is directly proportional to pendulum angle \(\beta\). We can also observe that wobble amplitude decreases with increasing speed for any given pendulum angle.
**Radius of curvature**: According to the equation (29), the radius of curvature depends on the angle of the pendulum and forward speed. If we control these two variables, we can indirectly affect the radius of curvature of the robot. This can facilitate the robot's movement along a curved path.
Figure 8(c) illustrates the relationship between the radius of curvature and forward speed for a constant pendulum angle of \(5^{\circ}\). At low forward speeds, the radius of curvature is observed to be nearly constant, a behavior confirmed by the equation (32). The expression (35) for the radius of curvature at high speed is comparable to the radius of curvature expressions reported in the literature [42, 43] for all speed ranges. Equation (35) demonstrates that the radius of curvature at high forward speeds is directly proportional to the square of the forward speed. Consequently, higher speeds result in a larger turning radius. Figure 8(c) also compares the
radius of curvature values generated by the original model with those generated by the simplified model (refer equation (29)).
The relationship between the radius of curvature and pendulum angle for a range of constant forward speeds is shown graphically in figure 9(c), which is based on Equation (29). The radius of curvature is inversely proportional to the pendulum angle for constant forward speed. Therefore, the robot makes sharper turns as the pendulum tilts further. For any given pendulum angle, we can see that the radius of curvature increases with higher speed. We can also observe that the radius of curvature varies significantly as the pendulum's angle changes at high speeds.
**Precession rate**: The robot's precession rate indicates how quickly it completes one complete revolution as it moves in a circle. According to the equation (23), the magnitude of the precession rate is directly proportional to the lean angle. Consequently, the precession rate oscillates when the robot's motion involves lateral oscillations. This section analyzes the precession rate's mean value to comprehend its behavior in relation to forward velocity and pendulum angle.
The relationship between the mean precession rate and forward speed for a constant pendulum angle of \(5^{\circ}\) is illustrated in figure 8(d). Figure 8(d) and equation (23) illustrate how the mean precession rate increases at low forward speeds but is inversely proportional to \(\dot{\psi}\) at high forward speeds. Equation (28) represents the relationship between the precession rate, the radius of curvature, and forward speeds. As the radius of curvature remains nearly constant at low speeds (refer figure 8(c)), we observe a linear increase in the value of the mean precession rate as the speed increases. However, as the radius of curvature increases at a nonlinear rate at high speeds, the mean precession rate decreases as the speed increases linearly. Figure 8(d) also illustrates the similarity between the system response for the mean precession rate generated by the original model and the simplified model equation (23) at different speeds.
The relationship between the mean precession rate and pendulum angle for a range of constant forward speeds is shown graphically in figure 9(d), which is based on equation (23). The figure illustrates that the mean precession rate is directly proportional to the pendulum angle. Contrarily, the mean precession rate does not increase or decrease monotonically with speed for a given pendulum angle.
## 4 Controller design
Figure 9: Circular motion characteristics vs forward speed \(\dot{\psi}\) at a constant pendulum angle (\(\beta\) = \(5^{\circ}\))
Figure 10: Circular motion characteristics vs pendulum angle \(\beta\) at different forward speeds \(\dot{\psi}\)
The spherical robot can be commanded to execute specific maneuvers, such as turning by controlling the pendulum angle \(\beta\), or adjusting the forward speed by controlling \(\dot{\psi}\), in a teleoperation setup [26]. Controlling the robot's speed and pendulum angle, as discussed in section 3.3, results in indirect control over the radius of curvature as per equation (29). It is evident from section 2.5 that the robot does not move in an exact circular arc but rather wobbles. Therefore, it is necessary to stabilize this wobbly behavior, which is characterized by the oscillations of the lean angle \(\theta\). In this section, we present a controller design for semi-autonomous robot operation to achieve the following control objectives:
1. Heading control: Moving the robot along the desired curve or line by changing the turning radius
2. Wobble control: Limiting lateral oscillations to obtain sharp feedback from the robot's camera
For the robot's lateral oscillations to stabilize, the lean angle \(\theta\) must remain fixed (\(\dot{\theta}_{des}\) = 0). To achieve heading control, the desired value of the pendulum angle \(\beta_{des}\) would be determined based on the required radius of curvature for a given operational speed \(\dot{\psi}\). The robot's \(\dot{\psi}_{des}\) will be set according to the requirements of the teleoperator. For example, if careful surveillance is required, the teleoperator may need to move the robot slowly along a path. Rapidly reaching a destination may necessitate traveling at a faster rate. It should be noted that \(\beta_{des}\) should be small due to space constraints and the need to maintain the robot's upright position for enhanced camera coverage. The following sections describe the controller's strategy for achieving these objectives and the results it achieved.
### Approach
With only two inputs, \(T_{s}\) and \(T_{p}\), there are only two controllable quantities. Therefore, we can choose between the following outputs, which are related to forward motion (speed) and steering (pendulum angle/lean angle rate):
\[\mathbf{y}=\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}=\begin{bmatrix}\dot{\psi}\\ \beta\end{bmatrix}\text{ or }\mathbf{y}=\begin{bmatrix}y_{1}\\ y_{3}\end{bmatrix}=\begin{bmatrix}\dot{\psi}\\ \dot{\theta}\end{bmatrix} \tag{36}\]
Similar to the input-output feedback linearization approach, we repeatedly differentiate the candidate output functions \(y_{1}\), \(y_{2}\) and \(y_{3}\) until the input \(\mathbf{u}\) appears, obtaining:
\[\begin{bmatrix}\dot{y}_{1}\\ \ddot{y}_{2}\\ \dot{y}_{3}\end{bmatrix}=\begin{bmatrix}\ddot{\psi}\\ \ddot{\beta}\\ \ddot{\theta}\end{bmatrix}=\begin{bmatrix}f_{9}(\mathbf{x})\\ f_{10}(\mathbf{x})\\ f_{8}(\mathbf{x})\end{bmatrix}+\begin{bmatrix}G_{9,1}(\mathbf{x})&0\\ 0&G_{10,2}(\mathbf{x})\\ 0&G_{8,2}(\mathbf{x})\end{bmatrix}\begin{bmatrix}T_{s}\\ T_{p}\end{bmatrix} \tag{37}\]
It can be observed from equation (37) that \(\ddot{\psi}\) depends only on \(T_{s}\), whereas \(\ddot{\beta}\) and \(\ddot{\theta}\) depend only on \(T_{p}\).
As input \(T_{s}\) is exclusive to a particular output \(\ddot{\psi}\), we design the speed control torque (\(T_{s}\)) separately using a proportional controller with high gain \(K_{p,\dot{\psi}}\) as shown:
\[T_{s}=K_{p,\dot{\psi}}(\dot{\psi}_{des}-\dot{\psi}) \tag{38}\]
To control pendulum angle and wobble, the controller input \(T_{p}\) is designed as a linear combination of the required torques for \(\beta\) and \(\dot{\theta}\) control. Let \(T_{p,\beta}\) be the pendulum control torque required to maintain a \(\beta_{des}\). Let \(T_{p,\dot{\theta}}\) be the wobble control torque required to maintain \(\dot{\theta}_{des}=0\). Then a net torque \(T_{p}\) can be found as a weighted sum of wobble control torque (\(T_{p,\dot{\theta}}\)) and pendulum control torque (\(T_{p,\beta}\)) as shown:
\[T_{p}=\gamma T_{p,\dot{\theta}}+\delta T_{p,\beta} \tag{39}\]
Wobble control torque (\(T_{p,\dot{\theta}}\)) design: We implement the underlying concepts of input-output feedback linearization via static feedback to cancel nonlinear terms appearing in \(\dot{y}_{3}\) and then apply a proportional controller term \(v_{\theta}\).
\[T_{p,\dot{\theta}}=\left[\frac{1}{G_{8,2}(\mathbf{x})}(-f_{8}(\mathbf{x})+v_{ \theta})\right]\text{ where }v_{\theta}=K_{p,\dot{\theta}}(\dot{\theta}_{des}-\dot{\theta}) \tag{40}\]
Pendulum control torque (\(T_{p,\beta}\)) design: We implement a combination of a feedback term \(v_{\beta}\) and a feedforward term to counteract the gravitational torque. \(v_{\beta}\) consists of a proportional-derivative (PD) controller with high gains \(K_{p,\beta}\) and \(K_{d,\beta}\) as shown:
\[T_{p,\beta}=[m_{p}gr_{p}\sin(\beta+\theta)+v_{\beta}]\text{ where }v_{\beta}=K_{p,\beta}(\beta_{des}-\beta)+K_{d,\beta}(\dot{\beta}_{des}-\dot{ \beta}) \tag{41}\]
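Putting equations (38)-(41) together, a minimal sketch of the torque computation is given below. The gain values, the state layout, and the callables `f8` and `G82` (standing in for \(f_{8}(\mathbf{x})\) and \(G_{8,2}(\mathbf{x})\) of equation (37), which come from the dynamic model) are illustrative assumptions, not values used in this work.

```python
import numpy as np

# Illustrative gains and blending weights (the gains are tuned empirically)
K_p_psi, K_p_beta, K_d_beta, K_p_theta = 500.0, 400.0, 40.0, 5.0
gamma, delta = 0.9, 0.1
m_p, r_p, g = 2.0, 0.10, 9.81   # placeholder pendulum mass, offset and gravity

def control_torques(state, psi_dot_des, beta_des, f8, G82,
                    theta_dot_des=0.0, beta_dot_des=0.0):
    """Compute (T_s, T_p) following eqs. (38)-(41).

    state: dict with keys 'psi_dot', 'beta', 'beta_dot', 'theta', 'theta_dot'.
    f8, G82: callables returning f_8(x) and G_{8,2}(x) of eq. (37) for the
    current state (supplied by the dynamics code).
    """
    # Speed control torque, eq. (38)
    T_s = K_p_psi * (psi_dot_des - state['psi_dot'])

    # Wobble control torque via input-output linearization, eq. (40)
    v_theta = K_p_theta * (theta_dot_des - state['theta_dot'])
    T_p_theta = (-f8(state) + v_theta) / G82(state)

    # Pendulum control torque: gravity feedforward plus PD feedback, eq. (41)
    v_beta = (K_p_beta * (beta_des - state['beta'])
              + K_d_beta * (beta_dot_des - state['beta_dot']))
    T_p_beta = m_p * g * r_p * np.sin(state['beta'] + state['theta']) + v_beta

    # Weighted blend of the two pendulum torques, eq. (39)
    T_p = gamma * T_p_theta + delta * T_p_beta
    return T_s, T_p

# Example call with dummy dynamics terms, only to show the interface
T_s, T_p = control_torques({'psi_dot': 0.8, 'beta': 0.2, 'beta_dot': 0.0,
                            'theta': 0.05, 'theta_dot': 0.1},
                           psi_dot_des=1.0, beta_des=np.deg2rad(15.0),
                           f8=lambda x: 0.0, G82=lambda x: 1.0)
```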
The proposed controller architecture is depicted in the block diagram shown in figure 11. We empirically tune the speed control gain (\(K_{p,\dot{\psi}}\)) and the pendulum control gains (\(K_{p,\beta}\) and \(K_{d,\beta}\)) to very large values to emulate our robot's actuation system, which has a lower-level controller that controls the angular position of the pendulum and the speed of the motor driving the robot forward almost instantaneously. We also empirically tune the wobble control gain (\(K_{p,\dot{\theta}}\)). However, the wobble control gain (\(K_{p,\dot{\theta}}\)) has a much smaller value than the speed control gain (\(K_{p,\dot{\psi}}\)) and the pendulum control gains (\(K_{p,\beta}\) and \(K_{d,\beta}\)). This is because the proportional controller term (\(v_{\theta}\)) in the wobble control torque (\(T_{p,\dot{\theta}}\)) is not responsible for canceling out the nonlinear terms associated with \(\theta\)'s dynamics: in the case of the wobble control torque (\(T_{p,\dot{\theta}}\)), input-output linearization cancels the nonlinear terms, whereas this is not the case for the pendulum control torque (\(T_{p,\beta}\)) or the speed control torque (\(T_{s}\)).
To tune the \(\gamma\) and \(\delta\) values, the \(T_{p}\) controller is implemented for three distinct scenarios, and the system response is analyzed. First, we examine the performance at the extremes by disabling pendulum control torque (\(T_{p,\beta}\)) or wobble control torque (\(T_{p,\dot{\theta}}\)). This is accomplished by setting (\(\gamma\)=0, \(\delta\)=1) and (\(\gamma\)=1, \(\delta\)=0) respectively. Next, we test the performance of \(T_{p}\) controller with various values of (\(\gamma\),\(\delta\)), with each value ranging from 0 to 1, to obtain various linear combinations of pendulum control torque (\(T_{p,\beta}\)) and wobble control torque (\(T_{p,\dot{\theta}}\)).
We analyze the performance of these three types of scenarios to determine which configuration of (\(\gamma\),\(\delta\)) achieves the goal of moving the robot in a circle at a constant speed and a desired radius of curvature while simultaneously eliminating wobbling. Before implementing the proposed controller with the scenarios discussed previously, we move the robot in a wobbly circle at a steady state for the first five seconds, similar to section 3. The value of the radius of curvature for this motion can be determined using equation (29) corresponding to the initial configuration of a pendulum angle of 15\({}^{\circ}\) and a forward speed \(|\dot{\psi}|\) of 1 rad/s. The control response of the three discussed scenarios is summarised below, along with figures comparing the system responses generated by each case:
**Wobbly**: Wobbly behavior with desired radius of curvature: (\(\gamma\) = 0, \(\delta\) = 1).
In this case, only the pendulum control torque (\(T_{p,\beta}\)) is at work to control the \(\beta\). Though the robot moves in a desired radius of curvature, it wobbles along the way. Hence, the lean angle \(\theta\) exhibits oscillations as expected due to wobbling as shown in figure 11(b). Figure 11(c) depicts that the pendulum angle \(\beta\) remains stabilized at 15\({}^{\circ}\) as per the command.
**Wobble-free; diff. radius**: Wobble-free behavior with a different radius of curvature: (\(\gamma\) = 1, \(\delta\) = 0).
Here, we toggle the controller to switch to the wobble control torque (\(T_{p,\dot{\theta}}\)), which is responsible for reducing the wobbling. This configuration successfully stabilizes the lean angle \(\theta\) as depicted in figure 11(b). However, the pendulum angle \(\beta\) starts exhibiting very high-frequency oscillations that can be observed in figure 11(c). Figure 11(a) shows that the robot loses track of the path it was supposed to move and starts moving on a curve with a higher radius of curvature because the mean value of pendulum angle \(\beta\) decreases.
**Wobble-free; sim. radius**: Wobble-free behavior with a similar radius of curvature: (\(\gamma\) = 0.9, \(\delta\) = 0.1)
This case captures the best of both worlds by using both components, the pendulum control torque (\(T_{p,\beta}\)) and the wobble control torque (\(T_{p,\dot{\theta}}\)), each serving a different function. The robot moves along a path with the desired radius of curvature as shown in figure 11(a). The lean angle \(\theta\) and the pendulum angle \(\beta\) stabilize with time as shown in figures 11(b) and 11(c) respectively.
Figure 11: Block diagram for controller design
### Results
We use the linear combination for \(T_{p}\) with (\(\gamma\) = 0.9, \(\delta\) = 0.1) to execute a wobble-free turning maneuver as shown in figure 13. We compare the wobble-free system response to a wobbly turning maneuver in which only the pendulum control torque (\(T_{p,\beta}\)) is used for pendulum angle control 1.
Footnote 1: The video at [https://youtu.be/430yfKLEphw](https://youtu.be/430yfKLEphw) demonstrates the 3D multibody dynamic simulation of the pendulum-actuated spherical robot generated in Simulink using the VRML (Virtual Reality Modeling Language) functionality. We illustrate the controller design results in the video by contrasting the robot’s wobbly and wobble-free turning maneuvers modeled in this study.
Figure 13(b) demonstrates that the wobbling has decreased significantly, as seen from the negligible change in the lean angle \(\theta\) when the linear combination-based controller (\(T_{p}\)) is used. A non-zero stabilization of \(\theta\) during the turning phase causes the robot to tilt toward the turn's center. Then, \(\theta\) stabilizes at 0\({}^{\circ}\), indicating that the robot regains its upright position following the completion of its turning motion.
As depicted in figure 13(c), the linear combination-based controller (\(T_{p}\)) can also stabilize the precession rate. During the period in which the robot executes a turning motion, the figure displays a non-zero stable precession rate. As soon as the robot moves in a straight line, the precession rate becomes zero.
The evolution of the pendulum angle \(\beta\) with time depicted in figure 13(d) is also realizable in practice. It is neither oscillatory nor monotonically increasing or decreasing. Interestingly, \(\beta\) exhibits a smoother response during the robot's wobbly motion. However, when the robot executes a wobble-free motion due to the linear combination-based controller (\(T_{p}\)), the resulting \(\beta\) response exhibits oscillations during the transient phase, which eventually die out as the robot reaches a steady state.
Figure 14 illustrates a comparison between wobbly and wobble-free turning maneuvers with the proposed controller for varying pendulum angle \(\beta\) values at forward speed \(\dot{\psi}\) of 1 rad/s. The solid line style represents motion without wobble, whereas the dash-dotted line style represents motion with wobble. During the circular arc section of the path, we can observe that the radius of curvature for these two types of response varies slightly. After the turning maneuver, both trajectories converge to move in the same direction.
The dotted line style represents the circle that the wobble-free turning maneuver is a part of while tracing the circular arc. The dashed line represents the tangent to this circle at the point where \(\beta_{des}\) is toggled to \(0^{\circ}\). Due to the settling time associated with this toggle command, the tangent at the toggle point is not parallel to the direction of the final straight line for both the wobbly and wobble-free trajectories. This deflection is shown as a function of pendulum angle \(\beta\) in figure 15(a). Observably, the deflection increases as the pendulum angle increases.
At a given speed of operation, the radius of curvature during a turning maneuver can be approximated using equation (29) for various values of pendulum angle \(\beta\). The error in the radius of curvature is depicted in figure 15(b) as the percentage difference between the observed values and the desired values predicted by equation (29). For both wobbly and wobble-free motion, the percentage error in the radius of curvature relative to the desired value decreases as the pendulum angle increases. In conclusion, when the inclination of the pendulum is altered, the heading angle deflection and the error in the radius of curvature exhibit opposite behaviors. Based on the heading deflection tolerance and the acceptable error in the radius of curvature, figures 15(a) and 15(b) can be used to choose a pendulum angle that corresponds to the desired operational speed.
## 5 Conclusion and future work
This article proposes a dynamic model of a pendulum-actuated spherical robot considering the coupling between forward and steering motion. Our model qualitatively captures the system's lateral oscillations, which we refer to as wobbling. Modeling the wobble allows for the investigation of controller design for stabilizing the yoke, which serves as a platform for mounting sensors on a spherical robot. The yoke must remain stable during motion because the sensors mounted on the yoke would otherwise produce inaccurate results. In addition, we present mathematical formulations for the robot's radius of curvature, precession rate, wobble amplitude, and wobble frequency when moving in a wobbly circular motion. Finally, we design a controller for the non-holonomic spherical robot using only two input torques. The controller design aims at controlling the forward speed \(\dot{\psi}\) and the pendulum angle \(\beta\), and at limiting wobble (oscillations in the lean angle \(\theta\)). The control set-points are determined by calculating the desired robot speed and pendulum angle based on the required radius of curvature of the turning maneuver. The results of this work can be applied to the robot's semi-autonomous teleoperation. In this mode, a command is issued for a desired pendulum angle to steer the robot in a specific direction while maintaining a desired forward speed. The wobble controller can assist the robot in moving without wobbling along the desired curve or line. In the future, the results of the controller design can be experimentally validated using actual hardware and used to develop a navigation algorithm.
Figure 13: Comparison between wobbly and wobble-free motion when \(\beta_{des}\) = 15\({}^{\circ}\) during turning motion
Figure 14: Comparison between the paths taken by the robot during wobbly and wobble-free turning motion for different values of the pendulum angle during the circular arc
Figure 15: Errors as a function of \(\beta\)
|
2310.10510 | Normalized solutions for p-Laplacian equations with potential | In this paper, we consider the existence of normalized solutions for the following $p$-Laplacian equation \begin{equation*} \left\{\begin{array}{ll} -\Delta_{p}u-V(x)\lvert u\rvert^{p-2}u+\lambda\lvert u\rvert^{p-2}u=\lvert u\rvert^{q-2}u&\mbox{in}\ \mathbb{R}^N,\\ \int_{\mathbb{R}^N}\lvert u\rvert^pdx=a^p, \end{array}\right. \end{equation*} where $N\geqslant 1$, $p>1$, $p+\frac{p^2}{N}<q<p^*=\frac{Np}{N-p}$ (if $N\leqslant p$, then $p^*=+\infty$), $a>0$ and $\lambda\in\mathbb{R}$ is a Lagrange multiplier which appears due to the mass constraint. Firstly, under some smallness assumptions on $V$, but no assumptions on $a$, we obtain a mountain pass solution with positive energy, while no solution with negative energy. Secondly, assuming that the mass $a$ has an upper bound depending on $V$, we obtain two solutions: one is a local minimizer with negative energy, the other is a mountain pass solution with positive energy. | Shengbing Deng, Qiaoran Wu | 2023-10-16T15:31:28Z | http://arxiv.org/abs/2310.10510v1 |
# Normalized solutions for \(p\)-Laplacian equations with potential
###### Abstract
In this paper, we consider the existence of normalized solutions for the following \(p\)-Laplacian equation
\[\left\{\begin{array}{ll}-\Delta_{p}u-V(x)|u|^{p-2}u+\lambda|u|^{p-2}u=|u|^{q- 2}u&\mbox{in }\mathbb{R}^{N},\\ \int_{\mathbb{R}^{N}}|u|^{p}dx=a^{p},\end{array}\right.\]
where \(N\geqslant 1\), \(p>1\), \(p+\frac{p^{2}}{N}<q<p^{*}=\frac{Np}{N-p}\) (if \(N\leqslant p\), then \(p^{*}=+\infty\)), \(a>0\) and \(\lambda\in\mathbb{R}\) is a Lagrange multiplier which appears due to the mass constraint. Firstly, under some smallness assumptions on \(V\), but without any assumption on \(a\), we obtain a mountain pass solution with positive energy, while there is no solution with negative energy. Secondly, assuming that the mass \(a\) has an upper bound depending on \(V\), we obtain two solutions: one is a local minimizer with negative energy, and the other is a mountain pass solution with positive energy.
## 1 Introduction
In this paper, we study the existence of solutions for the following \(p\)-Laplacian equation
\[-\Delta_{p}u-V(x)|u|^{p-2}u+\lambda|u|^{p-2}u=|u|^{q-2}u\quad\mbox{in }\mathbb{R}^{N}, \tag{1.1}\]
where \(\Delta_{p}u=\mbox{div}(|\nabla u|^{p-2}\nabla u)\) is the \(p\)-Laplacian operator, \(N\geqslant 1\), \(p>1\), \(p+\frac{p^{2}}{N}<q<p^{*}=\frac{Np}{N-p}\)(if \(N\leqslant p\), then \(p^{*}=+\infty\)) and \(\lambda\in\mathbb{R}\).
If \(p=2\), (1.1) is a special case of the following equation
\[-\Delta u-V(x)u+\lambda u=f(u)\quad\mbox{in }\mathbb{R}^{N}, \tag{1.2}\]
which can be derived from the following Schrodinger equation
\[i\psi_{t}+\Delta\psi+V(x)\psi+g(|\psi|^{2})\psi=0,\quad(t,x)\in\mathbb{R}^{+} \times\mathbb{R}^{N},\]
when we look for standing waves of the form \(\psi(t,x)=e^{-i\lambda t}u(x)\), where \(\lambda\in\mathbb{R}\), \(u\) is a real function and \(f(u)=g(u^{2})u\). A lot of effort has been devoted to the study of (1.2). To obtain existence results, a possible choice is to fix \(\lambda\in\mathbb{R}\) and find critical points of the functional
\[I(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}(|\nabla u|^{2}-V(x)u^{2}+\lambda u^{2}) dx-\int_{\mathbb{R}^{N}}F(u)dx,\]
where \(F(u)=\int_{0}^{u}f(s)ds\). Here we refer the readers to [7, 8, 12] and references therein. Alternatively, an interesting method is to consider a prescribed \(L^{2}\) norm of \(u\), that is, to let
\[\int_{\mathbb{R}^{N}}u^{2}dx=a^{2}, \tag{1.3}\]
where \(a>0\) is a fixed constant, and find critical points of the functional
\[J(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}(|\nabla u|^{2}-V(x)u^{2})dx-\int_{ \mathbb{R}^{N}}F(u)dx,\]
which satisfy (1.3). In this case, \(\lambda\in\mathbb{R}\) will appear as a Lagrange multiplier, and solutions of (1.2) satisfying (1.3) are usually called normalized solutions.
When we search for normalized solutions of (1.2), a new critical exponent \(2+\frac{4}{N}\), which is called the \(L^{2}\)-critical exponent, appears. This critical exponent can be derived from the Gagliardo-Nirenberg inequality (see [1, 28]): for every \(p<q<p^{*}\), there exists an optimal constant \(C_{N,p,q}>0\) depending on \(N\), \(p\) and \(q\) such that
\[\|u\|_{q}\leqslant C_{N,p,q}\|\nabla u\|_{p}^{\frac{N(q-p)}{pq}}\|u\|_{p}^{1- \frac{N(q-p)}{pq}}\quad\forall u\in W^{1,p}(\mathbb{R}^{N}).\]
Assume that \(V\equiv 0\) and \(f(u)=|u|^{q-2}u\). In the \(L^{2}\)-subcritical case (\(2<q<2+\frac{4}{N}\)), by the Gagliardo-Nirenberg inequality, \(J(u)\) is bounded from below if \(u\) satisfies (1.3), and we can find a global minimizer of \(J(u)\); see [2, 17, 18] and references therein.
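To see where this threshold comes from, note that for \(u\) satisfying (1.3), the Gagliardo-Nirenberg inequality with \(p=2\) gives the elementary estimate (recorded here only for the reader's convenience)
\[J(u)=\frac{1}{2}\|\nabla u\|_{2}^{2}-\frac{1}{q}\|u\|_{q}^{q}\geqslant\frac{1}{2}\|\nabla u\|_{2}^{2}-\frac{1}{q}C_{N,2,q}^{q}a^{q-\frac{N(q-2)}{2}}\|\nabla u\|_{2}^{\frac{N(q-2)}{2}},\]
and the exponent \(\frac{N(q-2)}{2}\) is smaller than \(2\) exactly when \(q<2+\frac{4}{N}\), which is why \(J\) is bounded from below on the constraint in the \(L^{2}\)-subcritical regime.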
However, in the \(L^{2}\)-supercritical case (\(q>2+\frac{4}{N}\)), the functional \(J(u)\) is unbounded from below if \(u\) satisfies (1.3) and it seems impossible to search for a global minimizer of \(J(u)\). Jeanjean [13] first studied the \(L^{2}\)-supercritical case, with general nonlinearities and \(V\equiv 0\), he introduced an auxiliary functional
\[\tilde{J}(s,u):=J(s\star u)=\frac{1}{2}e^{2s}\int_{\mathbb{R}^{N}}|\nabla u|^ {2}dx-e^{-Ns}\int_{\mathbb{R}^{N}}F(e^{\frac{Ns}{2}}u)dx\]
to obtain the boundedness of a Palais-Smale(PS) sequence and got the existence result. This method has been widely used to study normalized solutions of (1.2). For the non-potential case, that is \(V\equiv 0\), we refer the readers to [14, 16, 22, 23, 27] and references therein.
If \(V\not\equiv 0\), it is more complicated to study normalized solutions of (1.2). Firstly, the appearance of \(V\) will strongly affect the geometric structure of \(J\). In [20], Molle et al. considered the normalized solutions of (1.2) with \(V\geqslant 0\) and established different existence results under different assumptions on \(V\) and \(a\). Secondly, it is hard to obtain the compactness of a minimizing sequence or PS sequence. If \(V\) is a radial function, we can work in the radial space \(H^{1}_{r}(\mathbb{R}^{N})\) to overcome this difficulty, but this is no longer possible when \(V\) is not a radial function. In [3, 20], the authors used a splitting lemma to conclude the weak convergence and obtained the compactness of the PS sequence. In [10, 15], the authors considered a minimizing problem on the Pohozaev manifold and proved that the infimum on the Pohozaev manifold is strictly decreasing in \(a\) to obtain the compactness of a minimizing sequence. We refer the readers to [4, 21, 24, 29] and references therein for more results about normalized solutions of Schrodinger equations with \(V\not\equiv 0\).
If \(p\neq 2\), there are only a few papers on the normalized solutions of \(p\)-Laplacian equations. For the case \(V\equiv 0\), Wang et al. [26] considered the equation
\[-\Delta_{p}u+|u|^{p-2}u=\mu u+|u|^{s-1}u\quad\text{in }\mathbb{R}^{N}\]
with \(L^{2}\) constraint
\[\int_{\mathbb{R}^{N}}u^{2}dx=a^{2},\]
where \(1<p<N\), \(s\in(\frac{N+2}{N}p,p^{*})\). By the Gagliardo-Nirenberg inequality, the \(L^{2}\)-critical exponent should be \(\frac{N+2}{N}p\). Zhang and Zhang [30] considered the \(L^{p}\) constraint, that is
\[\left\{\begin{array}{ll}-\Delta_{p}u=\lambda|u|^{p-2}u+\mu|u|^{q-2}u+g(u)& \mbox{in }\mathbb{R}^{N},\\ \int_{\mathbb{R}^{N}}|u|^{p}dx=a^{p},\end{array}\right.\]
where \(N>p\), \(1<p<q\leqslant p+\frac{p^{2}}{N}\) and \(g:\mathbb{R}\to\mathbb{R}\) is an \(L^{p}\)-supercritical but Sobolev-subcritical nonlinearity. By the Gagliardo-Nirenberg inequality, the \(L^{p}\)-critical exponent is \(p+\frac{p^{2}}{N}\). For the case \(V\not\equiv 0\), Wang and Sun [25] considered both the \(L^{2}\) constraint and the \(L^{p}\) constraint for the following problem
\[\left\{\begin{array}{ll}-\Delta_{p}u+V(x)|u|^{p-2}u=\lambda|u|^{r-2}u+|u|^{ q-2}u&\mbox{in }\mathbb{R}^{N},\\ \int_{\mathbb{R}^{N}}|u|^{r}dx=c,\end{array}\right.\]
where \(1<p<N\), \(\lambda\in\mathbb{R}\), \(r=p\) or \(2\), \(p<q<p^{*}\) and \(V(x)=|x|^{k}\) with \(k>0\). Since \(V\) is a radial function, they can work in the space \(W^{1,p}_{r}(\mathbb{R}^{N})\).
Motivated by the results mentioned above, we are interested in the existence of normalized solutions for (1.1) with an \(L^{p}\) constraint. Define \(W(x):=V(x)|x|\) and
\[S_{a}:=\Big{\{}u\in W^{1,p}(\mathbb{R}^{N}):\int_{\mathbb{R}^{N}}|u|^{p}dx=a^{ p}\Big{\}}, \tag{1.4}\]
where \(a>0\) is a constant. Throughout the rest of the paper we assume that
\[V\geqslant 0\quad\mbox{but}\quad V\not\equiv 0. \tag{1.5}\]
Our results can be stated as follows.
**Theorem 1.1**.: _Let \(N\geqslant 2\), \(1<p<N\) and (1.5) holds. Then there exists a positive constant \(L\) depending on \(N\) and \(p\), such that if_
\[\max\{\|V\|_{N/p},\|W\|_{N/(p-1)}\}<L, \tag{1.6}\]
_then (1.1) has a mountain pass solution on \(S_{a}\) with positive energy for every \(a>0\), while it has no solution with negative energy._
**Theorem 1.2**.: _Let \(N\geqslant 1\), \(p>1\), \(r\in(\max\{1,\frac{N}{p}\},+\infty]\), \(s\in(\max\{\frac{p}{p-1},\frac{N}{p-1}\},+\infty]\) and (1.5) holds. Moreover, we assume that_
\[V\in L^{r}(\mathbb{R}^{N})\quad\mbox{and}\quad\lim_{|x|\to+\infty}V(x)=0\mbox { if }r=+\infty,\]
_and \(W\in L^{s}(\mathbb{R}^{N})\)._
\((i)\) _There exist positive constants \(\sigma(N,p,q,r)\) and \(L(N,p,q,r)\) such that if_
\[a^{\sigma}\|V\|_{r}<L \tag{1.7}\]
_and there exists \(\varphi\in S_{a}\):_
\[\int_{\mathbb{R}^{N}}(|\nabla\varphi|^{p}-V(x)|\varphi|^{p})dx\leqslant 0, \tag{1.8}\]
_then (1.1) has a solution on \(S_{a}\) which is a local minimizer with negative energy._
\((ii)\) There exist positive constants \(\sigma_{i}(N,p,q,r)\), \(\bar{\sigma}_{i}(N,p,q,s)(i=1,2)\) and \(L(N,p,q,r,s)\) such that if_
\[\max\{a^{\sigma_{i}}\|V\|_{r},a^{\bar{\sigma}_{i}}\|W\|_{s}\}<L,\quad i=1,2, \tag{1.9}\]
_then (1.1) has a mountain pass solution on \(S_{a}\) with positive energy._
**Remark 1.1**.: _We point out that if there exist \(x_{0}\in\mathbb{R}^{N}\) and \(\delta>0\) such that_
\[\eta:=\inf_{x\in B_{\delta}(x_{0})}V(x)>0\quad\text{with }\eta\delta^{p}>N \Big{(}\frac{N-p}{p-1}\Big{)}^{p-1}\text{ if }N>p, \tag{1.10}\]
_then (1.8) holds, where \(B_{\delta}(x_{0})\) denotes the open ball centered at \(x_{0}\) with radius \(\delta\). This fact will be proved in Lemma 4.4._
The paper is organized as follows. In Section 2, we collect some preliminary results. In Section 3, we give the proof of Theorem 1.1. Section 4 is devoted to proving Theorem 1.2.
## 2 Preliminaries
In this section, we collect some useful results which will be used throughout the rest of the paper. Define
\[T(s):=\left\{\begin{array}{ll}s,&\text{if }|s|\leqslant 1,\\ \frac{s}{|s|},&\text{if }|s|>1.\end{array}\right.\]
**Lemma 2.1**.: _Let \(N\geqslant 1\), \(p>1\) and \(\{u_{n}\}\subset D^{1,p}(\mathbb{R}^{N})\) such that \(u_{n}\rightharpoonup u\) in \(D^{1,p}(\mathbb{R}^{N})\), where \(D^{1,p}(\mathbb{R}^{N})\) denotes the completion of \(C_{c}^{\infty}(\mathbb{R}^{N})\) with respect to the norm \(\|u\|_{D^{1,p}}:=\|\nabla u\|_{p}\). Assume that for every \(\varphi\in C_{c}^{\infty}(\mathbb{R}^{N})\), there is_
\[\lim_{n\to\infty}\int_{\mathbb{R}^{N}}\varphi(|\nabla u_{n}|^{p-2}\nabla u_{n} -|\nabla u|^{p-2}\nabla u)\cdot\nabla T(u_{n}-u)dx=0. \tag{2.1}\]
_Then, up to a subsequence, \(\nabla u_{n}\to\nabla u\) a.e. in \(\mathbb{R}^{N}\)._
Proof.: Let \(k\in\mathbb{N}_{+}\) and let \(\varphi_{k}\in C_{c}^{\infty}(\mathbb{R}^{N})\) satisfy
\[0\leqslant\varphi_{k}\leqslant 1,\quad\varphi_{k}=1\text{ in }B_{k}\quad\text{and}\quad\varphi_{k}=0\text{ in }B_{k+1}^{c}.\]
Since
\[(|\nabla u_{n}|^{p-2}\nabla u_{n}-|\nabla u|^{p-2}\nabla u)\cdot\nabla T(u_{n} -u)\geqslant 0,\]
and \(0\leqslant\varphi_{k}\leqslant 1\) with \(\varphi_{k}=1\) on \(B_{k}\), applying (2.1) with \(\varphi=\varphi_{k}\) yields
\[\lim_{n\to\infty}\int_{B_{k}}(|\nabla u_{n}|^{p-2}\nabla u_{n}-|\nabla u|^{p-2 }\nabla u)\cdot\nabla T(u_{n}-u)dx=0. \tag{2.2}\]
Therefore, by [9, Theorem 1.1], up to a subsequence, we have \(\nabla u_{n}\to\nabla u\) a.e. in \(B_{k}\). Now, using the Cantor diagonal argument, we complete the proof.
Let
\[E_{\infty}(u)=\frac{1}{p}\|\nabla u\|_{p}^{p}-\frac{1}{q}\|u\|_{q}^{q},\]
\[E(u)=\frac{1}{p}\|\nabla u\|_{p}^{p}-\frac{1}{q}\|u\|_{q}^{q}-\frac{1}{p}\int _{\mathbb{R}^{N}}V(x)|u|^{p}dx,\]
\[E_{\infty,\lambda}=\frac{1}{p}\|\nabla u\|_{p}^{p}+\frac{\lambda}{p}\|u\|_{p}^{p}- \frac{1}{q}\|u\|_{q}^{q},\]
and
\[E_{\lambda}(u)=\frac{1}{p}\|\nabla u\|_{p}^{p}+\frac{\lambda}{p}\|u\|_{p}^{p}- \frac{1}{q}\|u\|_{q}^{q}-\frac{1}{p}\int_{\mathbb{R}^{N}}V(x)|u|^{p}dx.\]
**Lemma 2.2**.: _Let \(N\geqslant 1\), \(p>1\), and assume that_
\((i)\) _If \(N>p\), then \(V\in L^{N/p}(B_{1})\) and \(V\in L^{\tilde{r}}(\mathbb{R}^{N}\backslash B_{1})\) for some \(\tilde{r}\in[N/p,+\infty]\),_
\((ii)\) _If \(N\leqslant p\), then \(V\in L^{r}(B_{1})\) and \(V\in L^{\tilde{r}}(\mathbb{R}^{N}\backslash B_{1})\) for some \(r,\tilde{r}\in(1,+\infty]\)._
_If \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\) in \(W^{1,p}(\mathbb{R}^{N})\) and \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\). Then, \(\nabla u_{n}\to\nabla u\) a.e. in \(\mathbb{R}^{N}\)._
Proof.: By Lemma 2.1, we just need to prove (2.1). Since \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\), we can assume that \(u_{n}\to u\) a.e. in \(\mathbb{R}^{N}\). Therefore, Egorov's theorem implies that for every \(\delta>0\), there exists \(F_{\delta}\subset\mathrm{supp}\varphi\) such that \(u_{n}\to u\) uniformly in \(F_{\delta}\) and \(m(\mathrm{supp}\varphi\backslash F_{\delta})<\delta\). Hence, \(|u_{n}(x)-u(x)|\leqslant 1\) for all \(x\in F_{\delta}\) as long as \(n\) is sufficiently large.
Now, we have
\[\limsup_{n\to\infty}\Big{|}\int_{\mathbb{R}^{N}}\varphi|\nabla u| ^{p-2}\nabla u\cdot\nabla T(u_{n}-u)dx\Big{|}\] \[\leqslant \limsup_{n\to\infty}\Big{|}\int_{F_{\delta}}\varphi|\nabla u|^{p- 2}\nabla u\cdot\nabla T(u_{n}-u)dx\Big{|}+\limsup_{n\to\infty}\Big{|}\int_{F_ {\delta}^{c}}\varphi|\nabla u|^{p-2}\nabla u\cdot\nabla T(u_{n}-u)dx\Big{|}\] \[= \limsup_{n\to\infty}\Big{|}\int_{F_{\delta}^{c}}\varphi|\nabla u| ^{p-2}\nabla u\cdot\nabla T(u_{n}-u)dx\Big{|},\]
since \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\). For every \(\varepsilon>0\), by the definition of \(T\),
\[\Big{|}\int_{F_{\delta}^{c}}\varphi|\nabla u|^{p-2}\nabla u\cdot\nabla T(u_{n }-u)dx\Big{|}\leqslant\int_{F_{\delta}^{c}}\!\!\!|\varphi||\nabla u|^{p-1}dx<\varepsilon,\]
as long as \(\delta\) sufficiently small, which implies
\[\lim_{n\to\infty}\int_{\mathbb{R}^{N}}\varphi|\nabla u|^{p-2}\nabla u\cdot \nabla T(u_{n}-u)dx=0. \tag{2.3}\]
Next, we prove
\[\lim_{n\to\infty}\int_{\mathbb{R}^{N}}\varphi|\nabla u_{n}|^{p-2}\nabla u_{n }\cdot\nabla T(u_{n}-u)dx=0,\]
which together with (2.3) implies (2.1) holds. Since \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\) in \(W^{1,p}(\mathbb{R}^{N})\), we have
\[\int_{\mathbb{R}^{N}}|\nabla u_{n}|^{p-2}\nabla u_{n}\cdot\nabla \psi dx= -\lambda\int_{\mathbb{R}^{N}}\!\!|u_{n}|^{p-2}u_{n}\psi dx+\int_{ \mathbb{R}^{N}}V(x)|u_{n}|^{p-2}u_{n}\psi dx\] \[+\int_{\mathbb{R}^{N}}\!\!|u_{n}|^{q-2}u_{n}\psi dx+o_{n}(1)\| \psi\|_{W^{1,p}},\]
for every \(\psi\in W^{1,p}(\mathbb{R}^{N})\). Let \(\psi=\varphi T(u_{n}-u)\), then
\[\limsup_{n\to\infty}\Big{|}\int_{\mathbb{R}^{N}}\varphi|\nabla u_{n}|^{p-2}\nabla u_{n}\cdot\nabla T(u_{n}-u)dx\Big{|}\leqslant\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}|\nabla u_{n}|^{p-1}|T(u_{n}-u)\nabla\varphi|dx\\ +|\lambda|\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}|u_{n}|^{p-1}|\varphi T(u_{n}-u)|dx+\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}|V(x)||u_{n}|^{p-1}|\varphi T(u_{n}-u)|dx\\ +\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}|u_{n}|^{q-1}|\varphi T(u_{n}-u)|dx. \tag{2.4}\]
Arguing as in the first part of the proof, splitting each integral over \(F_{\delta}\) and its complement and using the boundedness of \(\{u_{n}\}\) in \(W^{1,p}(\mathbb{R}^{N})\), we obtain
\[\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}|V(x)||u_{n}|^{p-1}|\varphi T(u_{n}-u)|dx\leqslant C\varepsilon.\]
Similarly, we have
\[\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}|\nabla u_{n}|^{p-1}|T(u_{n}-u)\nabla \varphi|dx\leqslant C\varepsilon,\]
\[\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}|u_{n}|^{p-1}|\varphi T(u_{n}-u)|dx \leqslant C\varepsilon,\]
and
\[\limsup_{n\to\infty}\int_{\mathbb{R}^{N}}|u_{n}|^{q-1}|\varphi T(u_{n}-u)|dx \leqslant C\varepsilon.\]
Therefore, from (2.4), we obtain
\[\limsup_{n\to\infty}\Big{|}\int_{\mathbb{R}^{N}}\varphi|\nabla u_{n}|^{p-2} \nabla u_{n}\cdot\nabla T(u_{n}-u)dx\Big{|}\leqslant C\varepsilon,\]
which implies
\[\lim_{n\to\infty}\int_{\mathbb{R}^{N}}\varphi|\nabla u_{n}|^{p-2}\nabla u_{n} \cdot\nabla T(u_{n}-u)dx=0.\]
**Remark 2.1**.: _If \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\) in \(W^{1,p}(\mathbb{R}^{N})\) and \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\). Then, by Lemma 2.2 and weak convergence, \(u\) is a solution of \((\ref{eq:1})\)._
**Lemma 2.3**.: _Let \(N\geqslant 1\), \(p>1\), and let \(V\) satisfy the assumptions of Lemma 2.2. Assume \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\) in \(W^{1,p}(\mathbb{R}^{N})\) and \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\). Let \(v_{n}=u_{n}-u\). Then, \(\{v_{n}\}\) is a PS sequence for \(E_{\infty,\lambda}\)._
Proof.: Since \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\), we have \(v_{n}\rightharpoonup 0\) in \(W^{1,p}(\mathbb{R}^{N})\), \(v_{n}\to 0\) in \(L^{p}_{loc}(\mathbb{R}^{N})\), \(L^{q}_{loc}(\mathbb{R}^{N})\), and a.e. in \(\mathbb{R}^{N}\). Set
\[\int_{\mathbb{R}^{N}}V(x)|v_{n}|^{p}dx=\int_{B_{1}}V(x)|v_{n}|^{p}dx+\int_{B_{ 1}^{c}}V(x)|v_{n}|^{p}dx=A_{n}+B_{n}. \tag{2.5}\]
Firstly, we assume \(N>p\) and \(\tilde{r}<+\infty\). Let \(\tilde{r}^{\prime}\) be the conjugate exponent of \(\tilde{r}\). Since \(\{|v_{n}|^{p}\}\) is bounded in \(L^{N/(N-p)}(B_{1})\) and \(L^{\tilde{r}^{\prime}}(B_{1}^{c})\), we have \(|v_{n}|^{p}\rightharpoonup 0\) in \(L^{N/(N-p)}(B_{1})\) and \(L^{\tilde{r}^{\prime}}(B_{1}^{c})\) and hence \(A_{n},B_{n}\to 0\) as \(n\to\infty\). In the case \(\tilde{r}=+\infty\), it is not difficult to prove that \(A_{n}\to 0\) as \(n\to\infty\). We know \(v_{n}\to 0\) in \(L^{p}_{loc}(\mathbb{R}^{N})\), thus, for every \(R>1\), there holds
\[\limsup_{n\to\infty}\lvert B_{n}\rvert=\limsup_{n\to\infty}\Big{|}\int_{B_{R} ^{c}}V(x)|v_{n}|^{p}dx\Big{|}\leqslant C\sup_{B_{R}^{c}}\lvert V\rvert,\]
which implies \(B_{n}\to 0\) as \(n\to\infty\). Similar arguments for \(N\leqslant p\) can also prove that \(A_{n},B_{n}\to 0\) as \(n\to\infty\). To sum up, in any case we have
\[\int_{\mathbb{R}^{N}}V(x)|v_{n}|^{p}dx\to 0\quad\text{as $n\to\infty$}, \tag{2.6}\]
which implies
\[\int_{\mathbb{R}^{N}}V(x)|u_{n}|^{p}dx\to\int_{\mathbb{R}^{N}}V(x)|u|^{p}dx \quad\text{as $n\to\infty$}. \tag{2.7}\]
Since \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\), there exists \(c\in\mathbb{R}\) such that
\[E_{\lambda}(u_{n})\to c\quad\text{and}\quad\lVert E_{\lambda}^{\prime}(u_{n}) \rVert\to 0\text{ in $W^{-1,p}(\mathbb{R}^{N})$}\quad\text{as $n\to\infty$}.\]
By Brezis-Lieb lemma and Lemma 2.2, we have
\[E_{\lambda}(u_{n})=E_{\lambda}(u)+E_{\infty,\lambda}(v_{n})+o_{n}(1),\]
which implies
\[E_{\infty,\lambda}(v_{n})\to c-E_{\lambda}(u)\quad\text{as $n\to\infty$}. \tag{2.8}\]
Finally, we prove that
\[\lVert E_{\infty,\lambda}^{\prime}(v_{n})\rVert_{W^{-1,p}}\to 0\quad\text{as $n \to\infty$},\]
which together with (2.8) implies \(\{v_{n}\}\) is a PS sequence for \(E_{\infty,\lambda}\). We just need to prove that
\[E_{\infty,\lambda}^{\prime}(v_{n})\psi=o_{n}(1)\lVert\psi\rVert_{W^{1,p}} \quad\forall\psi\in W^{1,p}(\mathbb{R}^{N}),\]
that is
\[\int_{\mathbb{R}^{N}}(\lvert\nabla v_{n}\rvert^{p-2}\nabla v_{n}\cdot\nabla \psi+\lambda\lvert v_{n}\rvert^{p-2}v_{n}\psi-\lvert v_{n}\rvert^{q-2}v_{n} \psi)dx=o_{n}(1)\lVert\psi\rVert_{W^{1,p}}. \tag{2.9}\]
By the Holder inequality,
\[\Big{|}\int_{\mathbb{R}^{N}}(|\nabla u_{n}|^{p-2}\nabla u_{n}-|\nabla v_{n}|^{p-2}\nabla v_{n}-|\nabla u|^{p-2}\nabla u)\cdot\nabla\psi dx\Big{|}\] \[\leqslant \Big{(}\int_{\mathbb{R}^{N}}\big{|}|\nabla u_{n}|^{p-2}\nabla u_{n}-|\nabla v_{n}|^{p-2}\nabla v_{n}-|\nabla u|^{p-2}\nabla u\big{|}^{\frac{p}{p-1}}dx\Big{)}^{\frac{p-1}{p}}\|\nabla\psi\|_{p}\] \[\leqslant \Big{(}\int_{\mathbb{R}^{N}}\big{|}|\nabla u_{n}|^{p-2}\nabla u_{n}-|\nabla v_{n}|^{p-2}\nabla v_{n}-|\nabla u|^{p-2}\nabla u\big{|}^{\frac{p}{p-1}}dx\Big{)}^{\frac{p-1}{p}}\|\psi\|_{W^{1,p}}.\]
From [19, Lemma 3.2], we know
\[\int_{\mathbb{R}^{N}}\big{|}|\nabla u_{n}|^{p-2}\nabla u_{n}-|\nabla v_{n}|^{p-2}\nabla v_{n}-|\nabla u|^{p-2}\nabla u\big{|}^{\frac{p}{p-1}}dx=o_{n}(1),\]
which implies
\[\int_{\mathbb{R}^{N}}|\nabla v_{n}|^{p-2}\nabla v_{n}\cdot\nabla \psi dx\] \[= \int_{\mathbb{R}^{N}}|\nabla u_{n}|^{p-2}\nabla u_{n}\cdot\nabla \psi dx-\int_{\mathbb{R}^{N}}|\nabla u|^{p-2}\nabla u\cdot\nabla\psi dx+o_{n}( 1)\|\psi\|_{W^{1,p}}. \tag{2.10}\]
Similarly, we have
\[\int_{\mathbb{R}^{N}}|v_{n}|^{p-2}v_{n}\psi dx=\int_{\mathbb{R}^{N}}|u_{n}|^{ p-2}u_{n}\psi dx-\int_{\mathbb{R}^{N}}|u|^{p-2}u\psi dx+o_{n}(1)\|\psi\|_{W^{1,p}}. \tag{2.11}\]
and
\[\int_{\mathbb{R}^{N}}|v_{n}|^{q-2}v_{n}\psi dx=\int_{\mathbb{R}^{N}}|u_{n}|^{ q-2}u_{n}\psi dx-\int_{\mathbb{R}^{N}}|u|^{q-2}u\psi dx+o_{n}(1)\|\psi\|_{W^{1,p}}. \tag{2.12}\]
Now, using (2.7), (2.10), (2.11), (2.12), Remark 2.1 and the fact that \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\), we obtain (2.9).
Now, we state a splitting lemma for \(E_{\lambda}\). The idea of its proof comes from [5, Lemma 3.1] and [20, Lemma 2.3].
**Lemma 2.4**.: _Let \(N\geqslant 1\), \(p>1\), and assume that_
\((i)\)_\(N>p\): \(V\in L^{N/p}(B_{1})\), and \(V\in L^{\tilde{r}}(\mathbb{R}^{N}\backslash B_{1})\) for some \(\tilde{r}\in[N/p,+\infty]\),_
\((ii)\)_\(N\leqslant p\): \(V\in L^{r}(B_{1})\) and \(V\in L^{\tilde{r}}(\mathbb{R}^{N}\backslash B_{1})\) for some \(r,\tilde{r}\in(1,+\infty]\),_
\((iii)\) _in case \(\tilde{r}=+\infty\), \(V\) further satisfies \(V(x)\to 0\) as \(|x|\to+\infty\),_
\((iv)\)_\(\lambda>0\). If \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\) in \(W^{1,p}(\mathbb{R}^{N})\) and \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\) but not strongly, then there exist an integer \(k\geqslant 1\), \(k\) nontrivial solutions \(w^{1},...,w^{k}\in W^{1,p}(\mathbb{R}^{N})\) to the equation_
\[-\Delta_{p}w+\lambda|w|^{p-2}w=|w|^{q-2}w, \tag{2.13}\]
_and \(k\) sequence \(\{y_{n}^{j}\}\subset\mathbb{R}^{N}\), \(1\leqslant j\leqslant k\), such that \(|y_{n}^{j}|\to+\infty\) as \(n\to\infty\), \(|y_{n}^{j_{1}}-y_{n}^{j_{2}}|\to+\infty\) for \(j_{1}\neq j_{2}\) as \(n\to\infty\), and_
\[u_{n}=u+\sum_{j=1}^{k}w^{j}(\cdot-y_{n}^{j})+o_{n}(1)\quad\text{in }W^{1,p}( \mathbb{R}^{N}). \tag{2.14}\]
_Moreover, we have_
\[\|u_{n}\|_{p}^{p}=\|u\|_{p}^{p}+\sum_{j=1}^{k}\|w^{j}\|_{p}^{p}+o_{n}(1), \tag{2.15}\]
\[E_{\lambda}(u_{n})=E_{\lambda}(u)+\sum_{j=1}^{k}E_{\infty,\lambda}(w^{j})+o_{n}(1). \tag{2.16}\]
Proof.: Let \(u_{1,n}=u_{n}-u\). Then \(u_{1,n}\rightharpoonup 0\) in \(W^{1,p}(\mathbb{R}^{N})\), \(u_{n}\to u\) in \(L^{p}_{loc}(\mathbb{R}^{N})\), \(L^{q}_{loc}(\mathbb{R}^{N})\) and a.e. in \(\mathbb{R}^{N}\). Similar to the proof of Lemma 2.3, we can prove that
\[\int_{\mathbb{R}^{N}}V(x)|u_{1,n}|^{p}dx\to 0,\quad\text{as $n\to\infty$}. \tag{2.17}\]
Since \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\) but not strongly, there is
\[\liminf_{n\to\infty}\lVert u_{1,n}\rVert^{p}>0.\]
By Lemma 2.3, we know \(\{u_{1,n}\}\) is a PS sequence for \(E_{\infty,\lambda}\), thus
\[\lVert\nabla u_{1,n}\rVert_{p}^{p}+\lambda\lVert u_{1,n}\rVert_{p}^{p}=\lVert u _{1,n}\rVert_{q}^{q}+o_{n}(1),\]
which implies
\[\liminf_{n\to\infty}\lVert u_{1,n}\rVert_{q}^{q}>0,\]
since \(\lambda>0\).
Let us decompose \(\mathbb{R}^{N}\) into \(N\)-dimensional unit hypercubes \(Q_{i}\) and set
\[l_{n}=\sup_{i\in\mathbb{N}_{+}}\lVert u_{1,n}\rVert_{L^{q}(Q_{i})}.\]
It is not difficult to conclude that \(l_{n}\) can be attained by some \(i_{n}\in\mathbb{N}_{+}\). We claim that
\[\liminf_{n\to\infty}l_{n}>0.\]
Indeed,
\[\lVert u_{1,n}\rVert_{q}^{q}=\sum_{i=1}^{\infty}\lVert u_{1,n}\rVert_{L^{q}( Q_{i})}^{q}\leqslant l_{n}^{q-p}\sum_{i=1}^{\infty}\lVert u_{1,n}\rVert_{L^{q}(Q_{i})}^ {p}\leqslant Cl_{n}^{q-p}\sum_{i=1}^{n}\lVert\nabla u_{1,n}\rVert_{L^{p}(Q_{i })}^{p}\leqslant Cl_{n}^{q-p},\]
this proves the claim.
Let \(y_{n}^{1}\) be the center of \(Q_{i_{n}}\). It is not difficult to observe that \(|y_{n}^{1}|\to+\infty\), since \(u_{1,n}\to 0\) in \(L^{q}_{loc}(\mathbb{R}^{N})\). Set
\[v_{1,n}:=u_{1,n}(\cdot+y_{n}^{1}),\]
then \(\{v_{1,n}\}\) is a PS sequence for \(E_{\infty,\lambda}\) and there exists \(w^{1}\in W^{1,p}(\mathbb{R}^{N})\backslash\{0\}\) such that \(v_{1,n}\rightharpoonup w^{1}\) in \(W^{1,p}(\mathbb{R}^{N})\). By weak convergence, we know \(w^{1}\) satisfies (2.13). Moreover, by (2.6), Brezis-Lieb Lemma[6, Theorem 2] and Lemma 2.2, we have
\[u_{n}=u+u_{1,n}=u+v_{1,n}(\cdot-y_{n}^{1})=u+w^{1}(\cdot-y_{n}^{1})+[v_{1,n}( \cdot-y_{n}^{1})-w^{1}(\cdot-y_{n}^{1})],\]
\[\lVert u_{n}\rVert^{p}=\lVert u\rVert^{p}+\lVert w^{1}\rVert^{p}+\lVert v_{1,n}-w^{1}\rVert^{p}+o_{n}(1),\]
and
\[E_{\lambda}(u_{n})=E_{\lambda}(u)+E_{\infty,\lambda}(w^{1})+E_{\infty,\lambda}(v_{1,n}-w^{1})+o_{n}(1).\]
Now, we can set
\[u_{2,n}=v_{1,n}(\cdot-y_{n}^{1})-w^{1}(\cdot-y_{n}^{1}),\]
and iterate the above procedure. To complete the proof, we just need to prove that the iteration ends after finitely many steps. Suppose by contradiction that the iteration does not end; then we have
\[\|u_{n}\|^{p}\geqslant\sum_{j=1}^{\infty}\|w^{j}\|^{p}.\]
By Lemma 2.5 below, there exists a constant \(C>0\) depending on \(N\), \(p\), \(q\) and \(\lambda\) such that \(\|w^{j}\|_{W^{1,p}}\geqslant C\) for every \(j\), which implies \(\|u_{n}\|=+\infty\), which is absurd.
Finally, we give some properties of \(w\) which satisfies
\[-\Delta_{p}w+\lambda|w|^{p-2}w=|w|^{q-2}w\]
for some \(\lambda>0\). Define
\[Z_{a}:=\{w\in S_{a}:\exists\lambda>0,\text{s.t.}-\Delta_{p}w+\lambda|w|^{p-2} w=|w|^{q-2}w\}, \tag{2.18}\]
and
\[m_{a}:=\inf_{w\in Z_{a}}E_{\infty}(w). \tag{2.19}\]
**Lemma 2.5**.: _Let \(w\in W^{1,p}(\mathbb{R}^{N})\) be a non-trivial solution of_
\[-\Delta_{p}w+\lambda|w|^{p-2}w=|w|^{q-2}w\]
_for some \(\lambda>0\). Then there exists a constant \(C\) depending on \(N\), \(p\), \(q\), and \(\lambda\) such that_
\[\|w\|_{W^{1,p}}\geqslant C.\]
Proof.: By the Pohozaev identity, we have
\[\|\nabla w\|_{p}^{p}=\frac{N(q-p)}{pq}\|w\|_{q}^{q},\]
which together with
\[\|\nabla w\|_{p}^{p}+\lambda\|w\|_{p}^{p}=\|w\|_{q}^{q}\]
implies
\[\lambda\|w\|_{p}^{p}=\frac{Np-(N-p)q}{N(q-p)}\|\nabla w\|_{p}^{p}.\]
Now, by Gagliardo-Nirenberg inequality, we have
\[\|\nabla w\|_{p}^{p}=\frac{N(q-p)}{pq}\|w\|_{q}^{q}\leqslant\frac{N(q-p)}{pq}C_{N,p,q}^{q}\|\nabla w\|_{p}^{\frac{N(q-p)}{p}}\|w\|_{p}^{q-\frac{N(q-p)}{p}}=C\|\nabla w\|_{p}^{q}.\]
Therefore, \(\|w\|_{W^{1,p}}\geqslant C=C(N,p,q,\lambda)\), since \(q>p\).
**Lemma 2.6**.: _We have \(m_{a}>0\) and \(m_{a}\) can be achieved by some \(w\in Z_{a}\)._
Proof.: For every \(w\in Z_{a}\), by the Pohozaev identity and Gagliardo-Nirenberg inequality, we have
\[\|\nabla w\|_{p}^{p}=\frac{N(q-p)}{pq}\|w\|_{q}^{q}\leqslant\frac{N(q-p)}{pq}C_{ N,p,q}^{q}a^{q-\frac{N(q-p)}{p}}\|\nabla w\|_{p}^{\frac{N(q-p)}{p}}, \tag{2.20}\]
which implies \(\|\nabla w\|_{p}\geqslant C=C(N,p,q,a)\), since \(N(q-p)/p>p\). Therefore,
\[E_{\infty}(w)=\frac{1}{p}\|\nabla w\|_{p}^{p}-\frac{1}{q}\|w\|_{q}^{q}=\frac{N(q-p)-p^{2}}{Np(q-p)}\|\nabla w\|_{p}^{p}\geqslant C,\]
which implies \(m_{a}>0\). By [1, Page 2], it is not difficult to see that the inequality in (2.20) becomes an equality for some \(\psi\in Z_{a}\), and \(E_{\infty}(\psi)=m_{a}\).
**Lemma 2.7**.: \(m_{a}\) _is decreasing for \(a\in(0,+\infty)\) and_
\[a^{p\theta}m_{a}=b^{p\theta}m_{b}\quad\forall a,b>0,\]
_where_
\[\theta=\frac{Np-q(N-p)}{N(q-p)-p^{2}}.\]
Proof.: By Lemma 2.6, there exists \(w_{b}\in Z_{b}\) such that \(E_{\infty}(w_{b})=m_{b}\). Moreover, there exists \(\lambda_{b}>0\) such that
\[-\Delta_{p}w_{b}+\lambda_{b}|w_{b}|^{p-2}w_{b}=|w_{b}|^{q-2}w_{b}.\]
Therefore, by Pohozaev identity, we have
\[E_{\infty}(w_{b})=\frac{1}{p}\|\nabla w_{b}\|_{p}^{p}-\frac{1}{q}\|w_{b}\|_{q }^{q}=\frac{N(q-p)-p^{2}}{Np(q-p)}\|\nabla w_{b}\|_{p}^{p}=m_{b}. \tag{2.21}\]
Let \(w=\alpha w_{b}(\beta\cdot)\), where
\[\alpha=\Big{(}\frac{b}{a}\Big{)}^{\frac{p^{2}}{N(q-p)-p^{2}}}\quad\text{and} \quad\beta=\Big{(}\frac{b}{a}\Big{)}^{\frac{p(q-p)}{N(q-p)-p^{2}}}.\]
Direct calculations show that \(w\in Z_{a}\) and
\[-\Delta_{p}w+\lambda_{b}\beta^{p}|w|^{p-2}w=|w|^{q-2}w.\]
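Indeed, both claims follow directly from the definitions of \(\alpha\) and \(\beta\); we record the computation for the reader's convenience. Since \(\alpha^{p}\beta^{-N}=\big{(}\frac{b}{a}\big{)}^{\frac{p^{3}-Np(q-p)}{N(q-p)-p^{2}}}=\big{(}\frac{a}{b}\big{)}^{p}\), we have
\[\|w\|_{p}^{p}=\alpha^{p}\beta^{-N}\|w_{b}\|_{p}^{p}=\Big{(}\frac{a}{b}\Big{)}^{p}b^{p}=a^{p},\]
so \(w\in S_{a}\), while \(\beta^{p}=\alpha^{q-p}\) gives
\[-\Delta_{p}w=\alpha^{p-1}\beta^{p}\big{(}|w_{b}|^{q-2}w_{b}-\lambda_{b}|w_{b}|^{p-2}w_{b}\big{)}(\beta\,\cdot)=|w|^{q-2}w-\lambda_{b}\beta^{p}|w|^{p-2}w,\]
which is the displayed equation, and hence \(w\in Z_{a}\).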
Thus, \(E_{\infty}(w)\geqslant m_{a}\). By (2.21), we have
\[E_{\infty}(w) =\frac{1}{p}\|\nabla w\|_{p}^{p}-\frac{1}{q}\|w\|_{q}^{q}=\frac{ N(q-p)-p^{2}}{Np(q-p)}\|\nabla w\|_{p}^{p}\] \[=\frac{N(q-p)-p^{2}}{Np(q-p)}\alpha^{p}\beta^{p-N}\|\nabla w_{b}\| _{p}^{p}=\Big{(}\frac{b}{a}\Big{)}^{p\theta}m_{b},\]
which implies
\[b^{p\theta}m_{b}\geqslant a^{p\theta}m_{a}.\]
Similarly, we have
\[a^{p\theta}m_{a}\geqslant b^{p\theta}m_{b}.\]
Proof of Theorem 1.1
In this section, we will prove Theorem 1.1. For some fixed \(\delta\in(0,1)\), we make the following assumptions on \(V\) and \(W\).
\[\|V\|_{N/p}\leqslant(1-\delta)S, \tag{3.1}\]
\[N|q-2p|S^{-1}\|V\|_{N/p}+p^{2}S^{-\frac{p-1}{p}}\|W\|_{N/(p-1)}<B, \tag{3.2}\]
\[\big{[}AN|q-2p|+(N-p)D\big{]}S^{-1}\|V\|_{N/p}+p(pA+D)S^{-\frac{p-1}{p}}\|W\|_{ N/(p-1)}<AB, \tag{3.3}\]
and
\[N|q-2p|S^{-1}\|V\|_{N/p}+p^{2}S^{-\frac{p-1}{p}}\|W\|_{N/(p-1)}\leqslant N(q-p) -p^{2} \tag{3.4}\]
where
\[A=Np-(N-p)q,\quad B=N(q-p)-p^{2}\quad\text{and}\quad D=N(q-p)^{2}.\]
Note that, since \(p+\frac{p^{2}}{N}<q<p^{*}\), the constants \(A\), \(B\) and \(D\) are positive. Obviously, under the above assumptions, (1.6) holds. Firstly, we prove that, under the above assumptions, the functional \(E\) has a mountain pass geometry.
**Lemma 3.1**.: _For every \(u\in S_{a}\),_
\[E(u)\geqslant\frac{\delta}{p}\|\nabla u\|_{p}^{p}-\frac{1}{q}C_{N,p,q}^{q}a^{ q-\frac{N(q-p)}{p}}\|\nabla u\|_{p}^{\frac{N(q-p)}{p}}.\]
Proof.: By the Gagliardo-Nirenberg inequality, Holder inequality and Sobolev inequality, we have
\[\|u\|_{q}^{q}\leqslant C_{N,p,q}^{q}\|u\|_{p}^{q-\frac{N(q-p)}{p}}\|\nabla u\| _{p}^{\frac{N(q-p)}{p}}=C_{N,p,q}^{q}a^{q-\frac{N(q-p)}{p}}\|\nabla u\|_{p}^{ \frac{N(q-p)}{p}},\]
and
\[\int_{\mathbb{R}^{N}}V(x)|u|^{p}dx\leqslant\|V\|_{N/p}\|u\|_{p^{*}}^{p} \leqslant S^{-1}\|V\|_{N/p}\|\nabla u\|_{p}^{p}.\]
Hence,
\[E(u)\geqslant\frac{1}{p}(1-S^{-1}\|V\|_{N/p})\|\nabla u\|_{p}^{p}-\frac{1}{q}C _{N,p,q}^{q}a^{q-\frac{N(q-p)}{p}}\|\nabla u\|_{p}^{\frac{N(q-p)}{p}},\]
which together with (3.1) implies
\[E(u)\geqslant\frac{\delta}{p}\|\nabla u\|_{p}^{p}-\frac{1}{q}C_{N,p,q}^{q}a^{ q-\frac{N(q-p)}{p}}\|\nabla u\|_{p}^{\frac{N(q-p)}{p}}.\]
**Lemma 3.2**.: _For every \(u\in S_{a}\),_
\[\lim_{s\to-\infty}\|\nabla(s\star u)\|_{p}^{p}=0,\quad\lim_{s\to+\infty}\| \nabla(s\star u)\|_{p}=+\infty, \tag{3.5}\]
_and_
\[\lim_{s\to-\infty}E(s\star u)=0,\quad\lim_{s\to+\infty}E(s\star u)=-\infty. \tag{3.6}\]
Proof.: The proof of (3.5) is straightforward; here we only prove (3.6). By the Holder inequality,
\[\int_{\mathbb{R}^{N}}V(x)|s\star u|^{p}dx\leqslant\|V\|_{N/p}\|s\star u\|_{p^{*} }^{p}=e^{ps}\|V\|_{N/p}\|u\|_{p^{*}}^{p}\to 0\]
as \(s\to-\infty\). Thus
\[\lim_{s\to-\infty}E(s\star u)=\lim_{s\to-\infty}E_{\infty}(s\star u)-\frac{1}{ p}\lim_{s\to-\infty}\int_{\mathbb{R}^{N}}V(x)|s\star u|^{p}dx=0.\]
Since \(V\geqslant 0\), we have
\[\lim_{s\to+\infty}E(s\star u)\leqslant\lim_{s\to+\infty}E_{\infty}(s\star u)=-\infty.\]
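Here we recall that the dilation used throughout is the mass-preserving scaling \((s\star u)(x)=e^{Ns/p}u(e^{s}x)\), so that \(\|s\star u\|_{p}=\|u\|_{p}\), \(\|\nabla(s\star u)\|_{p}^{p}=e^{ps}\|\nabla u\|_{p}^{p}\) and \(\|s\star u\|_{q}^{q}=e^{\frac{N(q-p)}{p}s}\|u\|_{q}^{q}\); this gives (3.5) directly, and
\[E_{\infty}(s\star u)=\frac{e^{ps}}{p}\|\nabla u\|_{p}^{p}-\frac{e^{\frac{N(q-p)}{p}s}}{q}\|u\|_{q}^{q}\to-\infty\quad\text{as }s\to+\infty,\]
since \(N(q-p)-p^{2}>0\).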
This ends the proof.
For every \(c\in\mathbb{R}\) and \(R>0\), define \(E^{c}=\{u\in S_{a}:E(u)\leqslant c\}\) and
\[M_{R}=\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}=R\}.\]
From Lemmas 3.1 and 3.2, it is easy to see that \(E^{0}\neq\emptyset\) and there exist \(\tilde{R}>R_{0}>0\) such that for all \(0<R\leqslant R_{0}\), \(0<M_{R}<M_{\tilde{R}}\). Thus, we can construct a min-max structure
\[\Gamma=\{\xi\in C([0,1],\mathbb{R}\times S_{a}):\xi(0)\in(0,A_{R_{0}}),\xi(1) \in(0,E^{0})\} \tag{3.7}\]
with associated min-max level
\[m_{V,a}=\inf_{\xi\in\Gamma}\max_{t\in[0,1]}\tilde{E}(\xi(t))>0, \tag{3.8}\]
where
\[A_{R}=\{u\in S_{a}:\|\nabla u\|_{p}\leqslant R\}\quad\text{and}\quad\tilde{E}(s,u)=E(s\star u).\]
Next, we prove \(E\) has a bounded PS sequence. Before starting the proof, we give the definition of homotopy-stable family.
**Definition 3.1**.: ([11, Definition 5.1]) _Let \(B\) be a closed subset of \(X\). We shall say that a class of \(\mathcal{F}\) of compact subsets of \(X\) is a homotopy-stable family with extended boundary \(B\) if for any set \(A\) in \(\mathcal{F}\) and any \(\eta\in C([0,1]\times X,X)\) satisfying \(\eta(t,x)=x\) for all \((t,x)\) in \((\{0\}\times X)\cup([0,1]\times B)\) we have that \(\eta(\{1\}\times A)\in\mathcal{F}\)._
**Lemma 3.3**.: ([11, Theorem 5.2]) _Let \(\varphi\) be a \(C^{1}\)-functional on a complete connected \(C^{1}\)-Finsler manifold \(X\) and consider a homotopy-stable family \(\mathcal{F}\) with an extend closed boundary \(B\). Set \(c=c(\varphi,\mathcal{F})\) and let \(F\) be a closed subset of \(X\) satisfying_
(F'1)_\(A\cap F\backslash B\neq\emptyset\) for every \(A\in\mathcal{F}\), and_
(F'2)_\(\sup\varphi(B)\leqslant c\leqslant\inf\varphi(F)\). Then, for any sequence of sets \(\{A_{n}\}\) in \(\mathcal{F}\) such that \(\lim_{n\to\infty}\sup_{A_{n}}\varphi=c\), there exists a sequence \(\{x_{n}\}\) in \(X\backslash B\) such that_
(i)_\(\lim_{n\to\infty}\varphi(x_{n})=c\),_
(ii)_\(\lim_{n\to\infty}\|d\varphi(x_{n})\|=0\),_
(iii)_\(\lim_{n\to\infty}\operatorname{dist}(x_{n},F)=0\),_
(iv)_\(\lim_{n\to\infty}\operatorname{dist}(x_{n},A_{n})=0\)._
**Lemma 3.4**.: _There exists a bounded PS sequence \(\{u_{n}\}\) for \(E|_{S_{a}}\) at the level \(m_{V,a}\), that is_
\[E(u_{n})\to m_{V,a}\quad\text{and}\quad\|E^{\prime}(u_{n})\|_{(T_{u_{n}}S_{a})^ {*}}\to 0\quad\text{as $n\to\infty$}, \tag{3.9}\]
_such that_
\[\|\nabla u_{n}\|_{p}^{p}-\frac{N(q-p)}{pq}\|u_{n}\|_{q}^{q}-\frac{1}{p}\int_{ \mathbb{R}^{N}}V(x)(N|u_{n}|^{p}+p|u_{n}|^{p-2}u_{n}\nabla u_{n}\cdot x)dx\to 0 \tag{3.10}\]
_as \(n\to\infty\). Moreover, the related Lagrange multipliers of \(\{u_{n}\}\)_
\[\lambda_{n}=-\frac{1}{a^{p}}E^{\prime}(u_{n})u_{n}, \tag{3.11}\]
_converges to \(\lambda>0\) as \(n\to\infty\)._
Proof.: Firstly, we prove the existence of PS sequence. Let \(X=\mathbb{R}\times S_{a}\), \(\mathcal{F}=\Gamma\) which is given by (3.7) and \(B=(0,A_{R_{0}})\cup(0,E^{0})\). By Definition 3.1, \(\Gamma\) is a homotopy-stable family of compact subsets of \(\mathbb{R}\times S_{a}\) with extend closed boundary \((0,A_{R_{0}})\cup(0,E^{0})\). Let
\[\varphi=\tilde{E}(s,u),\quad c=m_{V,a}\quad\text{and}\quad F=\{(s,u)\in\mathbb{R}\times S_{a}:\tilde{E}(s,u)\geqslant c\}.\]
We can check that (F'1) and (F'2) in Lemma 3.3 are satisfied. Consider a sequence
\[\{A_{n}\}:=\{\xi_{n}\}=\{(\alpha_{n},\beta_{n})\}\subset\Gamma\]
such that
\[m_{V,a}\leqslant\max_{t\in[0,1]}\tilde{E}(\xi_{n}(t))<m_{V,a}+\frac{1}{n}\]
(we may assume that \(\alpha_{n}=0\); otherwise, replace \(\{(\alpha_{n},\beta_{n})\}\) by \(\{(0,\alpha_{n}\star\beta_{n})\}\)). Then, by Lemma 3.3, there exists a sequence \(\{(s_{n},v_{n})\}\subset\mathbb{R}\times S_{a}\) for \(\tilde{E}\) at the level \(m_{V,a}\) such that
\[\partial_{s}\tilde{E}(s_{n},v_{n})\to 0\quad\text{and}\quad\|\partial_{u}\tilde{E}(s_{n},v_{n})\|_{(T_{v_{n}}S_{a})^{*}}\to 0\quad\text{as $n\to\infty$}.\]
Moreover, by (iv) in Lemma 3.3, we have
\[|s_{n}|+\text{dist}_{W^{1,p}}(v_{n},\beta_{n}([0,1]))\to 0\quad\text{as $n\to\infty$},\]
which implies \(s_{n}\to 0\) as \(n\to\infty\). Therefore, we have
\[E(s_{n}\star v_{n})=\tilde{E}(s_{n},v_{n})\to m_{V,a}\quad\text{as $n\to\infty$}\]
and
\[E^{\prime}(s_{n}\star v_{n})(s_{n}\star\varphi)=\partial_{u}\tilde{E}(s_{n},v _{n})\varphi=o_{n}(1)\|\varphi\|=o_{n}(1)\|s_{n}\star\varphi\|\]
for every \(\varphi\in T_{v_{n}}S_{a}\). Let \(\{u_{n}\}=\{(s_{n}\star v_{n})\}\). Then \(\{u_{n}\}\) is a PS sequence for \(E|_{S_{a}}\) at the level \(m_{V,a}\). Differentiating \(\tilde{E}\) with respect to \(s\), we obtain (3.10).
Next, we claim that \(\{u_{n}\}\) is bounded in \(W^{1,p}(\mathbb{R}^{N})\). Set
\[a_{n}:=\|\nabla u_{n}\|_{p}^{p},\quad b_{n}:=\|u_{n}\|_{q}^{q},\]
\[c_{n}:=\int_{\mathbb{R}^{N}}V(x)|u_{n}|^{p}dx\quad\text{and}\quad d_{n}:=\int_ {\mathbb{R}^{N}}V(x)|u_{n}|^{p-2}u_{n}\nabla u_{n}\cdot xdx.\]
By (3.9) and (3.10), we have
\[\frac{1}{p}a_{n}-\frac{1}{q}b_{n}-\frac{1}{p}c_{n}=m_{V,a}+o_{n}(1) \tag{3.12}\]
and
\[a_{n}-\frac{N(q-p)}{pq}b_{n}-\frac{N}{p}c_{n}-d_{n}=o_{n}(1), \tag{3.13}\]
which implies
\[a_{n}=\frac{Np(q-p)}{B}m_{V,a}-\frac{N(2p-q)}{B}c_{n}-\frac{p^{2}}{B}d_{n}+o_{n }(1).\]
Since
\[|c_{n}|\leqslant S^{-1}\|V\|_{N/p}a_{n},\quad\text{and}\quad|d_{n}|\leqslant S^ {-\frac{p-1}{p}}\|W\|_{N/(p-1)}a_{n}, \tag{3.14}\]
we get
\[\Big{(}B-N|q-2p|S^{-1}\|V\|_{N/p}-p^{2}S^{-\frac{p-1}{p}}\|W\|_{N/(p-1)}\Big{)}a_{n}\leqslant Np(q-p)m_{V,a}+o_{n}(1).\]
By assumption (3.2), \(\{a_{n}\}\) is bounded and
\[a_{n}\leqslant\frac{Np(q-p)m_{V,a}}{B-N|q-2p|S^{-1}\|V\|_{N/p}-p^{2}S^{-\frac{ p-1}{p}}\|W\|_{N/(p-1)}}+o_{n}(1). \tag{3.15}\]
Finally, we claim \(\lambda>0\). By (3.9), there exists \(\lambda_{n}\in\mathbb{R}\) such that
\[E^{\prime}(u_{n})\varphi=\lambda_{n}\int_{\mathbb{R}^{N}}|u_{n}|^{p-2}u_{n} \varphi dx+o_{n}(1)\|\varphi\|_{W^{1,p}},\]
and hence we can choose \(\lambda_{n}\) as in (3.11). Since \(\{a_{n}\}\) is bounded, \(\{b_{n}\}\), \(\{c_{n}\}\), \(\{d_{n}\}\) and \(\{\lambda_{n}\}\) are also bounded, and, up to a subsequence, we may assume that they converge to \(a_{0}\), \(b_{0}\), \(c_{0}\), \(d_{0}\) and \(\lambda\) respectively. By (3.12), (3.13), (3.14) and (3.15), we have
\[\lambda a^{p} =-\lim_{n\to\infty}E^{\prime}(u_{n})u_{n}=-a_{0}+b_{0}+c_{0}\] \[=\frac{1}{B}\Big{\{}p[Np-(N-p)q]m_{V,a}-(N-p)(q-p)c_{0}-p(q-p)d_{0}\Big{\}}\] \[\geqslant\frac{1}{B}\Big{\{}p[Np-(N-p)q]m_{V,a}-(N-p)(q-p)S^{-1}\|V\|_{N/p}a_{0}-p(q-p)S^{-\frac{p-1}{p}}\|W\|_{N/(p-1)}a_{0}\Big{\}}\] \[\geqslant\frac{p}{B}\Big{\{}A-\frac{(N-p)DS^{-1}\|V\|_{N/p}}{B-N|q-2p|S^{-1}\|V\|_{N/p}-p^{2}S^{-(p-1)/p}\|W\|_{N/(p-1)}}-\frac{pDS^{-(p-1)/p}\|W\|_{N/(p-1)}}{B-N|q-2p|S^{-1}\|V\|_{N/p}-p^{2}S^{-(p-1)/p}\|W\|_{N/(p-1)}}\Big{\}}m_{V,a}\] \[=\frac{p}{B}\,\frac{AB-[AN|q-2p|+(N-p)D]S^{-1}\|V\|_{N/p}-p(pA+D)S^{-(p-1)/p}\|W\|_{N/(p-1)}}{B-N|q-2p|S^{-1}\|V\|_{N/p}-p^{2}S^{-(p-1)/p}\|W\|_{N/(p-1)}}\,m_{V,a}.\]
Therefore, assumption (3.3) implies \(\lambda>0\).
**Lemma 3.5**.: _Let \(u\) be a weak solution of (1.1). If (3.4) holds, then \(E(u)\geqslant 0\)._
Proof.: By the Pohozaev identity, assumption (3.4) and (3.14), we have
\[E(u) =\frac{1}{p}\|\nabla u\|_{p}^{p}-\frac{1}{q}\|u\|_{q}^{q}-\frac{1}{p}\int_{\mathbb{R}^{N}}V(x)|u|^{p}dx\] \[=\frac{N(q-p)-p^{2}}{Np(q-p)}\|\nabla u\|_{p}^{p}+\frac{2p-q}{p(q-p)}\int_{\mathbb{R}^{N}}V(x)|u|^{p}dx+\frac{p}{N(q-p)}\int_{\mathbb{R}^{N}}V(x)|u|^{p-2}u\nabla u\cdot x\,dx\] \[\geqslant\frac{1}{Np(q-p)}\Big{[}N(q-p)-p^{2}-N|q-2p|S^{-1}\|V\|_{N/p}-p^{2}S^{-\frac{p-1}{p}}\|W\|_{N/(p-1)}\Big{]}\|\nabla u\|_{p}^{p}\] \[\geqslant 0.\]
**Lemma 3.6**.: _For every \(a>0\), there holds \(m_{V,a}<m_{a}\)._
Proof.: By Lemma 2.6, there exists \(w_{a}\in Z_{a}\) such that \(E_{\infty}(w_{a})=m_{a}\). Let
\[\xi(t)=\big{(}0,[(1-t)h_{0}+th_{1}]\star w_{a}\big{)},\]
where \(h_{0}\ll-1\) such that \(\|\nabla(h_{0}\star w_{a})\|_{p}<R_{0}\) and \(h_{1}\gg 1\) such that \(E(h_{1}\star w_{a})<0\). Then, \(\xi\in\Gamma\) and \(\max_{t\in[0,1]}\tilde{E}(\xi(t))\geqslant m_{V,a}\). Since
\[\max_{t\in[0,1]}\tilde{E}(\xi(t)) =\max_{t\in[0,1]}E\big{(}[(1-t)h_{0}+th_{1}]\star w_{a}\big{)}\] \[<\max_{t\in[0,1]}E_{\infty}\big{(}[(1-t)h_{0}+th_{1}]\star w_{a}\big{)}\leqslant\max_{s\in\mathbb{R}}E_{\infty}(s\star w_{a})\] \[=m_{a},\]
we have \(m_{V,a}<m_{a}\).
**End of the proof of Theorem 1.1.** Lemma 3.5 shows that (1.1) has no solution with negative energy. Now, let us prove the existence result. By Lemma 3.4, there exist a PS sequence \(\{u_{n}\}\) for \(E|_{S_{a}}\) at level \(m_{V,a}\) and \(u\in W^{1,p}(\mathbb{R}^{N})\) such that \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\). Moreover, the Lagrange multipliers \(\lambda_{n}\to\lambda>0\) as \(n\to\infty\).
Next, we prove \(u_{n}\to u\) in \(W^{1,p}(\mathbb{R}^{N})\). It is not difficult to prove that \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\) at level \(m_{V,a}+\frac{\lambda}{p}a^{p}\). Suppose by contradiction that \(\{u_{n}\}\) does not converge strongly to \(u\) in \(W^{1,p}(\mathbb{R}^{N})\). Then, by Lemma 2.4, there exist \(k\in\mathbb{N}_{+}\) and \(y_{n}^{j}\in\mathbb{R}^{N}\,(1\leqslant j\leqslant k)\) such that
\[u_{n}=u+\sum_{j=1}^{k}w^{j}(\cdot-y_{n}^{j})+o_{n}(1)\quad\text{in }W^{1,p}( \mathbb{R}^{N}),\]
where \(w^{j}\) satisfies
\[-\Delta_{p}w+\lambda|w|^{p-2}w=|w|^{q-2}w.\]
Letting \(n\to\infty\), by (2.16), we have
\[m_{V,a}+\frac{\lambda}{p}a^{p}=E_{\lambda}(u)+\sum_{j=1}^{k}F_{\infty,\lambda }(w^{j})=E(u)+\sum_{j=1}^{k}E_{\infty}(w^{j})+\frac{\lambda}{p}\|u\|_{p}^{p}+ \frac{\lambda}{p}\sum_{j=1}^{k}\|w^{j}\|_{p}^{p}.\]
By (2.15), we know
\[a^{p}=\|u_{n}\|_{p}^{p}=\|u\|_{p}^{p}+\sum_{j=1}^{k}\|w^{j}\|_{p}^{p}.\]
Thus,
\[m_{V,a}=E(u)+\sum_{j=1}^{k}E_{\infty}(w^{j}). \tag{3.16}\]
Since \(\|w^{j}\|_{p}^{p}\leqslant a^{p}\), by Lemma 2.7,
\[E_{\infty}(w^{j})\geqslant m_{\|w^{j}\|_{p}}\geqslant m_{a},\]
which together with (3.16) and Lemma 3.5 implies \(m_{V,a}\geqslant m_{a}\), a contradiction with Lemma 3.6.
Now, by strong convergence, we know \(u\in S_{a}\) satisfies (1.1).
## 4 Proof of Theorem 1.2
### Existence of local minimizer
We give the following assumption on \(V\) and \(a\):
\[a^{\gamma}\|V\|_{r}^{\frac{r[N(q-p)-p^{2}]}{N(q-p)r-Np}}<K, \tag{4.1}\]
where \(K>0\) depends on \(N,p,q\) and \(r\), and
\[\gamma=q-\frac{N(q-p)}{p}+\frac{[(N-p)(q-p)r-Np][N(q-p)-p^{2}]}{p[N(q-p)r-Np]}.\]
We will see later that \(\gamma>0\), hence (1.7) can be established.
Let
\[\alpha=\frac{pr}{r-1}\Longleftrightarrow r=\frac{\alpha}{\alpha-p}.\]
For every \(u\in S_{a}\), by the Gagliardo-Nirenberg inequality and the Hölder inequality, we have
\[\|u\|_{q}^{q}\leqslant C_{N,p,q}^{q}\|u\|_{p}^{q-\frac{N(q-p)}{p}}\|\nabla u\| _{p}^{\frac{N(q-p)}{p}}=C_{N,p,q}^{q}a^{q-\frac{N(q-p)}{p}}\|\nabla u\|_{p}^{ \frac{N(q-p)}{p}},\]
and
\[\int_{\mathbb{R}^{N}}V(x)|u|^{p}dx\leqslant\|V\|_{r}\|u\|_{\alpha}^{p} \leqslant C_{N,p,\alpha}^{p}\|V\|_{r}a^{p-\frac{N}{r}}\|\nabla u\|_{p}^{\frac {N}{r}},\]
which implies
\[E(u)\geqslant\frac{1}{p}\|\nabla u\|_{p}^{p}-\frac{1}{q}C_{N,p,q}^{q}a^{q- \frac{N(q-p)}{p}}\|\nabla u\|_{p}^{\frac{N(q-p)}{p}}-\frac{1}{p}C_{N,p,\alpha} ^{p}\|V\|_{r}a^{p-\frac{N}{r}}\|\nabla u\|_{p}^{\frac{N}{r}}. \tag{4.2}\]
Define the function \(h:\mathbb{R}^{+}\to\mathbb{R}\)
\[h(t):=\frac{1}{p}t^{p}-\frac{1}{q}C_{N,p,q}^{q}a^{q-\frac{N(q-p)}{p}}t^{\frac{ N(q-p)}{p}}-\frac{1}{p}C_{N,p,\alpha}^{p}\|V\|_{r}a^{p-\frac{N}{r}}t^{\frac{N}{r}}. \tag{4.3}\]
By (4.2), we have \(E(u)\geqslant h(\|\nabla u\|_{p})\) for every \(u\in S_{a}\).
**Lemma 4.1**.: _Let \(N\geqslant 1\), \(p>1\), \(r\in(\max\{1,\frac{N}{p}\},+\infty]\) and assume that (4.1) holds. Then, for every \(a>0\), there exists a positive constant \(C\) depending on \(N\), \(p\), \(q\) and \(r\) such that_
\[\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}=R_{a}\}>0,\]
_where_
\[R_{a}=Ca^{\frac{(N-p)(q-p)r-Np}{N(q-p)r-Np}}\|V\|_{r}^{\frac{pr}{N(q-p)r-Np}}.\]
Proof.: \(h(t)>0\) if and only if \(g(t)<1\) with
\[g(t)=\frac{p}{q}C_{N,p,q}^{q}a^{q-\frac{N(q-p)}{p}}t^{\frac{N(q-p)}{p}-p}+C_{N,p,\alpha}^{p}\|V\|_{r}a^{p-\frac{N}{r}}t^{\frac{N}{r}-p}.\]
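Note that \(g\) is of the form \(g(t)=c_{1}t^{\sigma_{1}}+c_{2}t^{-\sigma_{2}}\) with \(c_{1},c_{2}>0\), \(\sigma_{1}=\frac{N(q-p)}{p}-p>0\) and \(\sigma_{2}=p-\frac{N}{r}>0\), so that \(g(t)\to+\infty\) as \(t\to 0^{+}\) and as \(t\to+\infty\), while
\[g^{\prime}(t)=\sigma_{1}c_{1}t^{\sigma_{1}-1}-\sigma_{2}c_{2}t^{-\sigma_{2}-1}=0\Longleftrightarrow t=\Big{(}\frac{\sigma_{2}c_{2}}{\sigma_{1}c_{1}}\Big{)}^{\frac{1}{\sigma_{1}+\sigma_{2}}},\]
so this unique critical point is necessarily the global minimizer of \(g\).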
In particular, since \(r>N/p\), \(g\) has a unique critical point
\[\bar{t}=Ca^{\frac{(N-p)(q-p)r-Np}{N(q-p)r-Np}}\|V\|_{r}^{\frac{pr}{N(q-p)r-Np}}\]
which is a global minimizer, where \(C>0\) depends on \(N\), \(p\), \(q\) and \(r\). The minimum level is
\[g(\bar{t})=Ca^{\gamma}\|V\|_{r}^{\frac{r[N(q-p)-p^{2}]}{N(q-p)r-Np}},\]
where
\[\gamma=q-\frac{N(q-p)}{p}+\frac{[(N-p)(q-p)r-Np][N(q-p)-p^{2}]}{p[N(q-p)r-Np]} >0,\]
since the minimum level is increasing with respect to \(a\). Therefore, if there exists \(t>0\) such that \(g(t)<1\), we must have
\[a^{\gamma}\|V\|_{r}^{\frac{r[N(q-p)-p^{2}]}{N(q-p)r-Np}}<K,\]
where \(K>0\) depends on \(N\), \(p\), \(q\) and \(r\); that is, (4.1) holds.
Let \(R_{a}=\bar{t}\). Then, under assumption (4.1), we have \(h(R_{a})>0\). Thus, by (4.2),
\[\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}=R_{a}\}\geqslant h(R_{a})>0.\]
Let \(a_{*}\) be the supremum of the values of \(a\) for which (4.1) holds. Define
\[c_{V,a}=\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}\leqslant R_{a}\}, \tag{4.4}\]
where \(0<a<a_{*}\) and \(R_{a}\) is given by Lemma 4.1.
**Lemma 4.2**.: _If assumption (1.8) holds, then \(c_{V,a}<0\) for every \(0<a<a_{*}\)._
Proof.: Let \(\varphi\in S_{a}\) such that (1.8) holds. For every \(t>0\), we have
\[E(t\varphi)\leqslant-\frac{1}{q}t^{q}\|\varphi\|_{q}^{q}<0.\]
Let \(\bar{t}=\frac{R_{a}}{\|\nabla\varphi\|_{p}}\). Then \(\|\nabla(\bar{t}\varphi)\|_{p}=R_{a}\) and \(E(\bar{t}\varphi)<0\). It is not difficult to prove that for every \(0<b\leqslant a\), there is
\[\inf\{E(u):u\in S_{b},\|\nabla u\|_{p}=R_{a}\}>0. \tag{4.5}\]
Suppose that \(\bar{t}\leqslant 1\). Since
\[\bar{t}\varphi\in\{u\in S_{\bar{t}a}:\|\nabla u\|_{p}=R_{a}\},\]
(4.5) gives \(E(\bar{t}\varphi)>0\), a contradiction. Therefore, \(\bar{t}>1\) and
\[\varphi\in\{u\in S_{a}:\|\nabla u\|_{p}\leqslant R_{a}\}.\]
By the definition of \(c_{V,a}\), we have
\[c_{V,a}\leqslant E(\varphi)<0.\]
**Lemma 4.3**.: _If assumption (1.8) holds, then for every \(0<b<a<a_{*}\), there is_
\[c_{V,a}\leqslant\inf\{E(u):u\in S_{b},\|\nabla u\|_{p}\leqslant R_{a}\}<0.\]
Proof.: Let \(\varphi\in S_{a}\) such that (1.8) holds. By Lemma 4.2, we know
\[\varphi\in\{u\in S_{a}:\|\nabla u\|_{p}\leqslant R_{a}\}.\]
Since
\[\psi=\frac{b}{a}\varphi\in\{u\in S_{b}:\|\nabla u\|_{p}\leqslant R_{a}\}\]
and
\[E(\psi)\leqslant-\frac{1}{q}\Big{(}\frac{b}{a}\Big{)}^{q}\|\varphi\|_{q}^{q}<0,\]
we have
\[\inf\{E(u):u\in S_{b},\|\nabla u\|_{p}\leqslant R_{a}\}<0.\]
For sufficiently small \(\varepsilon>0\), there exists \(u\in S_{b}\) with \(\|\nabla u\|_{p}\leqslant R_{a}\) such that
\[E(u)<\inf\{E(u):u\in S_{b},\|\nabla u\|_{p}\leqslant R_{a}\}+\varepsilon<0.\]
Let \(v=\frac{a}{b}u\in S_{a}\). Then,
\[E(v) =\frac{1}{p}\Big{(}\frac{a}{b}\Big{)}^{p}\Big{(}\|\nabla u\|_{p}^ {p}-\int_{\mathbb{R}^{N}}V(x)|u|^{p}dx\Big{)}-\frac{1}{q}\Big{(}\frac{a}{b} \Big{)}^{q}\|u\|_{q}^{q}\leqslant\Big{(}\frac{a}{b}\Big{)}^{p}E(u)\] \[<\inf\{E(u):u\in S_{b},\|\nabla u\|_{p}\leqslant R_{a}\}+\varepsilon.\]
Now, we claim that \(\|\nabla v\|_{p}\leqslant R_{a}\). If not, there exists \(1\leqslant\bar{t}<\frac{a}{b}\) such that \(\|\nabla(\bar{t}u)\|_{p}=R_{a}\), \(\|\bar{t}u\|_{p}<a\) and
\[E(\bar{t}u)\leqslant\bar{t}^{p}E(u)<0,\]
a contradiction with (4.5). Therefore,
\[v\in\{u\in S_{a}:\|\nabla u\|_{p}\leqslant R_{a}\}.\]
By the definition of \(c_{V,a}\), we have
\[c_{V,a}\leqslant E(v)<\inf\{E(u):u\in S_{b},\|\nabla u\|_{p}\leqslant R_{a}\} +\varepsilon.\]
**Proof of the existence of local minimizer.** Let \(\{u_{n}\}\) be a minimizing sequence for \(c_{V,a}\). By Lemmas 4.1 and 4.2, we know
\[\liminf_{n\to\infty}\lVert\nabla u_{n}\rVert_{p}<R_{a}.\]
Therefore, Ekeland's variational principle implies that \(\{u_{n}\}\) can be chosen to be a PS sequence for \(E|_{S_{a}}\) at level \(c_{V,a}\). Since \(\{u_{n}\}\) is bounded in \(W^{1,p}(\mathbb{R}^{N})\), there exists \(u\in W^{1,p}(\mathbb{R}^{N})\) such that, up to a subsequence, \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\) as \(n\to\infty\). Therefore, the Lagrange multipliers
\[\lambda_{n}=-\frac{E^{\prime}(u_{n})u_{n}}{a^{p}}\]
are also bounded and there exists \(\lambda\in\mathbb{R}\) such that \(\lambda_{n}\to\lambda\) as \(n\to\infty\). Since \(\{u_{n}\}\) is a PS sequence for \(E|_{S_{a}}\), we have
\[E^{\prime}(u_{n})u_{n}+\lambda_{n}a^{p}=o_{n}(1),\]
that is
\[o_{n}(1)=\frac{1}{p}\Big{(}\lVert\nabla u_{n}\rVert_{p}^{p}-\lVert u_{n}\rVert_{q}^{q}-\int_{\mathbb{R}^{N}}V(x)|u_{n}|^{p}dx\Big{)}+\frac{1}{p}\lambda_{n}a^{p}\leqslant E(u_{n})+\frac{1}{p}\lambda_{n}a^{p}=c_{V,a}+\frac{1}{p}\lambda a^{p}+o_{n}(1),\]
which implies
\[\lambda\geqslant-\frac{pc_{V,a}}{a^{p}}>0.\]
It is not difficult to prove that \(\{u_{n}\}\) is also a PS sequence for \(E_{\lambda}\) at level \(c_{V,a}+\frac{\lambda}{p}a^{p}\). Suppose that \(\{u_{n}\}\) does not converge strongly to \(u\) in \(W^{1,p}(\mathbb{R}^{N})\). Then, by Lemma 2.4, there exist \(k\in\mathbb{N}_{+}\) and \(y_{n}^{j}\in\mathbb{R}^{N}\,(1\leqslant j\leqslant k)\) such that
\[u_{n}=u+\sum_{j=1}^{k}w^{j}(\cdot-y_{n}^{j})+o_{n}(1)\quad\text{in }W^{1,p}( \mathbb{R}^{N}),\]
where \(w^{j}\) satisfies
\[-\Delta_{p}w+\lambda|w|^{p-2}w=|w|^{q-2}w.\]
Letting \(n\to\infty\), by (2.16), we have
\[c_{V,a}+\frac{\lambda}{p}a^{p}=E_{\lambda}(u)+\sum_{j=1}^{k}F_{\infty,\lambda }(w^{j})=E(u)+\sum_{j=1}^{k}E_{\infty}(w^{j})+\frac{\lambda}{p}\lVert u\rVert _{p}^{p}+\frac{\lambda}{p}\sum_{j=1}^{k}\lVert w^{j}\rVert_{p}^{p}.\]
By (2.15), we know
\[a^{p}=\lVert u_{n}\rVert_{p}^{p}=\lVert u\rVert_{p}^{p}+\sum_{j=1}^{k}\lVert w ^{j}\rVert_{p}^{p}.\]
Thus,
\[c_{V,a}=E(u)+\sum_{j=1}^{k}E_{\infty}(w^{j}). \tag{4.6}\]
Set \(\lVert u\rVert_{p}=b\leqslant a\). Since
\[\lVert\nabla u\rVert_{p}\leqslant\liminf_{n\to\infty}\lVert\nabla u_{n}\rVert _{p}<R_{a},\]
by Lemma 4.3, we have
\[E(u)\geqslant\inf\{E(v):v\in S_{b},\|\nabla v\|_{p}\leqslant R_{a}\}\geqslant c_{V,a},\]
which together with (4.6) implies
\[\sum_{j=1}^{k}E_{\infty}(w^{j})\leqslant 0,\]
a contradiction with Lemma 2.6.
Now, by strong convergence, we know that \(u\in S_{a}\) satisfies (1.1). \(\square\)
**Lemma 4.4**.: _Under the assumption of (1.10), (1.8) holds._
Proof.: Without loss of generality, we can assume that \(x_{0}=0\). If \(N<p\), consider the function \(\varphi(x)=A_{\varepsilon}e^{-\varepsilon|x|}\in S_{a}\), then
\[\int_{\mathbb{R}^{N}}(|\nabla\varphi|^{p}-V(x)|\varphi|^{p})dx \leqslant\int_{\mathbb{R}^{N}}|\nabla\varphi|^{p}dx-\eta\int_{B_{ \delta}}|\varphi|^{p}dx\] \[=N\omega_{N}\Big{(}\int_{0}^{+\infty}r^{N-1}|\varphi^{\prime}(r) |^{p}dr-\eta\int_{0}^{\delta}r^{N-1}|\varphi(r)|^{p}dr\Big{)}\] \[=N\omega_{N}A_{\varepsilon}^{p}\Big{(}\varepsilon^{p-N}\int_{0}^ {+\infty}s^{N-1}e^{-ps}ds-\eta\int_{0}^{\delta}r^{N-1}e^{-p\varepsilon r}dr \Big{)}\] \[<0\]
by taking \(\varepsilon\) sufficiently small. If \(N=p\), consider the function
\[\varphi(x)=A_{k}\left\{\begin{array}{ll}1,&|x|\leqslant\delta, \\ \frac{\log(k\delta)-\log|x|}{\log k},&\delta<|x|\leqslant k\delta,\\ 0,&|x|>k\delta,\end{array}\right.\]
then
\[\int_{\mathbb{R}^{N}}(|\nabla\varphi|^{N}-V(x)|\varphi|^{N})dx \leqslant\int_{\mathbb{R}^{N}}|\nabla\varphi|^{N}dx-\eta\int_{B_{ \delta}}|\varphi|^{N}dx\] \[=N\omega_{N}\Big{(}\int_{\delta}^{+\infty}r^{N-1}|\varphi^{ \prime}(r)|^{N}dr-\eta\int_{0}^{\delta}r^{N-1}|\varphi(r)|^{N}dr\Big{)}\] \[=N\omega_{N}A_{k}^{N}\Big{(}\frac{1}{|\log k|^{N-1}}-\frac{\eta \delta^{N}}{N}\Big{)}\] \[<0\]
by taking \(k\) sufficiently large. If \(N>p\), consider the function
\[\varphi(x)=A_{k}\left\{\begin{array}{ll}1,&|x|\leqslant\delta, \\ (1+\frac{1}{k})\delta^{\frac{N-p}{p-1}}|x|^{\frac{p-N}{p-1}}-\frac{1}{k},&\delta <|x|\leqslant(k+1)^{\frac{p-1}{N-p}}\delta,\\ 0,&|x|>(k+1)^{\frac{p-1}{N-p}}\delta,\end{array}\right.\]
then
\[\int_{\mathbb{R}^{N}}(|\nabla\varphi|^{p}-V(x)|\varphi|^{p})dx\]
\[\leqslant\int_{\mathbb{R}^{N}}\lvert\nabla\varphi\rvert^{p}dx-\eta\int_{B_{ \delta}}\lvert\varphi\rvert^{p}dx\] \[=N\omega_{N}\Big{(}\int_{\delta}^{+\infty}r^{N-1}\lvert\varphi^{ \prime}(r)\rvert^{p}dr-\eta\int_{0}^{\delta}r^{N-1}\lvert\varphi(r)\rvert^{p}dr \Big{)}\] \[<N\omega_{N}A_{k}^{p}\Big{(}\big{(}\frac{N-p}{p-1}\big{)}^{p} \big{(}1+\frac{1}{k}\big{)}^{p}\delta^{\frac{p(N-p)}{p-1}}\int_{\delta}^{+ \infty}r^{-\frac{N-1}{p-1}}dr-\frac{\eta\delta^{N}}{N}\Big{)}\] \[=\omega_{N}A_{k}^{p}\delta^{N-p}\Big{(}N\big{(}\frac{N-p}{p-1} \big{)}^{p-1}\big{(}1+\frac{1}{k}\big{)}^{p}-\eta\delta^{p}\Big{)}<0\]
by taking \(k\) sufficiently large.
### Existence of mountain pass solution
We give following assumptions on \(V\), \(W\) and \(a\).
\[\max\Big{\{}a^{\big{(}p-\frac{N}{r}\big{)}\frac{p(q-p)}{N(q-p)-p^{2}}}\|V\|_{ r},a^{\big{(}p-1-\frac{N}{s}\big{)}\frac{p(q-p)}{N(q-p)-p^{2}}}\|W\|_{s}\Big{\}}<L_{1}, \tag{4.7}\]
\[\max\{a^{p-\frac{N}{r}}\|V\|_{r},a^{p-1-\frac{N}{s}}\|W\|_{s}\}<L_{2}, \tag{4.8}\]
where \(L_{1},L_{2}\) are positive constants depending on \(N,p,q,r\) and \(s\). Obviously, (1.9) can be established under above assumptions.
**Lemma 4.5**.: _Under the assumption (4.7), there exist a positive constant \(C=C(N,p,q)\) and \(0<\kappa=\kappa(N,p,q,r)<1\) such that_
\[\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}=R_{a}\}>\kappa m_{a},\]
_where_
\[R_{a}=Ca^{-\theta}\quad\text{with}\quad\theta=\frac{Np-q(N-p)}{N(q-p)-p^{2}}.\]
Proof.: Consider the function
\[\tilde{h}(t)=\frac{1}{p}t^{p}-\frac{1}{q}C_{N,p,q}^{q}a^{q-\frac{N(q-p)}{p}}t ^{\frac{N(q-p)}{p}}.\]
Then, direct calculations show that
\[\max_{t>0}\tilde{h}(t)=\tilde{h}(\tilde{R})=C_{1}a^{-p\theta},\]
where
\[\theta=\frac{Np-q(N-p)}{N(q-p)-p^{2}},\]
\(\tilde{R}=Ca^{-\theta}\), and \(C,C_{1}\) are positive constants depending on \(N\), \(p\) and \(q\). Note that
\[h(\tilde{R})=Ca^{-p\theta}(1-C_{2}a^{\tilde{\theta}}\|V\|_{r}),\]
where \(h\) is given by (4.3) and
\[\tilde{\theta}=\Big{(}p-\frac{N}{r}\Big{)}\frac{p(q-p)}{N(q-p)-p^{2}}.\]
Therefore, under assumption (4.7), we can ensure that \(h(\tilde{R})\geqslant C_{3}a^{-p\theta}\).
Let \(R_{a}=\tilde{R}\). Then for every \(u\in S_{a}\), \(\|\nabla u\|_{p}=R_{a}\), by (4.2), we have
\[E(u)\geqslant h(\|\nabla u\|_{p})=h(\tilde{R})\geqslant C_{3}a^{-p\theta},\]
which implies
\[\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}=R_{a}\}>C_{3}a^{-p\theta}.\]
By Lemma 2.7, we know \(m_{a}=C_{4}a^{-p\theta}\). Therefore,
\[\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}=R_{a}\}>\kappa m_{a}.\]
By Lemma 4.5, we know there exists \(0<\tilde{R}_{a}<R_{a}\) such that
\[\sup\{E(u):u\in S_{a},\|\nabla u\|_{p}\leqslant\tilde{R}_{a}\}<\inf\{E(u):u \in S_{a},\|\nabla u\|_{p}=R_{a}\}.\]
Define
\[c_{a}:=\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}\leqslant R_{a}\}.\]
Now, we can construct a min-max structure
\[\Gamma=\{\xi\in C([0,1],\mathbb{R}\times S_{a}):\xi(0)\in(0,A_{\tilde{R}_{a}}),\xi(1)\in(0,E^{\min\{-1,2c_{a}\}})\}, \tag{4.9}\]
where
\[E^{c}=\{u\in S_{a}:E(u)\leqslant c\}\quad\text{and}\quad A_{R}=\{u\in S_{a}: \|\nabla u\|_{p}\leqslant R\}.\]
The associated min-max level is
\[m_{V,a}=\inf_{\xi\in\Gamma}\max_{t\in[0,1]}\tilde{E}(\xi(t))>0,\]
where \(\tilde{E}(s,u)=E(s\star u)\).
**Lemma 4.6**.: _Under the assumption (4.7), \(\kappa m_{a}\leqslant m_{V,a}<m_{a}\)._
Proof.: The proof of \(m_{V,a}<m_{a}\) is similar to that of Lemma 3.6. For every \(\xi(t)=(s_{t},u_{t})\in\Gamma\), it is not difficult to see that \(\|\nabla(s_{0}\star u_{0})\|_{p}<R_{a}\) and \(\|\nabla(s_{1}\star u_{1})\|_{p}>R_{a}\). Thus, there exists \(\tau\in(0,1)\) such that \(\|\nabla(s_{\tau}\star u_{\tau})\|_{p}=R_{a}\). By Lemma 4.5, we have
\[\max_{t\in[0,1]}\tilde{E}(\xi(t))\geqslant\tilde{E}(\xi(\tau))=E(s_{\tau} \star u_{\tau})\geqslant\inf\{E(u):u\in S_{a},\|\nabla u\|_{p}=R_{a}\}>\kappa m _{a},\]
which implies \(m_{V,a}\geqslant\kappa m_{a}\).
Similar to Lemma 3.4, we can obtain a PS sequence \(\{u_{n}\}\) for \(E|_{S_{a}}\) at level \(m_{V,a}\) which satisfies (3.10).
**Lemma 4.7**.: \(\{u_{n}\}\) _is bounded in \(W^{1,p}(\mathbb{R}^{N})\)._
Proof.: Similar to the proof of Lemma 3.4, we have
\[\frac{1}{p}a_{n}-\frac{1}{q}b_{n}-\frac{1}{p}c_{n}=m_{V,a}+o_{n}(1),\]
and
\[a_{n}-\frac{N(q-p)}{pq}b_{n}-\frac{N}{p}c_{n}-d_{n}=o_{n}(1),\]
which implies
\[a_{n}=\frac{Np(q-p)}{B}m_{V,a}-\frac{N(2p-q)}{B}c_{n}-\frac{p^{2}}{B}d_{n}+o_{n }(1).\]
By the Gagliardo-Nirenberg inequality and the Hölder inequality, we obtain
\[|c_{n}|\leqslant\|V\|_{r}\|u_{n}\|_{\alpha}^{p}\leqslant C_{N,p,\alpha}^{p}a^{p-\frac{N}{r}}\|V\|_{r}\|\nabla u_{n}\|_{p}^{\frac{N}{r}}=C_{N,p,\alpha}^{p}a^{p-\frac{N}{r}}\|V\|_{r}a_{n}^{\frac{N}{pr}}, \tag{4.10}\]
and
\[|d_{n}| \leqslant\|W\|_{s}\|u_{n}\|_{\beta}^{p-1}\|\nabla u_{n}\|_{p} \leqslant C_{N,p,\beta}^{p-1}a^{p-1-\frac{N}{s}}\|W\|_{s}\|\nabla u_{n}\|_{p}^ {1+\frac{N}{s}}\] \[=C_{N,p,\beta}^{p-1}a^{p-1-\frac{N}{s}}\|W\|_{s}a_{n}^{\frac{1}{p }\left(1+\frac{N}{s}\right)}, \tag{4.11}\]
where
\[r=\frac{\alpha}{\alpha-p}\quad\text{and}\quad s=\frac{p\beta}{(p-1)(\beta-p)} \quad\text{with }p\leqslant\alpha,\beta<p^{*}.\]
Therefore,
\[a_{n}\leqslant\frac{Np(q-p)}{B}m_{V,a}+\frac{N|2p-q|}{B}C_{N,p,\alpha}^{p}a^{ p-\frac{N}{r}}\|V\|_{r}a_{n}^{\frac{N}{pr}}+\frac{p^{2}}{B}C_{N,p,\beta}^{p-1}a^{ p-1-\frac{N}{s}}\|W\|_{s}a_{n}^{\frac{1}{p}\left(1+\frac{N}{s}\right)}+o_{n}(1).\]
We know
\[\frac{N}{pr}<1\quad\text{and}\quad\frac{1}{p}\Big{(}1+\frac{N}{s}\Big{)}<1.\]
If \(a_{n}\geqslant 1\), by assumption (4.8), we can choose \(L_{2}\) sufficiently small such that
\[N|2p-q|C_{N,p,\alpha}^{p}a^{p-\frac{N}{r}}\|V\|_{r}+p^{2}C_{N,p,\beta}^{p-1}a^ {p-1-\frac{N}{s}}\|W\|_{s}<\frac{B}{2},\]
which implies
\[a_{n}\leqslant\frac{2Np(q-p)m_{V,a}}{B}+o_{n}(1).\]
Therefore, \(\{a_{n}\}\) is bounded and
\[a_{n}\leqslant\max\Big{\{}1,\frac{2Np(q-p)m_{V,a}}{B}+o_{n}(1)\Big{\}}. \tag{4.12}\]
Now, we can deduce that \(b_{n}\), \(c_{n}\), \(d_{n}\) and the Lagrange multipliers \(\lambda_{n}\) are bounded as well. Then, we can assume that \(a_{n}\), \(b_{n}\), \(c_{n}\), \(d_{n}\) and \(\lambda_{n}\) converge to \(a_{0}\), \(b_{0}\), \(c_{0}\), \(d_{0}\) and \(\lambda\) respectively.
**Lemma 4.8**.: _Under the assumption (4.8), \(\lambda>0\)._
Proof.: By (4.10) and (4.11), we have
\[\lambda a^{p} =-\lim_{n\to\infty}E^{\prime}(u_{n})u_{n}=-a_{0}+b_{0}+c_{0}\] \[=\frac{1}{B}\Big{\{}p[Np-(N-p)q]m_{V,a}-(N-p)(q-p)c_{0}-p(q-p)d_{0}\Big{\}}\] \[\geqslant\frac{1}{B}\Big{\{}p[Np-(N-p)q]m_{V,a}-(N-p)(q-p)C_{N,p,\alpha}^{p}a^{p-\frac{N}{r}}\|V\|_{r}a_{0}^{\frac{N}{pr}}-p(q-p)C_{N,p,\beta}^{p-1}a^{p-1-\frac{N}{s}}\|W\|_{s}a_{0}^{\frac{1}{p}\left(1+\frac{N}{s}\right)}\Big{\}}.\]
Suppose first that \(a_{0}\geqslant 1\). Since
\[\frac{N}{pr}<1\quad\text{and}\quad\frac{1}{p}\Big{(}1+\frac{N}{s}\Big{)}<1,\]
by (4.8) and (4.12), for sufficiently small \(L_{2}\), we have
\[\lambda a^{p}\geqslant C(m_{V,a}-\varepsilon a_{0})\geqslant Cm_{V,a}>0.\]
If \(a_{0}<1\), then, by (4.8) and Lemma 4.6, for sufficiently small \(L_{2}\), we have
\[\lambda a^{p}\geqslant C(m_{a}-\varepsilon)>0.\]
Proof of the existence of mountain pass type solution.: Firstly, we can prove that \(\{u_{n}\}\) is a PS sequence for \(E_{\lambda}\) at level \(m_{V,a}+\frac{\lambda}{p}a^{p}\). By (4.12), we can assume that \(u_{n}\rightharpoonup u\) in \(W^{1,p}(\mathbb{R}^{N})\). If \(\{u_{n}\}\) does not converge strongly to \(u\) in \(W^{1,p}(\mathbb{R}^{N})\), then, by Lemma 2.4, there exist \(k\in\mathbb{N}_{+}\) and \(y_{n}^{j}\in\mathbb{R}^{N}\,(1\leqslant j\leqslant k)\) such that
\[u_{n}=u+\sum_{j=1}^{k}w^{j}(\cdot-y_{n}^{j})+o_{n}(1)\quad\text{in }W^{1,p}( \mathbb{R}^{N}),\]
where \(w^{j}\) satisfies
\[-\Delta_{p}w+\lambda|w|^{p-2}w=|w|^{q-2}w.\]
Similar to the proof of Theorem 1.1, we have
\[m_{V,a}=E(u)+\sum_{j=1}^{k}E_{\infty}(w^{j}). \tag{4.13}\]
Denote \(\|u\|_{p}=\mu\) and \(\|w^{j}\|_{p}=\beta_{j}\). Then
\[a^{p}=\mu^{p}+\sum_{j=1}^{k}\beta_{j}^{p}.\]
By Lemma 2.7 and the definition of \(m_{a}\),
\[\sum_{j=1}^{k}E_{\infty}(w^{j})\geqslant\sum_{j=1}^{k}m_{\|w^{j}\|_{p}} \geqslant m_{\beta_{1}}=m_{a}\Big{(}\frac{\beta_{1}}{a}\Big{)}^{-p\theta},\]
where
\[\theta=\frac{Np-q(N-p)}{N(q-p)-p^{2}}.\]
We claim that, under the assumption (4.7), we have
\[E(u)\geqslant-\theta m_{a}\Big{(}\frac{\mu}{a}\Big{)}^{p}. \tag{4.14}\]
Therefore, using the fact that \(\mu^{p}+\beta_{1}^{p}\leqslant a^{p}\), there is
\[E(u)+\sum_{j=1}^{k}E_{\infty}(w^{j}) \geqslant-\theta m_{a}\Big{(}\frac{\mu}{a}\Big{)}^{p}+m_{a}\Big{(} \frac{\beta_{1}}{a}\Big{)}^{-p\theta}\geqslant m_{a}\Big{[}-\theta+\theta \Big{(}\frac{\beta_{1}}{a}\Big{)}^{p}+\Big{(}\frac{\beta_{1}}{a}\Big{)}^{-p \theta}\Big{]}\] \[\geqslant m_{a}\min_{0<x\leqslant 1}\{-\theta+\theta x+x^{-\theta} \}=m_{a},\]
a contradiction with \(m_{V,a}<m_{a}\) (the last equality above holds because the function \(x\mapsto-\theta+\theta x+x^{-\theta}\) is decreasing on \((0,1]\) and equals \(1\) at \(x=1\)).
Now, by strong convergence, we know that \(u\in S_{a}\) satisfies (1.1). \(\square\)
Finally, we prove (4.14).
**Lemma 4.9**.: _Under the assumption (4.7), (4.14) is true._
Proof.: Set
\[a_{0}=\|\nabla u\|_{p}^{p},\quad b_{0}=\|u\|_{q}^{q},\quad c_{0}=\int_{\mathbb{ R}^{N}}V(x)|u|^{p}dx,\quad\text{and}\quad d_{0}=\int_{\mathbb{R}^{N}}V(x)|u|^{p-2}u \nabla u\cdot xdx.\]
Then, by Pohozaev identity
\[a_{0}-\frac{N(q-p)}{pq}b_{0}-\frac{N}{p}c_{0}-d_{0}=0,\]
(4.10) and (4.11), we have
\[E(u) =\frac{1}{p}a_{0}-\frac{1}{q}b_{0}-\frac{1}{p}c_{0}=\frac{N(q-p)- p^{2}}{Np(q-p)}a_{0}+\frac{2p-q}{p(q-p)}c_{0}+\frac{p}{N(q-p)}d_{0}\] \[\geqslant C\Big{(}2a_{0}-C_{1}\mu^{p-\frac{N}{r}}\|V\|_{r}a_{0}^{ \frac{N}{pr}}-C_{1}\mu^{p-1-\frac{N}{s}}\|W\|_{s}a_{0}^{\frac{1}{p}\big{(}1+ \frac{N}{s}\big{)}}\Big{)},\]
where \(C,C_{1}\) are positive constants depending on \(N\), \(p\), \(q\), \(r\) and \(s\). It is not difficult to prove that
\[a_{0}-C_{1}\mu^{p-\frac{N}{r}}\|V\|_{r}a_{0}^{\frac{N}{pr}}\geqslant-C_{2}\mu ^{p}\|V\|_{r}^{\frac{pr}{pr-N}},\]
and
\[a_{0}-C_{1}\mu^{p-1-\frac{N}{s}}\|W\|_{s}a_{0}^{\frac{1}{p}\big{(}1+\frac{N}{s }\big{)}}\geqslant-C_{3}\mu^{p}\|W\|_{s}^{\frac{ps}{(p-1)s-N}}.\]
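Both estimates follow from the elementary inequality
\[x-\Lambda x^{\sigma}\geqslant-(1-\sigma)\sigma^{\frac{\sigma}{1-\sigma}}\Lambda^{\frac{1}{1-\sigma}}\qquad\text{for all }x\geqslant 0,\ \Lambda>0,\ 0<\sigma<1,\]
obtained by minimizing the left-hand side over \(x\), applied with \(\sigma=\frac{N}{pr}\) and \(\sigma=\frac{1}{p}\big{(}1+\frac{N}{s}\big{)}\), respectively.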
Therefore, under assumption (4.7), by Lemma 2.7, we have
\[E(u)\geqslant-\theta m_{a}\Big{(}\frac{\mu}{a}\Big{)}^{p},\]
by taking \(L_{1}\) sufficiently small. |
2302.00512 | Decay law of magnetic turbulence with helicity balanced by chiral
fermions | In plasmas composed of massless electrically charged fermions, chirality can
be interchanged with magnetic helicity while preserving the total chirality
through the quantum chiral anomaly. The decay of turbulent energy in plasmas
such as those in the early Universe and compact stars is usually controlled by
certain conservation laws. In the case of zero total chirality, when the
magnetic helicity density balances with the appropriately scaled chiral
chemical potential to zero, the total chirality no longer determines the decay.
We propose that in such a case, an adaptation to the Hosking integral, which is
conserved in nonhelical magnetically dominated turbulence, controls the decay
in turbulence with helicity balanced by chiral fermions. We show, using a high
resolution numerical simulation, that this is indeed the case. The magnetic
energy density decays and the correlation length increases with time just like
in nonhelical turbulence with vanishing chiral chemical potential. But here,
the magnetic helicity density is nearly maximum and shows a scaling with time
$t$ proportional to $t^{-2/3}$. This is unrelated to the $t^{-2/3}$ decay of
magnetic {\it energy} in fully helical magnetic turbulence. The modulus of the
chiral chemical potential decays in the same fashion. This is much slower than
the exponential decay previously expected in theories of asymmetric baryon
production from the hypermagnetic helicity decay after axion inflation. | Axel Brandenburg, Kohei Kamada, Jennifer Schober | 2023-02-01T15:34:06Z | http://arxiv.org/abs/2302.00512v3 | # Decay law of magnetic turbulence with helicity balanced by chiral fermions
###### Abstract
In plasmas composed of massless electrically charged fermions, chirality can be interchanged with magnetic helicity while preserving the total chirality through the quantum chiral anomaly. The decay of turbulent energy in plasmas such as those in the early Universe and compact stars is usually controlled by certain conservation laws. In the case of zero total chirality, when the magnetic helicity density balances with the appropriately scaled chiral chemical potential to zero, the total chirality no longer determines the decay. We propose that in such a case, an adaptation to the Hosking integral, which is conserved in nonhelical magnetically dominated turbulence, controls the decay in turbulence with helicity balanced by chiral fermions. We show, using a high resolution numerical simulation, that this is indeed the case. The magnetic energy density decays and the correlation length increases with time just like in nonhelical turbulence with vanishing chiral chemical potential. But here, the magnetic helicity density is nearly maximum and shows a novel scaling with time \(t\) proportional to \(t^{-2/3}\). This is unrelated to the \(t^{-2/3}\) decay of magnetic _energy_ in fully helical magnetic turbulence. The modulus of the chiral chemical potential decays in the same fashion. This is much slower than the exponential decay previously expected in theories of asymmetric baryon production from the hypermagnetic helicity decay after axion inflation.
Preprint numbers: NORDITA-2023-001, RESCEU-1/23
Magnetic helicity characterizes the knottedness of magnetic field lines and plays important roles in cosmological, astrophysical, and laboratory plasmas. Since the early work of Woltjer of 1958 [1], we know that the magnetic helicity is an invariant of the ideal magnetohydrodynamic (MHD) equations. Even in the non-ideal case of finite conductivity, it is asymptotically conserved in the limit of large magnetic Reynolds numbers [2]. This is because, unlike the magnetic energy dissipation, which is finite at large magnetic Reynolds numbers, the magnetic helicity dissipation converges to zero in that limit [3]. The magnetic helicity controls the decay of magnetic fields in closed or periodic domains, provided the magnetic helicity is finite. However, even when the net magnetic helicity over the whole volume vanishes, there can still be random fluctuations of magnetic helicity. In this case, the conservation of magnetic helicity still plays an important role, but only in smaller subvolumes, as was shown only recently [4]. The conserved quantity in that case is what is now known as the Hosking integral [5; 6], which characterizes magnetic helicity fluctuations in smaller subvolumes [4].
At relativistic energies, the chirality of fermions combines with the helicity of the magnetic field to a total chirality that is _strictly_ conserved in a periodic or closed domain - even for finite magnetic diffusivity [7; 8] which is a consequence of the chiral anomaly [9; 10]. This can have a number of consequences. There is an instability that can amplify a helical magnetic field [11]. It is now often referred to as the chiral plasma instability (CPI) [12] and it causes the chiral chemical potential carrying the chirality of the fermions to decay such that the total chirality remains unchanged [13; 14; 15]. Conversely, if a helical magnetic field decays, the chiral chemical potential can increase [16; 17]. Finally, when the chiral chemical potential balances the magnetic helicity to produce vanishing total chirality of the system, which is realized in, e.g., cosmological MHD after axion inflation [18; 19; 20], the magnetic field can only decay. It has been thought that the decay is triggered by the CPI and that it would be therefore exponential [18; 19]. In this Letter, however, we show that this decay occurs only in a power-law fashion. This has consequences for explaining the baryon asymmetry of the Universe [21; 22; 23] and for theories of primordial magnetic fields, which will open up a new direction for early Universe cosmology model building. The purpose of this Letter is to show that the decay of the magnetic field in chiral MHD is governed - similarly to nonhelical MHD - by a new conserved quantity that we call the adapted Hosking integral. While the model adopted here is based on quantum electrodynamics, the extension to the realistic cosmological models based on the Standard Model of
particle physics is straightforward; see, e.g., Ref. [14; 24].
The Hosking integral \(I_{\rm H}\) is defined as the asymptotic limit of the relevant magnetic helicity density correlation integral, \({\cal I}_{\rm H}(R)\), for scales \(R\) large compared to the correlation length of the turbulence, \(\xi_{\rm M}\), but small compared to the system size \(L\). The function \({\cal I}_{\rm H}(R)\) is given by
\[{\cal I}_{\rm H}(R)=\int_{V_{R}}\langle h(\mathbf{x})h(\mathbf{x}+\mathbf{r})\rangle\,{\rm d }^{3}r, \tag{1}\]
where \(V_{R}\) is the volume of a ball of radius \(R\) and, in MHD, \(h=\mathbf{A}\cdot\mathbf{B}\) is the magnetic helicity density with \(\mathbf{A}\) being the magnetic vector potential, so \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\). Here, angle brackets denote averages over the volume \(L^{3}\).
For relativistic chiral plasmas, on the other hand, we now amend the magnetic helicity density with a contribution from the chiral chemical potential \(\mu_{5}\). We work here with the scaled chiral chemical potential \(\mu_{5}\to\mu_{5}^{{}^{\prime}}=(4\alpha/\hbar c)\,\mu_{5}\), where \(\alpha\) is the fine structure constant, \(\hbar\) is the reduced Planck constant, and \(c\) is the speed of light. Our rescaled \(\mu_{5}^{\prime}\) has the dimension of a wave number. From now on, we drop the prime and only work with the rescaled chiral chemical potential. We also define the quantity \(\lambda=3\hbar c\,(8\alpha/k_{\rm B}T)^{2}\), where \(k_{\rm B}\) is the Boltzmann constant and \(T\) is the temperature. We define the total helicity density \(h_{\rm tot}\equiv\mathbf{A}\cdot\mathbf{B}+2\mu_{5}/\lambda\) and replace \(h\to h_{\rm tot}\) when defining the adapted Hosking integral.
Similarly to earlier studies of non-relativistic chiral plasmas (\(\mu_{5}\to 0\)) with a helical magnetic field, the case of a finite net chirality, \(\langle h_{\rm tot}\rangle\neq 0\), is governed by the conservation law for \(\langle h_{\rm tot}\rangle\). Of course, when \(\langle h_{\rm tot}\rangle=0\), it is still conserved, but it can then no longer determine the dynamics of the system. This is when we expect, instead, \(I_{\rm H}\) to control the dynamics of the decay. As before, we define \(I_{\rm H}={\cal I}_{\rm H}(R_{*})\) for values of \(R_{*}\) for which \({\cal I}_{\rm H}(R)\) shows a plateau, which is here the case for values of \(R\) that are comparable to or less than \(\xi_{\rm M}\). In the following, we focus on this case using numerical simulations to compute the decay properties of a turbulent magnetic field and the conservation properties of \(I_{\rm H}\) using the total helicity in a relativistic plasma.
Setting now \(c=1\), the evolution equations for \(\mathbf{A}\) and \(\mu_{5}\) are [8]
\[\frac{\partial\mathbf{A}}{\partial t} = \eta(\mu_{5}\mathbf{B}-\mathbf{J})+\mathbf{u}\times\mathbf{B},\quad\mathbf{J}=\mathbf{ \nabla}\times\mathbf{B}, \tag{2}\] \[\frac{\partial\mu_{5}}{\partial t} = -\frac{2}{\lambda}\eta(\mu_{5}\mathbf{B}-\mathbf{J})\cdot\mathbf{B}-\mathbf{ \nabla}\cdot(\mu_{5}\mathbf{u})+D_{5}\nabla^{2}\mu_{5}, \tag{3}\]
where \(\eta\) is the magnetic diffusivity, \(D_{5}\) is the diffusion coefficient of \(\mu_{5}\), spin flipping is here neglected (but see [25] for cases where it is not), and \(\mathbf{u}\) is the velocity, which is governed by the compressible Navier-Stokes equations [8; 26; 27]
\[\frac{{\rm D}\mathbf{u}}{{\rm D}t} = \frac{2}{\rho}\mathbf{\nabla}\cdot(\rho\nu\boldsymbol{\mathcal{S}})-\frac{1}{4}\mathbf{\nabla}\ln\rho+\frac{\mathbf{u}}{3}\left(\mathbf{\nabla}\cdot\mathbf{u}+\mathbf{u}\cdot\mathbf{\nabla}\ln\rho\right) \tag{4}\] \[- \frac{\mathbf{u}}{\rho}\left[\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})+\eta\mathbf{J}^{2}\right]+\frac{3}{4\rho}\mathbf{J}\times\mathbf{B},\] \[\frac{\partial\ln\rho}{\partial t} = -\frac{4}{3}\left(\mathbf{\nabla}\cdot\mathbf{u}+\mathbf{u}\cdot\mathbf{\nabla}\ln\rho\right)+\frac{1}{\rho}\left[\mathbf{u}\cdot(\mathbf{J}\times\mathbf{B})+\eta\mathbf{J}^{2}\right]\]
where \({\cal S}_{ij}=(\partial_{i}u_{j}+\partial_{j}u_{i})/2-\delta_{ij}\mathbf{\nabla} \cdot\mathbf{u}/3\) are the components of the rate-of-strain tensor, \(\nu\) is the kinematic viscosity, \(\rho\) is the density (which includes the rest mass density), and the ultrarelativistic equation of state for the pressure \(p=\rho/3\) has been employed. We assume uniform \(\nu\), \(\eta\), and \(D_{5}\) such that \(\nu=\eta=D_{5}\). Our use of Eq. (4) compared to the nonrelativistic counterpart only affects the kinetic energy and not the magnetic field evolution; see Ref. [28] for comparisons in another context.
We define spectra of a quantity \(h(\mathbf{x})\) as \({\rm Sp}(h)=\oint_{4\pi}|\tilde{h}|^{2}\,k^{2}{\rm d}\Omega_{k}/(2\pi L)^{3}\), where a tilde denotes the quantity in Fourier space and \(\Omega_{k}\) is the solid angle in Fourier space, so that \(\int{\rm Sp}(h)\,{\rm d}k=\langle h^{2}\rangle\). Here, \(k\equiv|\mathbf{k}|\). The magnetic energy spectrum is \(E_{\rm M}(k,t)\equiv{\rm Sp}(\mathbf{B})/2\) and \(\int E_{\rm M}\,{\rm d}k=\langle\mathbf{B}^{2}\rangle/2\) is the mean magnetic energy density. The mean magnetic helicity density is \({\cal H}_{\rm M}=\langle\mathbf{A}\cdot\mathbf{B}\rangle\), the magnetic helicity spectrum is \(H_{\rm M}(k,t)\) with \(\int H_{\rm M}\,{\rm d}k={\cal H}_{\rm M}\), and \(\xi_{\rm M}={\cal E}_{\rm M}^{-1}\int k^{-1}E_{\rm M}\,{\rm d}k\) is the correlation length.
For an initially uniform \(\mu_{5}\equiv\mu_{50}\), Eq. (2) has exponentially growing solutions proportional to \(e^{i\mathbf{k}\cdot\mathbf{x}+\gamma_{5}t}\), when \(k<\mu_{5}\). The maximum growth rate is \(\gamma_{5}=\mu_{5}^{2}\eta/4\) for \(k=k_{5}\equiv\mu_{5}/2\)[8; 26]. As initial condition for \(\mathbf{A}\), we consider a Gaussian distributed random field with a magnetic energy spectrum that is a broken power law with \(E_{\rm M}(k,t)\propto k^{4}\) for \(k<k_{0}\), motivated by causality constraints [29], and a Kolmogorov-type spectrum, \(E_{\rm M}(k,t)\propto k^{-5/3}\), for \(k>k_{0}\), which may be expected if there is a turbulent forward cascade. By setting \(k_{0}=1\) for the spectral peak, we fix the units of velocity and length. The unit of time is then \((k_{0})^{-1}\). We set initially \(\rho=\rho_{0}=1\), which then also fixes the units of energy.
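For orientation, the maximum growth rate \(\gamma_{5}\) quoted above can be recovered by linearizing Eq. (2) about a uniform \(\mu_{5}\) with \(\mathbf{u}=0\): a maximally helical mode with \(\mathbf{\nabla}\times\mathbf{B}=k\mathbf{B}\) obeys
\[\partial_{t}\mathbf{B}=\eta\,k(\mu_{5}-k)\,\mathbf{B},\qquad\text{i.e.,}\quad\gamma(k)=\eta k(\mu_{5}-k),\]
which is positive for \(0<k<\mu_{5}\) and maximal at \(k=k_{5}=\mu_{5}/2\), where \(\gamma_{5}=\eta\mu_{5}^{2}/4\).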
We solve the governing equations using the Pencil Code[30], where the equations are already implemented [31; 32]. We consider a cubic domain of size \(L^{3}\), so the smallest wave number is \(k_{1}=2\pi/L\). The largest wave number is \(k_{\rm Ny}=k_{1}N/2\), where \(N\) is the number of mesh points in one direction. In choosing our parameters, it is important to observe that \(k_{1}\ll k_{0}\ll k_{5}\ll k_{\rm Ny}\). Here, we choose \(k_{1}=0.02\), \(k_{0}=1\), \(k_{5}=5\), and \(k_{\rm Ny}=10.24\), using \(N=1024\) mesh points in each of the three directions. This means that \(|\mu_{50}|=10\), which is virtually the same as \(k_{\rm Ny}\). However, experiments with other choices, keeping \(N=1024\), showed that ours yields an acceptable compromise that still allows us to keep \(k_{1}\) small enough. We choose the sign of \(\mu_{5}\) to be negative, and adjust the amplitude of the magnetic field such that \(2\xi_{\rm M}{\cal E}_{\rm M}={\cal H}_{\rm M}=-2\mu_{50}/\lambda\). Using \(\eta=2\times 10^{-4}\) and \(\lambda=2\times 10^{4}\), we have, following
Ref. [28], \(v_{\lambda}\equiv|\mu_{50}|/\sqrt{\rho_{0}\lambda}\approx 0.07\) and \(v_{\mu}\equiv|\mu_{50}|\eta=0.002\), so \(v_{\lambda}/v_{\mu}\approx 35\gg 1\), corresponding to what is called regime I.
In Fig. 1(a), we present magnetic energy spectra at different times. We clearly see an inverse cascade where the spectral magnetic energy increases with time for \(k\ll k_{0}\) (indicated by the upward arrow), but decays for \(k\gg k_{0}\). As time goes on, the peak of the spectrum moves to smaller wave numbers with \(k_{\rm peak}\approx\xi_{\rm M}^{-1}\), where \(\xi_{\rm M}\) increases approximately like a power law, \(\xi_{\rm M}\propto t^{q}\), while the energy density decreases, also approximately like a power law with \(\mathcal{E}_{\rm M}\propto t^{-p}\). The spectral peak always evolves underneath an envelope \(\propto k^{3/2}\), which implies that \(\max[E_{\rm M}(k,t)]\propto\xi_{\rm M}(t)^{-\beta}\) with \(\beta=3/2\), indicated by the upper dashed-dotted line in Fig. 1(a).
To compute \(\mathcal{I}_{\rm H}\) (and thereby \(I_{\rm H}\)), we employ a spectral technique by computing the total helicity variance spectrum \({\rm Sp}(h_{\rm tot})\); see Fig. 1(b). Compared to the inverse cascade seen in \({\rm Sp}(\mathbf{B})\), we here see the conservation of the large-scale total helicity variance spectrum \(\propto k^{2}\). We thus obtain
\[\mathcal{I}_{H}(R,t)=L^{-3}\int w(\mathbf{k},R)\,{\rm Sp}(h_{\rm tot})\,{\rm d}^{3} \mathbf{k}/(2\pi)^{3}. \tag{5}\]
We choose \(w(k,R)=(4\pi R^{3}/3)[6j_{1}(kR)/kR]^{2}\) as weight function [6] with \(j_{n}\) being spherical Bessel functions.
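For readers who wish to reproduce this diagnostic, a minimal post-processing sketch is given below. It assumes the total helicity density \(h_{\rm tot}\) sampled on an \(N^{3}\) periodic grid of size \(L^{3}\) and uses NumPy/SciPy; the shell binning and normalization conventions are the simplest ones consistent with \(\int{\rm Sp}(h)\,{\rm d}k=\langle h^{2}\rangle\) and should be matched to the definitions above before quantitative use. The function names are illustrative and are not taken from the Pencil Code.

```python
import numpy as np
from scipy.special import spherical_jn

def helicity_variance_spectrum(h_tot, L):
    """Shell-integrated spectrum Sp(h_tot) on an N^3 periodic box of size L^3.

    Returns bin-centre wavenumbers k and Sp(k), normalized so that
    sum(Sp * dk) approximates the volume average <h_tot^2>.
    """
    N = h_tot.shape[0]
    h_k = np.fft.fftn(h_tot) / N**3                 # coefficients of h = sum_k h_k e^{i k.x}
    k1 = 2.0 * np.pi / L                            # smallest wavenumber
    k_axis = k1 * np.fft.fftfreq(N, d=1.0 / N)      # integer multiples of k1 along one axis
    kx, ky, kz = np.meshgrid(k_axis, k_axis, k_axis, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    bins = np.arange(N // 2 + 1) * k1               # shells of width k1 up to the Nyquist wavenumber
    spec, _ = np.histogram(kmag, bins=bins, weights=np.abs(h_k)**2)
    kmid = 0.5 * (bins[1:] + bins[:-1])             # bin centres (all positive)
    return kmid, spec / k1                          # divide by dk = k1

def adapted_hosking_integral(kmid, spec, R):
    """I_H(R) as a weighted integral of Sp(h_tot), cf. Eq. (5), with
    w(k,R) = (4 pi R^3 / 3) [6 j_1(kR) / (kR)]^2."""
    x = kmid * R                                    # kmid > 0, so no division by zero
    w = (4.0 * np.pi * R**3 / 3.0) * (6.0 * spherical_jn(1, x) / x)**2
    dk = kmid[1] - kmid[0]
    return np.sum(w * spec) * dk
```

A plateau of the returned value over a range of \(R\) comparable to \(\xi_{\rm M}\) would then be read off as \(I_{\rm H}\), as described above.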
In Fig. 2(a) we plot \(\mathcal{I}_{H}(R,t)\) versus \(R\) for different values of \(t\), and in Fig. 2(b) versus \(t\) for four choices of \(R\), indicated by the colored lines in both panels. It turns out that \(\mathcal{I}_{H}(R)\) is nearly flat for \(k_{0}R\ll 10\), but grows cubically for \(k_{0}R\gg 1\). This is different for non-helical MHD with non-chiral plasmas [4; 6], where \(\mathcal{I}_{H}(R)\) was found to grow cubically for small \(R\) and is flat for large \(R\), i.e., just the other way around. Cubic scaling of \(\mathcal{I}_{H}(R)\) implies that the total helicity density in subvolumes is space filling and does not change randomly. At small \(R\), the scaling is spatially flat. This is also where \({\rm Sp}(2\mu_{5}/\lambda)\) has a large contribution (in addition to that at \(k=0\)). This suggests that here the total magnetic helicity is spatially random. This is consistent with the finding that the magnetic energy produced by the CPI is rather weak [33]. Note also that the transition to cubic scaling happens only for \(R>\xi_{\rm M}\), which might explain why the Hosking integral determines the decay until the end of the simulation.
Figure 2: (a) \(\mathcal{I}_{H}(R,t)\) versus \(R\) for different times \(t_{*}\) (indicated by the same colors/line styles as in Fig. 1), and (b) \(\mathcal{I}_{H}(R,t)\) versus \(t\) (normalized) for \(R=\xi_{\rm M}(t_{*})\) marked by the four colors. The \(t^{-0.13}\) scaling is indicated as the dashed-dotted curve for comparison. In (a), the four colored symbols indicate the positions of \(k_{0}\xi_{\rm M}(t_{*})\), and in (b), the time dependencies are plotted for those \(R=\xi_{\rm M}(t_{*})\).
Figure 1: (a) Magnetic energy and (b) total helicity variance spectra at \(t=31\) (dashed), \(100\) (solid), \(316\) (dotted), \(10^{3}\) (blue), \(3.16\times 10^{3}\) (green), \(10^{4}\) (orange), and \(3.16\times 10^{4}\) (red). In (a), note that \({\rm Sp}(\mathbf{B})\) evolves underneath the envelope \(k^{3/2}\), and the upward arrow indicates the sense of time. For orientation, the slopes \(k^{-5/2}\) and \(k^{-4}\) have been indicated in what is expected to correspond to the inertial ranges in (a) and (b), respectively. In (a), the inset shows \((k/2)\,H_{\rm M}(k)\) at the last time with positive (negative) values in red (blue), and in (b), the inset compares \({\rm Sp}(2\mu_{5}/\lambda)\) (solid) with \({\rm Sp}(h_{\rm tot})\) (dotted) at the last time.
As a function of time, we see that for \(k_{0}R\approx 4\) (orange) and \(7\) (red), \({\cal I}_{H}(R,t)\) shows large excursions, but no net trend; see Fig. 2(b). These excursions are caused by the oscillatory nature of the weight function in Eq. (5) [33]. It should also be noted that the time axis is on a logarithmic scale, so the excursions are still at comparatively early times. For \(k_{0}R=1.5\) (blue), \({\cal I}_{H}(R,t)\) is nearly constant, which suggests the conservation of the adapted Hosking integral, but we see a decline at late times. In spite of the semilogarithmic representation, we can see that this decline corresponds to a \(t^{-0.13}\) scaling, which is weak and similar to what has been seen for other simulations at that resolution; see, e.g., Ref. [34]. While our choice of the relevant value of \(R\) is not well determined, we present in the following a different argument of why the adapted Hosking integral is actually conserved.
As in the case of nonrelativistic MHD (\(\mu_{5}\to 0\)), the dimensions of \({\cal I}_{\rm H}\) and \(I_{\rm H}\) are \(\,{\rm cm}^{9}\,{\rm s}^{-4}\). This implies that in \(\xi_{\rm M}\propto t^{q}\), the value of the exponent is \(q=4/9\), if the conservation of \({\cal I}_{\rm H}\) determines the time evolution of the magnetic field around the characteristic scale. Next, assuming selfsimilarity, the magnetic spectra can be collapsed on top of each other by plotting them versus \(k\xi_{\rm M}(t)\) and compensating the decline in the height by \(\xi_{\rm M}^{\beta}\) to yield the universal function \(\phi(k\xi_{\rm M})=\xi_{\rm M}^{\beta}E_{\rm M}(k\xi_{\rm M})\); see Appendix B of Ref. [6] and Refs. [34; 35] for examples in other contexts. Using also the invariance of the spectrum under rescaling [36], \({\mathbf{x}}\to{\mathbf{x}}^{\prime}={\mathbf{x}}\ell\) and \(t\to t^{\prime}=t\ell^{1/q}\), and since the dimension of \(E_{\rm M}(k,t)\) is \(\,{\rm cm}^{3}\,{\rm s}^{-2}\), we have \(E_{\rm M}(k\ell^{-1},t\ell^{1/q})=\ell^{3-2/q+\beta}[\xi_{\rm M}\ell]^{-\beta }\phi(k\xi_{\rm M})\), and therefore \(\beta=2/q-3=3/2\), which agrees with Fig. 1(a). Finally, for \({\cal E}_{\rm M}\propto t^{-p}\), we find with \({\cal E}_{\rm M}(t)=\int E_{\rm M}\,{\rm d}k\propto t^{-(\beta+1)q}\) the line \(p=2(1-q)\), which is also known as the self-similarity line [6; 35]. With \(q=4/9\), we thus obtain \(p=10/9\). This is completely analogous to the MHD case with zero magnetic helicity[37]; see also Table 2 of Ref. [34]. Thus, the cancelation of finite magnetic helicity by fermion chirality with \({\cal H}_{\rm M}(t)=-2\mu_{5}(t)/\lambda\neq 0\) has the same effect as that of zero magnetic helicity.
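Equivalently, the exponents follow from dimensional analysis alone: since \([I_{\rm H}]={\rm cm}^{9}\,{\rm s}^{-4}\) and \([{\cal E}_{\rm M}]={\rm cm}^{2}\,{\rm s}^{-2}\), the only combinations of \(I_{\rm H}\) and \(t\) with the correct dimensions are
\[\xi_{\rm M}\sim I_{\rm H}^{1/9}\,t^{4/9},\qquad{\cal E}_{\rm M}\sim I_{\rm H}^{2/9}\,t^{-10/9},\]
i.e., \(q=4/9\) and \(p=10/9\), provided \(I_{\rm H}\) is the only invariant controlling the decay.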
To understand the decay of magnetic helicity density in the present simulations, it is important to remember that the real space realizability condition of magnetic helicity [38] is always valid and implies \(|{\cal H}_{\rm M}|\leq 2\xi_{\rm M}{\cal E}_{\rm M}\). Assuming the inequality to be saturated, we find the scaling \(|{\cal H}_{\rm M}|\propto|\mu_{5}|\propto t^{-r}\) with \(r=p-q=2/3\). This is well obeyed, as is shown in Fig. 3. In the inset, we show that \(2\xi_{\rm M}{\cal E}_{\rm M}/{\cal H}_{\rm M}\approx 1\) at early times and about \(1.1\) at late times. It is thus fairly constant, confirming the validity of our underlying assumption. On top of this evolution of the chiral asymmetry, the growth rate of the CPI, \(\gamma_{5}\propto\mu_{5}^{2}\propto t^{-4/3}\), decays more rapidly than \(t^{-1}\), which causes it to grow less efficiently so as not to spoil the scaling properties of the system.
To characterize the scaling expected from the conservation of the adapted Hosking integral further, we plot in Fig. 4 the \(pq\) diagram of the instantaneous scaling exponents \(p(t)=-{\rm d}\ln{\cal E}_{\rm M}/{\rm d}\ln t\) versus \(q(t)={\rm d}\ln\xi_{\rm M}/{\rm d}\ln t\). The solution converges to a point close to the crossing point between the \(\beta=3/2\) line and the scale-invariance line \(p=2(1-q)\). The approach to the point \((p,q)=(10/9,\,4/9)\) does not occur predominantly along the \(\beta=3/2\) line, as in nonhelical standard MHD, but is now closer to the \(r=2/3\) line, where \(p=q+r\). In the unbalanced case, where the net chirality is non-vanishing, however, the decay is solely governed by \(\langle h_{\rm tot}\rangle={\rm const}\)[25].
In conclusion, we have presented evidence that, in the
Figure 4: \(pq\) diagram for times \(t=700\), \(1000\), \(1500\), \(2200\), \(3200\), \(4600\), \(6800\), \(10^{4}\), \(1.5\times 10^{4}\), \(1.5\times 10^{4}\), \(2.2\times 10^{4}\), and \(3.2\times 10^{4}\), corresponding to symbols of increasing size. The solid line denotes the scale-invariance line \(p=2(1-q)\), the dashed line the \(\beta=3/2\) line for adapted Hosking scaling, and the dashed-dotted line is the new \(r=2/3\) line that does not have any correspondence in standard MHD.
balanced case of zero total chirality, the Hosking integral, when adapted to include the chiral chemical potential, is approximately conserved around the characteristic scale. This implies decay properties for magnetic energy and correlation length that are unchanged relative to nonhelical MHD, but here with \(\mathcal{H}_{\rm M}+2\mu_{5}/\lambda=0\) (instead of \(\mathcal{H}_{\rm M}=0\)). This yields the novel scaling \(|\mathcal{H}_{\rm M}|\propto|\mu_{5}|\propto t^{-2/3}\), along with the familiar scalings \(\mathcal{E}_{\rm M}\propto t^{-10/9}\) and \(\xi_{\rm M}\propto t^{4/9}\) that also apply to the case with \(\mathcal{H}_{\rm M}=0\). These scalings have consequences for understanding the properties of the chiral magnetic effect in the early Universe [13; 18; 19; 20; 39] and young neutron stars [40; 41]. Our work has significant implications for the baryon asymmetry of the Universe generated from hypermagnetic helicity decay after axion inflation. It also exposes a rather unexpected application of the general idea behind the recently developed Hosking integral, therefore raising the hope that there may be others yet to be discovered.
We thank Valerie Domcke and Kai Schmitz for useful comments on the manuscript and Kyohei Mukaida for fruitful discussions. Support through the grant 2019-04234 from the Swedish Research Council (Vetenskapsradet) (AB), Grant-in-Aid for Scientific Research Nos. (C) JP19K03842 from the JSPS KAKENHI (KK), and the grant 185863 from the Swiss National Science Foundation (JS) are gratefully acknowledged. We acknowledge the allocation of computing resources provided by the Swedish National Infrastructure for Computing (SNIC) at the PDC Center for High Performance Computing Stockholm and Linkoping.
|
2310.00703 | A Comparative Study of Training Objectives for Clarification Facet
Generation | Due to the ambiguity and vagueness of a user query, it is essential to
identify the query facets for the clarification of user intents. Existing work
on query facet generation has achieved compelling performance by sequentially
predicting the next facet given previously generated facets based on
pre-trained language generation models such as BART. Given a query, there are
mainly two types of training objectives to guide the facet generation models.
One is to generate the default sequence of ground-truth facets, and the other
is to enumerate all the permutations of ground-truth facets and use the
sequence that has the minimum loss for model updates. The second is
permutation-invariant while the first is not. In this paper, we aim to conduct
a systematic comparative study of various types of training objectives, with
different properties of not only whether it is permutation-invariant but also
whether it conducts sequential prediction and whether it can control the count
of output facets. To this end, we propose another three training objectives of
different aforementioned properties. For comprehensive comparisons, besides the
commonly used evaluation that measures the matching with ground-truth facets,
we also introduce two diversity metrics to measure the diversity of the
generated facets. Based on an open-domain query facet dataset, i.e., MIMICS, we
conduct extensive analyses and show the pros and cons of each method, which
could shed light on model training for clarification facet generation. The code
can be found at \url{https://github.com/ShiyuNee/Facet-Generation} | Shiyu Ni, Keping Bi, Jiafeng Guo, Xueqi Cheng | 2023-10-01T15:47:07Z | http://arxiv.org/abs/2310.00703v1 | # A Comparative Study of Training Objectives for Clarification
###### Abstract.
Due to the ambiguity and vagueness of a user query, it is essential to identify the query facets for the clarification of user intents. Existing work on query facet generation has achieved compelling performance by sequentially predicting the next facet given previously generated facets based on pre-trained language generation models such as BART. Given a query, there are mainly two types of training objectives to guide the facet generation models. One is to generate the default sequence of ground-truth facets, and the other is to enumerate all the permutations of ground-truth facets and use the sequence that has the minimum loss for model updates. The second is permutation-invariant while the first is not. In this paper, we aim to conduct a systematic comparative study of various types of training objectives, with different properties of not only whether it is permutation-invariant but also whether it conducts sequential prediction and whether it can control the count of output facets. To this end, we propose another three training objectives of different aforementioned properties. For comprehensive comparisons, besides the commonly used evaluation that measures the matching with ground-truth facets, we also introduce two diversity metrics to measure the diversity of the generated facets. Based on an open-domain query facet dataset, i.e., MIMICS, we conduct extensive analyses and show the pros and cons of each method, which could shed light on model training for clarification facet generation. The code can be found at [https://github.com/ShiyuNee/Facet-Generation](https://github.com/ShiyuNee/Facet-Generation).
Query Facet, Facet Generation, Search Clarification
Existing methods typically conduct facet generation by generating the sequence of facets associated with a query (Hasserman et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2018), using two common training objectives where the ordering of a facet sequence has an impact or no impact during training. One is to generate the default sequence of facets (Hasserman et al., 2017; Krizhevsky et al., 2018), denoted as _Seq-Default_ in Table 1. This approach forces the model to generate a specific facet ordering that is only one of the possible combinations, which may lead to suboptimal performance. Realizing this issue, Hashemi et al. (Hashemi et al., 2017) propose the other training objective that enumerates the permutations of the query facets and uses the one with minimum loss to guide model training, denoted as _Seq-Min-Perm_ in Table 1. Theoretically, the training objective should be permutation-invariant since the ground-truth facets are an unordered set. However, enumerating the permutations incurs huge computation costs and only using minimum loss may not leverage the permutations of facets sufficiently. When we examine the existing methods from other perspectives (Hasserman et al., 2017; Krizhevsky et al., 2018; Krizhevsky et al., 2018) (See Table 1), we find that they all generate the next facet depending on the previously generated facets. Also, the count of generated facets is determined by the model and no more facets can be generated if the model has already output a termination symbol.
In this paper, we aim to conduct a comparative study of various training objectives for the task of clarification facet generation from multiple perspectives. To this end, we propose another three training objectives for facet generation that are all permutation-invariant but have different properties in terms of whether to sequentially predict facets based on the context of so-far generated facets and whether the count of generated facets is controllable. 1) _Seq-Avg-Perm_: It is similar to _Seq-Min-Perm_ but uses the average loss of facet permutations instead of the minimum; 2) _Set-Pred_: It predicts each facet in the ground-truth set as an independent target during training and uses sampling algorithms such as beam search to select the top k facets with the highest probabilities for inference; 3) _Seq-Set-Pred_: It sequentially predicts the remaining facet set based on arbitrarily generated facets as context. As shown in Table 1, among these methods, only Set-Pred does not output the next facet based on previously generated facets. So, it does not incur the training cost of enumerating the facet permutations. However, it could suffer from repeatedly generating similar facets since it only refers to the original query and top documents. Only Set-Pred and Seq-Set-Pred can output the designated number of facets. They focus on the facet prediction task alone, which relieves them from also having to decide how many facets to produce and could lead to potentially better performance.
Previous work mainly evaluates the effectiveness of facet generation by measuring the matching degree between the generated facets and the ground truth facets (Hasserman et al., 2017; Krizhevsky et al., 2018; Krizhevsky et al., 2018). This, however, cannot sufficiently reflect the diversity of the generated facets. To conduct systematic comparisons of different types of methods, we introduce two diversity metrics that measure the term and semantic diversity of the generated facets respectively. We evaluate models trained with the two existing objectives and three new objectives using both the commonly used matching metrics and our proposed diversity metrics. Based on MIMICS (Miyu et al., 2018), an open-domain query facet dataset, our experimental analyses demonstrate that appropriate permutation-invariant objectives can help generate better facets; facet prediction that is only based on the query and top-retrieved documents (i.e., Set-Pred) achieves compelling performance in terms of the metrics measuring matching with the ground truth but has the worst diversity performance; methods that only learn facet prediction given context (i.e., Seq-Set-Pred) have better semantic matching metrics with the ground truth but worse diversity performance than their counterpart that also learns when to stop generating facets (i.e., Seq-Avg-Perm). To sum up, the main contributions of this work include:
1) To the best of our knowledge, this is the first work that conducts a systematic comparative study on a wide variety of training objectives for clarification facet generation.
2) We propose three extra training objectives that are of different properties from the existing work and introduce two diversity-oriented metrics for evaluation.
3) We conduct comprehensive analyses and show the pros and cons of each type of method, which we believe could provide insights for future research efforts in clarification facet generation.
## 2. Related Work
Clarifying user intent is a crucial issue in information retrieval. There have been many works focusing on learning facet prediction. Additionally, some studies utilize predicted intents to generate clarifying questions, clarifying intent by asking questions to the user. So the two threads of research related to our work are: asking clarifying questions and facet prediction.
### Asking Clarifying Questions
Asking clarifying questions(CQ) is an important way to clarify the intent of a query (Krizhevsky et al., 2017). Typically, before asking a CQ, we need to determine a facet and generate based on the facet. Current research on CQ can be mainly divided into two categories: (1) ranking/selecting CQ as (Krizhevsky et al., 2017; Krizhevsky et al., 2018; Krizhevsky et al., 2018) and (2) generating CQ like (Krizhevsky et al., 2018; Krizhevsky et al., 2018; Krizhevsky et al., 2018; Krizhevsky et al., 2018; Krizhevsky et al., 2018).
For ranking/selecting CQ, Rao and Daume III (Rao and Daume, 2010) sorted questions based on the usefulness of their answers, using the Expected Value of Perfect Information as the theoretical basis for ranking. Later, Aliannejadi et al. (Aliannejadi et al., 2018) collected a dataset for CQ and proposed a baseline for selecting CQ. (Krizhevsky et al., 2018; Krizhevsky et al., 2018) conducted CQ ranking based on top-retrieved documents and negative feedback, respectively.
For CQ generation, Rao and Daume III (Rao and Daume, 2010) applied generative adversarial learning techniques when training the sequence-to-sequence question generation model. Zamani et al. (Zamani et al., 2019) proposed a rule-based model and two neural question generation models to generate CQ when given a query and its facet. Later, Dhole (Kumar et al., 2018) proposed a model that utilizes rule-based systems to generate discriminative questions, aiming to obtain clarifications on user intent. Wang and Li (Wang and Li, 2019) introduced a template-based question generation model called TG-ClariQ, which selects a template question from a candidate set. The missing parts in the template question are filled in using selected words. They converted generating CQ into a selection task. Meanwhile, Sekulic et al. (Sekulic et al., 2018) also proposed a facet-driven approach and Zhao et al. (Zhao et al., 2019) showed that such facets can be extracted from top retrieved documents. Recently, Wang et al. (Wang et al., 2019) converted CQ generation to a facet-constrained question generation task to guide effective and precise question generation.
### Facet Prediction
Current research on facet prediction can be mainly divided into two categories: (1) **facet extraction** and (2) **facet generation**.
The majority of early work on facet extraction (Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019) primarily relied on specific domains or external resources. Kohlschter et al. (Kohlschter et al., 2018) introduced an approach for extracting based on personalized PageRank link analysis and annotated taxonomies. Stoica et al. (Stoica et al., 2019) put forward a technique for generating hierarchical faceted metadata. The method utilized hypernym relations in WordNet to extract this metadata from textual descriptions of items. Subsequently, Dakka and Ipeirotis(Dakka and Ipeirotis, 2019) use entity hierarchies in Wikipedia and WordNet to extract candidate facet terms. Li et al. (Li et al., 2019) created a faceted retrieval system that showcases pertinent facets extracted from Wikipedia hyperlinks and categories. While these methods have shown promising results in certain scenarios, they often struggle under large-scale open-domain settings. Apart from the approaches mentioned above that rely on specific domains or external resources, another approach for facet extraction and generation is based on top retrieved documents. Dou et al. (Dou et al., 2019) introduced QDMiner, one of the earliest open-domain facet extraction systems. This system utilizes textual patterns to aggregate frequent lists from the top web search results and gets query dimensions based on the aggregated results. Kong and Allan (Kong and Allan, 2019) proposed a graph-based probabilistic model for determining whether a candidate term is a facet term and for identifying whether two candidate terms belong to the same query facet. Later, they extended faceted search to the general web (Kong and Allan, 2019). In their subsequent work (Li et al., 2019), the authors put forward a graphical model that optimizes the expected performance measure and selectively displays facets just for part of queries by their predicted performance.
After the rise of pre-trained language models, query facet generation has become a popular way of facet generation and achieved compelling performance. Hashemi et al. (Hashemi et al., 2019) proposed NMIR to cluster the documents prior to generation and learned a representation for each facet. Subsequently, to address the issue of matching between clustered documents and facets, as well as the influence caused by the order of generating facets, they proposed PINIMR (Saraminas et al., 2019). Samarinas et al. (Samarinas et al., 2020) revisited the task of query facet extraction and generation and considered facet generation as autoregressive text generation which produces state-of-the-art results.
Inspired by the aforementioned works, in this paper, we summarize the impact of various training objectives on facet generation and conduct a comparative analysis of the generated results. We hope to provide guidance for the research in facet generation.
## 3. Methodology
In this section, we first introduce the definition of the facet generation task. We illustrate two existing training objectives in Section 3.2 and the newly proposed three in Section 3.3.
### Task Description
Given an open-domain query, our task is to generate the associated facets based on their corresponding related search engine result pages (SERPs). We use top-retrieved documents in the SERPs as evidence to help generate better facets during both training and inference. Let \(D=\{(q_{1},D_{1},F_{1}),(q_{2},D_{2},F_{2}),\cdots,(q_{n},D_{n},F_{n})\}\) denote the training data which consists of \(n\) triples \((q_{i},D_{i},F_{i})\), where \(q_{i}\) means the \(i\)-th open-domain query, \(D_{i}=\{d_{i1},d_{i2},\cdots,d_{ik_{i}}\}\) is the set of top \(k_{i}\) retrieved documents for the given query \(q_{i}\), and \(F_{i}=\{f_{i1},f_{i2},\cdots,f_{im_{i}}\}\) represents the \(m_{i}\) ground truth facets related to \(q_{i}\). The task is to generate a set of related facets \(F\) for any given query \(q\) with its associated documents \(D\).
In this section, we describe five representative training objectives for facet generation. Two of them have been proposed in previous work and both of them conduct sequential facet prediction. They are: (1) _seq-default_(Li et al., 2019; Li et al., 2019) and (2) _seq-min-perm_(Saraminas et al., 2019). These two are order-sensitive and permutation-invariant, respectively. Moreover, we propose another three permutation-invariant training objectives: (3) _seq-avg-perm_, (4) _set-pred_, and (5) _seq-set-pred_. The comparative characteristics of these five objectives can be seen in Table 1. As in (Li et al., 2019; Li et al., 2019; Li et al., 2019), we use an autoregressive model, BART (Li et al., 2019), a Transformer-based encoder-decoder model defined by the parameter \(\theta\), for sequence generation, and leave studies based on decoder-only models to future work. Next, we will describe the details of all the objectives.
### Existing Objectives for Facet Generation
In this subsection, we introduce the two existing methods used for facet generation: _seq-default_ and _seq-min-perm_.
**Seq-Default**(Li et al., 2019; Li et al., 2019). It considers the default facet sequences in the corpus as training targets and is commonly used in previous work (Li et al., 2019; Li et al., 2019). For a given query \(q_{i}\) and its corresponding related documents \(D_{i}\), we concatenate \(q_{i}\) and each \(d_{ij}\in D_{i}\) using [SEP] and produce \(q_{i}[SEP]d_{i1}[SEP]\cdots[SEP]d_{ik}\) as input \(x_{i}\). Note that we follow the same process of encoding this concatenated sequence with BART for all the studied training methods. We concatenate \(f_{ij}\in F_{i}\) with ',' and the yielded text string \(f_{i1},f_{i2},\cdots,f_{im}[EOS]\) is the target \(y_{i}\) for input \(x_{i}\). This objective can be expressed in the
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & Sequential-Prediction & Permutation-Invariant & Facet-Count-Controllable & Complexity-Per-Training-Epoch \\ \hline Seq-Default & ✓ & \(\times\) & \(\times\) & \(O(n*((|q|+|d|)^{2}+(m*|f|)^{2}+(|q|+|d|)*m*|f|))\) \\ Seq-Min-Perm & ✓ & ✓ & \(\times\) & \(O(n*m!*((|q|+|d|)^{2}+(m*|f|)^{2}+(|q|+|d|)*m*|f|))\) \\ Seq-Avg-Perm & ✓ & ✓ & \(\times\) & \(O(n*m*((|q|+|d|)^{2}+(m*|f|)^{2}+(|q|+|d|)*m*|f|))\) \\ Set-Pred & \(\times\) & ✓ & ✓ & \(O(n*m*((|q|+|d|)^{2}+|f|^{2}+(|q|+|d|)*|f|))\) \\ Seq-Set-Pred & ✓ & ✓ & ✓ & \(O(n*\sum_{i=0}^{m-1}\mathcal{A}_{m}^{i}*(m-i)*((|q|+|d|+i*|f|)^{2}+|f|^{2}+(|q|+|d|+i*|f|)*|f|))\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Comparison of characteristics among different objectives. ✓ means true and \(\times\) means false. \(|q|\) is the average length for all the queries, \(|d|\) is the average length for all the concatenations of documents, \(m\) is the average number of facets for each query and \(|f|\) is the average length for all the facets. \(\mathcal{A}_{m}^{i}\) means selecting \(i\) elements without repetition from \(m\) elements, considering the ordering.
following mathematical forms:
\[\theta=\arg\min_{\theta^{\prime}}\sum_{i=1}^{n}P(v_{i},y_{i}), \tag{1}\]
where \(P(v_{i},y_{i})=\frac{1}{|y_{i}|}\sum_{x=1}^{|y_{i}|}-\log p\,(y_{ix}\mid v_{i},y_{i1},\cdots,y_{i,x-1})\) and \(v_{i}\) is the output of the BART encoder for input \(x_{i}\). This training objective could lead to suboptimal results since the model learns towards only the given facet ordering and ignores other equally valid permutations. This could harm the model performance. Aware of this issue, Hashemi et al. (Hashemi et al., 2017) have proposed a loss function that is permutation-invariant, which we name _seq-min-perm_ and introduce next.
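Before turning to _seq-min-perm_, the following minimal sketch illustrates how the _seq-default_ input/target construction and the loss of Eq. (1) can be realized with an off-the-shelf BART checkpoint. It is a hedged illustration rather than the released code of this paper: the concrete strings, lengths and variable names are ours.

```python
# Illustrative sketch of the seq-default objective (not the authors' code).
# Input: q [SEP] d_1 [SEP] ... [SEP] d_k ; target: the default facet ordering
# "f_1, f_2, ..., f_m", scored with token-level negative log-likelihood.
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

query = "internet explorer"
docs = ["top retrieved document 1 ...", "top retrieved document 2 ..."]
facets = ["windows 10", "windows 7"]               # default (given) ordering

source = tokenizer.sep_token.join([query] + docs)  # [SEP] realized as BART's separator
target = ", ".join(facets)                         # single default facet sequence

batch = tokenizer(source, max_length=512, truncation=True, return_tensors="pt")
# BART uses the same tokenizer for source and target text.
labels = tokenizer(target, max_length=64, truncation=True, return_tensors="pt").input_ids

loss = model(**batch, labels=labels).loss          # token-level NLL of the default sequence
loss.backward()
```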
**Seq-Min-Perm**(Hashemi et al., 2017). It treats the query intents as a set rather than a sequence to eliminate the impact of facet order. It extends the Hungarian loss (K
generate one facet at each step. We concatenate the input \(x_{t-1}\) with the output \(y_{t-1}\) from the (t-1)-th step to form the t-th step's input \(x_{t}=x_{t-1}[SEP]y_{t-1}\). During training, similar to _seq-avg-perm_, to mitigate the influence of facet order, we train the model on the data that covers all the possible permutations. The organization of training data is depicted in Figure 1. This prevents the model from learning specific generation ordering and enables it to make precise predictions under different permutation contexts effectively. We formalize this optimization objective as:
\[\theta=\arg\min_{\theta^{\prime}}\sum_{i=1}^{n}\frac{1}{V_{i}}\sum_{j=0}^{|F_{i}|-1}\sum_{k=1}^{|\mathcal{A}_{j}(F_{i})|}\sum_{h=1}^{|\mathcal{A}_{1}(R_{ijk})|}P(v_{ijk},y_{ijkh}), \tag{5}\]
where \(P(v,y)\) is the negative log likelihood of the generation probability for \(y\) given \(v\). \(\mathcal{A}_{j}(X)\) means all the possible orderings of \(j\) selected non-repetitive elements from \(X\). For example, \(\mathcal{A}_{2}(\{f_{A},f_{B},f_{C}\})=\{(f_{A},f_{B}),(f_{B},f_{A}),(f_{A},f_{C}),(f_{C},f_{A}),(f_{B},f_{C}),(f_{C},f_{B})\}\). \(R_{ijk}\) is the remaining set consisting of facets that are in \(F_{i}\) but not in the k-th sequence \(\mathcal{A}_{j}(F_{i})_{k}\). \(v_{ijk}\) is the output of the BART encoder given the input \(x_{ijk}\), i.e., the concatenation of \(q_{i}\), \(D_{i}\) and the facets in \(\mathcal{A}_{j}(F_{i})_{k}\). \(y_{ijkh}\) is the h-th ground truth facet chosen from \(R_{ijk}\). We get the average loss over all the possible permutations for query \(q_{i}\) by dividing by \(V_{i}=\sum_{j=0}^{|F_{i}|-1}\sum_{k=1}^{|\mathcal{A}_{j}(F_{i})|}\left|\mathcal{A}_{1}(R_{ijk})\right|\).
The time complexity of _seq-set-pred_ is the highest among the five approaches. It also conducts sequential facet prediction so that the previously predicted facets could help it avoid generating similar facets. Same as _set-pred_, it is able to control the count of generated facets. Note that based on decoder-only language generation models such as the GPT series (Beng et al., 2017; Zhang et al., 2018), _seq-set-pred_ will be essentially the same as _seq-avg-perm_ since we do not need to move the generated facets to the encoder.
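To make the combinatorics above concrete, the sketch below enumerates the (context, target) training pairs that Eq. (5) averages over for a single query. It only illustrates the enumeration logic; the separator string and function name are placeholders, not this paper's implementation.

```python
# Enumerate seq-set-pred training pairs for one query (illustrative only):
# every ordered selection of j already-generated facets becomes extra encoder
# context, and each facet remaining in R_ijk is an independent target.
from itertools import permutations

SEP = " [SEP] "

def seq_set_pred_pairs(query, docs, facets):
    base = SEP.join([query] + docs)
    pairs = []
    for j in range(len(facets)):                   # context size j = 0 .. m-1
        for ctx in permutations(facets, j):        # A_j(F_i): ordered, no repetition
            source = base + (SEP + SEP.join(ctx) if ctx else "")
            for target in (f for f in facets if f not in ctx):   # remaining set R_ijk
                pairs.append((source, target))
    return pairs

pairs = seq_set_pred_pairs("internet explorer",
                           ["doc 1 ...", "doc 2 ..."],
                           ["windows 10", "windows 7", "windows 8"])
print(len(pairs))   # 15 = V_i for m = 3 (3 + 6 + 6), the normalizer in Eq. (5)
```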
### Model Inference
For _seq-avg-perm_ inference, we generate facets in the same way as _seq-default_ and _seq-min-perm_. For _set-pred_ and _seq-set-pred_, we generate each facet independently and set the number of facets manually. For _set-pred_, we generate facets in parallel and utilize a search algorithm to select the top few facets with the highest probability as the generated results, while for _seq-set-pred_, we sequentially generate all the facets, appending each generated facet to the current input as the input of the next step, until the count of generated facets reaches the specified number.
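The two facet-count-controllable inference modes can be sketched as follows, assuming a fine-tuned BART `model` and `tokenizer` as above. Beam sizes, lengths and the separator are illustrative choices, not the reported configuration.

```python
# Illustrative inference for the facet-count-controllable objectives.
def set_pred_inference(model, tokenizer, source, k=3):
    # set-pred: facets are generated independently of each other, so we simply
    # keep the k highest-probability beams as the k facets.
    ids = tokenizer(source, truncation=True, max_length=512, return_tensors="pt").input_ids
    outs = model.generate(ids, num_beams=max(4, 2 * k), num_return_sequences=k, max_length=32)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outs]

def seq_set_pred_inference(model, tokenizer, source, k=3, sep=" [SEP] "):
    # seq-set-pred: generate one facet, append it to the input, and repeat
    # until the requested count k is reached.
    facets = []
    for _ in range(k):
        ids = tokenizer(source, truncation=True, max_length=640, return_tensors="pt").input_ids
        out = model.generate(ids, num_beams=4, max_length=32)[0]
        facets.append(tokenizer.decode(out, skip_special_tokens=True))
        source = source + sep + facets[-1]
    return facets
```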
### Stochastic Optimization
Optimizing towards all permutations of query facets is computationally challenging. Therefore, as (K
seq-default, seq-min-perm_, and _seq-avg-perm_ is determined by themselves, while we specifically set the number of generated facets to 3 for _set-pred_ and _seq-set-pred_, as it yields the best results on the validation set. Additionally, practical considerations led us to perform deduplication on all the generated results for each method. It is worth noting that there was only a minor difference between the results before and after deduplication. For _seq-default_, we utilized the checkpoint provided by Samarinas et al. (Samarinas et al., 2019).
For the _seq-avg-perm_ and _seq-min-perm_ techniques, we also fine-tuned the BART-base model with a maximum sequence length of 512 tokens. However, we augmented the maximum output sequence length to 128 tokens to empower the model with the capability to generate multiple facets within a single sequence. During each training epoch, we sampled six permutations for each query and the batch size for _seq-min-perm_ was set to 18, enabling it to handle all permutations of three queries within one batch.
Regarding _seq-set-pred_, we set the maximum input sequence length to 640 tokens. This ensured that the facets appended to the input would not be truncated. We sampled 6 permutations, 8 permutations, 9 permutations, 11 permutations, and 13 permutations when the count of ground-truth facets is 1, 2, 3, 4, 5 respectively to ensure the model to see almost the same number of facets as in the _seq-avg-perm_ approach during each training epoch.
## 5. Results and Discussion
Next, we show the experimental results of the five training objectives. First, we evaluate the accuracy of the generated facets against the ground-truth facets. Then, we measure their diversity with our proposed diversity metrics. We also show the model performance when different counts of facets are generated and compare the model performance using a similar amount of training data. Finally, we conduct case analyses to show the quality of the facets generated by each method.
### Evaluation Against Ground Truth
Table 2 shows the matching degree between the generated facets and ground-truth facets for all the methods. We have the following observations: 1) Most permutation-invariant methods except _seq-min-perm_ perform better than the order-sensitive methods. This is consistent with our presumption that _seq-default_ was hurt when forced to generate a specific facet ordering that is only one of the possible combinations. However, _seq-min-perm_ is the exception and performs the worst most of the time. There are some potential reasons for its unsatisfactory performance. If the model consistently selects the same sequence for a query throughout the entire training phase, it is expected to observe a similar performance to that of _seq-default_. However, in the early stages of training, the selection of the sequence with the minimum loss exhibits randomness. This random selection introduces noise to the training process, resulting in a decrease in performance. So the effectiveness of generated facets is even worse than _seq-default_. 2) The method that generates facets without depending on the previously generated facets (i.e., _set-pred_) has compelling performance in terms of both term-based and semantic matching metrics. Previous studies do not consider this way of training so this has not been observed. Its diversity, however, is lower than the others, which we will show in Section 5.2. 3) Methods that only learn facet prediction given context(e.g., _seq-set-pred_) have better semantic matching performance with ground truth compared to those that also learn when to stop generating facets.
### Evaluation on Diversity
Table 3 shows the results of diversity evaluation on the generated facets with the query words removed. It indicates that _seq-avg-perm_ performs the best on both metrics, suggesting there are not only fewer repeated terms between the generated facets but also a higher level of semantic differentiation. It performs worse than the ground-truth facets regarding term diversity but better in terms of semantic diversity. By checking some of their generated facets, we find that _seq-avg-perm_ usually generates terms such as prepositions to maintain correct grammatical structures while such connection words are fewer in the ground truth and this phenomenon also exists in
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Term Overlap} & \multicolumn{3}{c}{Exact Match} & \multicolumn{3}{c}{Set BLEU Score} & \multicolumn{3}{c}{Set BERT-Score} \\ \cline{2-13} & P & R & F1 & P & R & F1 & 1-gram & 2-gram & 3-gram & 4-gram & P & R & F1 \\ \hline Seq-Default & 0.2976\({}^{*}\) & 0.2769\({}^{*}\) & 0.2752\({}^{+}\) & 0.0718\({}^{*}\) & 0.0561\({}^{-}\) & 0.0611\({}^{-}\) & 0.2335\({}^{+}\) & 0.1040\({}^{*-}\) & 0.0444\({}^{*-}\) & 0.0175\({}^{+}\) & 0.6391\({}^{-}\) & 0.6455\({}^{-}\) & 0.6419\({}^{-}\) \\ Seq-Min-Perm & 0.2761\({}^{-}\) & 0.2536\({}^{*}\) & 0.2537\({}^{-}\) & 0.0620\({}^{*}\) & 0.0470\({}^{-}\) & 0.0519\({}^{-}\) & 0.2102\({}^{-}\) & 0.0850\({}^{-}\) & 0.0346\({}^{-}\) & 0.0125\({}^{-}\) & 0.6442\({}^{-}\) & 0.6482\({}^{-}\) & 0.6457\({}^{+}\) \\ \hline Seq-Avg-Perm & 0.2977\({}^{*}\) & **0.3263\({}^{*}\)** & **0.3005\({}^{*}\)** & **0.1040\({}^{*}\)** & 0.0960\({}^{*}\) & **0.0977\({}^{+}\)** & 0.2422\({}^{+}\) & 0.1081\({}^{+}\) & 0.0568\({}^{+}\) & 0.0288\({}^{+}\) & 0.6665\({}^{+}\) & 0.6697\({}^{+}\) & 0.6676\({}^{+}\) \\ Set-Pred & **0.3029\({}^{*}\)** & 0.2978\({}^{*}\) & 0.2897\({}^{*}\) & 0.0988\({}^{*}\) & 0.0973\({}^{*}\) & 0.0953\({}^{*}\) & 0.2567\({}^{+}\) & 0.1198\({}^{+}\) & 0.0606\({}^{+}\) & 0.0260\({}^{+}\) & **0.6873\({}^{*}\)** & **0.6897\({}^{*}\)** & **0.6880\({}^{*}\)** \\ Seq-Set-Pred & 0.2930\({}^{*}\) & 0.2989\({}^{*}\) & 0.2863\({}^{*}\) & 0.0993\({}^{*}\) & **0.1009\({}^{*}\)** & 0.0973\({}^{*}\) & **0.2577\({}^{*}\)** & **0.1228\({}^{*}\)** & **0.0676\({}^{*}\)** & **0.0308\({}^{*}\)** & 0.6849\({}^{+}\) & 0.6887\({}^{+}\) & 0.6863\({}^{+}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Evaluation of different methods based on matching scores. The superscript + denotes significant improvements compared to the worst one which is underscored and – means significant decreases compared to the best one which is in bold in terms of a two-tailed paired t-test with Bonferroni correction with 99% confidence.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Term Diversity Ratio} & \multicolumn{2}{c}{BERT-Score Diversity} \\ & Ratio & P & R & F1 \\ \hline Ground Truth & 0.9284 & 0.0829 & 0.0829 & 0.0836 \\ \hline Seq-Default & 0.8783 & 0.0743 & 0.0743 & 0.0749 \\ Seq-Min-Perm & 0.8922 & 0.0883 & 0.0889 & 0.0893 \\ Seq-Avg-Perm & **0.9218** & **0.0993** & **0.0989** & **0.1000** \\ Set-Pred & 0.8883 & 0.0630 & 0.0630 & 0.0635 \\ Seq-Set-Pred & 0.9117 & 0.0657 & 0.0666 & 0.0667 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Diversity evaluation on the facet body which removes words from the facets that appear in the query.
other methods. For example, given the query "internet explorer", _seq-avg-perm_ generates the facets "for windows 10" and "for windows 7" while the corresponding ground truth facets are actually "windows 10" and "windows 7" respectively (shown in Table 6). This leads to a decrease in term diversity for _seq-avg-perm_. The higher semantic diversity scores of _seq-avg-perm_ compared to the ground truth indicate that its generated facets are more semantically different. It is worth noting that all the sequential prediction methods, except for _seq-default_, exhibit higher diversity than _set-pred_. It is not surprising to observe worse diversity in _set-pred_ because it could keep generating similar facets since it only refers to the original query and top documents during generation. The possible reason why _seq-default_ performs poorly regarding term diversity is that it is trained towards only one possible permutation of the facets, which could constrain the output term space and in turn result in lower term diversity. We observe good term diversity but worse semantic diversity in _seq-set-pred_. Multiple target ground-truth facets could be helpful for it to obtain higher term diversity. However, when we move the previously generated facets to the encoder and predict the rest with the decoder, the encoder struggles to learn that the target facet should be similar to the original query while different from the context facets. In contrast, for the sequential prediction methods (i.e., _seq-default_, _seq-min-perm_, _seq-avg-perm_), the encoder only encodes the query and captures its relevance to the target facet, and it is feasible for the decoder to capture the difference between the target facet and the previously generated facets. This could be the reason for the much lower semantic diversity of _seq-set-pred_ compared to the sequential prediction counterparts.
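The precise metric definitions are given in the evaluation section of the paper and are not excerpted here; as a rough aid to reading Table 3, the sketch below shows one plausible reading of the term diversity ratio, namely the fraction of unique terms among all terms of the facet bodies (facets with query words removed, as in the Table 3 caption). Treat it as an assumption, not the paper's exact formula.

```python
# Hypothetical reading of the term diversity ratio (assumed, not the paper's
# exact definition): unique-term fraction over the facet bodies of one query.
def term_diversity_ratio(query, facets):
    query_terms = set(query.lower().split())
    body_terms = [t for f in facets for t in f.lower().split() if t not in query_terms]
    return len(set(body_terms)) / len(body_terms) if body_terms else 0.0

# Connector words such as "for" repeat across facets and lower the ratio,
# which matches the seq-avg-perm discussion above.
print(term_diversity_ratio("internet explorer", ["for windows 10", "for windows 7"]))
```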
### Performance w.r.t. Facet Counts
Table 4 shows the performance when generating different numbers of facets using the facet-count-controllable methods _set-pred_ and _seq-set-pred_. We also compute the ratio of the generated facet counts matching the ground truth for each method and show the ratios in Table 5. Table 4 shows that the evaluation scores constantly change with different facet counts. The best matching scores are mainly achieved when generating 2 or 3 facets because the average number of ground truth facets is between 2 and 3 (shown in Table 5). The ratio is the average of one minus \(\frac{|f_{i}-g_{i}|}{g_{i}}\) across all the queries, where \(f_{i}\) and \(g_{i}\) are the counts of generated and ground-truth facets for \(q_{i}\). The ratio matching with the ground-truth facet counts is 0.7039 and 0.7431 when generating 3 and 2 facets respectively. Although _seq-default_ and _seq-min-perm_ have generated more facets whose counts match the ground truth than _seq-avg-perm_, they have worse term- or semantic-level matching scores with the ground-truth facets. It indicates that they do generate worse facet contents. _Set-pred_ and _seq-set-pred_ have similar accuracy in terms of facet counts to _seq-default_ and _seq-min-perm_ but have better facet contents as well. Compared to _seq-avg-perm_, the facet-count-controllable methods perform better on set BLEU score and set BERT-Score. The possible reason is that these two metrics are more sensitive to the number of generated facets. When calculating these two metrics, facets that do not match the ground-truth facets in count will receive a score of 0, and the mismatch between the count of generated facets and the count of ground-truth facets results in worse matching scores. However, this phenomenon does not exist when calculating BERT-Score diversity because each facet has its comparable facets.
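The facet-count matching ratio just described can be computed as below; the function and variable names are ours.

```python
# Average of 1 - |f_i - g_i| / g_i over queries, where f_i and g_i are the
# generated and ground-truth facet counts (illustrative helper).
def count_match_ratio(generated_counts, gold_counts):
    return sum(1 - abs(f - g) / g
               for f, g in zip(generated_counts, gold_counts)) / len(gold_counts)

# Always generating 3 facets against ground-truth counts of 2, 3 and 5:
print(count_match_ratio([3, 3, 3], [2, 3, 5]))   # 0.70
```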
When it comes to diversity, the term diversity and BERT-score diversity of _seq-set-pred_ decrease and increase, respectively, with the increase in the number of generated facets. The results of _set-pred_ indicate that this approach performs well when generating a smaller number of facets. However, when generating more facets, it may produce more repeated tokens. In conclusion, _seq-set-pred_ demonstrates better diversity than _set-pred_ across all the numbers of generated facets.
Simultaneously, we visualize the count of facets generated by the adaptive generation methods in Figure 2. It demonstrates that _seq-default_ and _seq-min-perm_ tend to generate more results with two facets while _seq-avg-perm_ generates various numbers of facets.
\begin{table}
\begin{tabular}{c c c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Term Overlap} & Set BERT-Score & Term Div & BERT-Score Div \\ & Num & F1 & F1 & Ratio & F1 \\ \hline \multirow{4}{*}{Set-Pred} & 1 & 0.2696 & 0.3254 & - & - \\ & 2 & 0.2885 & 0.6477 & 0.8812 & 0.0613 \\ & 3 & **0.2897** & **0.680** & 0.8838 & 0.0635 \\ & 4 & 0.2867 & 0.6231 & **0.8903** & 0.0658 \\ & 5 & 0.2814 & 0.5450 & 0.8869 & **0.0683** \\ \hline \multirow{4}{*}{Seq-Set-Pred} & 1 & 0.2709 & 0.3253 & - & \\ & 2 & **0.2886** & 0.6474 & **0.9133** & 0.0649 \\ \cline{1-1} & 3 & 0.2863 & **0.6863** & 0.9117 & 0.0667 \\ \cline{1-1} & 4 & 0.2759 & 0.6259 & 0.9095 & 0.0687 \\ \cline{1-1} & 5 & 0.2677 & 0.5660 & 0.9083 & **0.0705** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Matching scores and diversity on a different number of facets. Div means diversity for short.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Seq-default & Seq-Min-Seq & Seq-Avg-Perm & Three Facets & Two Facets \\ \hline Ratio & 0.7038 & 0.7072 & 0.6678 & 0.7039 & **0.7431** \\ \hline \hline \end{tabular}
\end{table}
Table 5. The proportion of generated facets that match the ground truth facets.
Figure 2. Distribution of the number of facets generated by different models.
### Impact of Training Data Amount
Due to the significant differences in the quantity of information utilized per training epoch for each method, we compare each method when the amount of training data is at a similar scale. Specifically, we only train the methods that learn towards all the facet permutations (_seq-avg-perm_ and _seq-set-pred_) for 1 epoch, so that the overall training data amount is similar to the other methods. As shown in Table 7, their results have different extents of regressions since it takes longer for the model training to converge under the permutation-invariant context. However, both _seq-avg-perm_ and _seq-set-pred_ still outperform _seq-default_ and _seq-min-perm_ in terms of all the metrics.
### Facet Generation with ChatGPT
Recently, Large Language Models (LLMs), such as ChatGPT, have demonstrated remarkable capabilities across various tasks. So, one may wonder how such LLMs perform on the facet generation task. In this regard, we assess the ability of ChatGPT in facet generation and compare it to the methods investigated in this paper. We randomly sampled 50 test examples from the MIMICS-Manual dataset and asked ChatGPT to generate facets using the instruction as in Figure 3. The results are displayed in Table 8. It shows that ChatGPT has a large performance gap compared to most of the methods presented in our paper across all metrics. However, despite ChatGPT's poor performance on the metrics, we still believe that its generated results are reasonable. For example, for the query "Express vp", the facets generated by ChatGPT are "virtual private network", "privacy and security", "content access", "server network", and "pricing" while the ground-truth facets are "expressvpn mac", "expressvpn android", "expressvpn windows", "expressvpn linux", and "expressvpn
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Query} \\ & internet explorer & nike boys shoes & abiotic factors \\ \hline \multirow{2}{*}{Ground Truth} & windows 10, windows 7, & basketball shoes, running shoes, & grasslands, savanna, \\ & windows 8, windows xp & tennis shoes, soccer shoes, golf shoes & tundra \\ \hline \multirow{2}{*}{Seq-default} & internet explorer windows 7, & nike boys tennis shoes, & tropical rainforest, temperate forest, \\ & internet explorer windows 10 & nike boys running shoes & tundra, desert, savanna \\ \hline \multirow{2}{*}{Seq-Min-Perm} & internet explorer 32 bit, & nike boys shoes for boys, & in animals, \\ & internet explorer 64 bit & for men, for kids & for kids \\ \hline \multirow{2}{*}{Seq-Avg-Perm} & internet explorer for windows 10, windows xp, & running shoes, basketball shoes, & taiga, savanna. \\ & internet explorer windows 8, for windows 7 & golf shoes, tennis shoes, soccer shoes & tundra, desert \\ \hline \multirow{4}{*}{Set-Pred} & for windows vista, & nike boys basketball shoes, & of tundra, \\ & for windows 10, & nike boys golf shoes, & of the rainforest, \\ & for windows 8, & nike boys tennis shoes, & tropical rainforest, \\ & for windows 7, & nike boys running shoes, & tundra, \\ & for windows xp & basketball shoes & tundra habitat \\ \hline \multirow{4}{*}{Seq-Set-Pred} & internet explorer windows 7, & nike boys basketball shoes, & in desert, \\ & internet explorer windows xp, & nike boys running shoes, & in rainforest, \\ & internet explorer windows 10, & nike boys baseball shoes, & in the tundra, \\ & internet explorer windows server 2012, & nike boys football shoes, & in the savanna \\ \hline \hline \end{tabular}
\end{table}
Table 6. Some examples of the facets generated by each model. We selected the top-5 facets for Set-Pred and Seq-Set-Pred. The duplicate facets were removed from all the models. Facets are separated using the ’; symbol.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & Term Overlap & Exact Match & Set BERT-Score \\ & F1 & F1 & F1 \\ \hline Seq-Default & 0.2752\({}^{+-}\) & 0.0611\({}^{+-}\) & 0.6419\({}^{-}\) \\ Seq-Min-Perm & 0.2537\({}^{+-}\) & 0.0519\({}^{-}\) & 0.6457\({}^{+-}\) \\ \hline Seq-Avg-Perm & **0.2898\({}^{+}\)** & 0.0737\({}^{+-}\) & 0.6562\({}^{+-}\) \\ Set-Pred & 0.2897\({}^{+}\) & **0.0953\({}^{+}\)** & **0.6880\({}^{+}\)** \\ Seq-Set-Pred & 0.2777\({}^{+-}\) & 0.0816\({}^{+-}\) & 0.6791\({}^{+-}\) \\ \hline \hline \end{tabular}
\end{table}
Table 7. Matching scores of different methods based on similar training data amount. The superscript + denotes significant improvements compared to the worst one which is underscored and \(-\) means significant decreases compared to the best one which is bold in terms of a two-tailed paired t-test with Bonferroni correction with 99% confidence.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Model} & Term Overlap & Exact Match & Set BERT-Score \\ & F1 & F1 & F1 \\ \hline Seq-Default & 0.2566 & 0.0374 & 0.6070 \\ Seq-Min-Perm & 0.2627 & 0.0044 & 0.6293 \\ Seq-Avg-Perm & **0.3276** & 0.0760 & 0.6654 \\ Set-Pred & 0.3187 & **0.1225** & **0.7039** \\ Seq-Set-Pred & 0.2998 & 0.1003 & 0.5811 \\ \hline ChatGPT & 0.1598 & 0.0315 & 0.5711 \\ \hline \hline \end{tabular}
\end{table}
Table 8. Comparison between ChatGPT with our studied methods on 50 random test samples in MIMICS-Manual. The worst performance is underscored and the best one is in bold.
ios". This finding is consistent with the results in (Krishnan et al., 2020). In sum, the facets generated by ChatGPT are general concepts related to the query but do not match the facets manually labeled according to the provided document snippets.
### Case Study
We demonstrate the generated results for three queries with varying numbers of ground-truth facets in Table 6. We can observe that _seq-default_ and _seq-min-perm_ tend to generate two facets for a given query, which aligns with the pattern shown in Figure 2. Many facets generated by _seq-default_ match the terms in the ground truth, resulting in higher precision in term overlap, but due to the limited number of generated facets, the recall metric is not very high. For _seq-min-perm_, the generated facets are somewhat related to the query, but the model could not produce the desired facets, leading to lower scores. These findings are consistent with the results in Table 2. The remaining three methods, _set-pred_, _seq-avg-perm_, and _seq-set-pred_, demonstrate good generation performance. In the facet-count-controllable methods, we select the top three facets as the generated results. However, we find that some of the discarded facets may be better. Therefore, we could generate more facets and choose the best ones based on the similarity between the facets and the query, instead of directly selecting the first three. We will investigate this in the future.
## 6. Conclusions and Future Work
In this paper, we conducted a systematic comparative study of various types of training objectives, with different properties: whether the objective is permutation-invariant, whether it conducts sequential prediction, and whether it can control the count of output facets. For comprehensive comparisons, besides the commonly used evaluation that measures the matching with ground-truth facets, we also introduce two diversity metrics. Our experimental analyses demonstrate: the appropriate permutation-invariant objectives can help generate better facets; facet prediction that is only based on the query and top-retrieved documents (i.e., _Set-Pred_) achieves compelling performance in terms of the metrics measuring matching with the ground truth but has the worst diversity performance; methods that only learn facet prediction given context (i.e., _Seq-Set-Pred_) have better semantic matching metrics with the ground truth but worse diversity performance than their counterpart that also learns when to stop generating facets. Our newly proposed methods outperform the previous state-of-the-art models (Krishnan et al., 2020).
For future work, we plan to evaluate these objectives on a decoder-only architecture such as GPT-2 (Krishnan et al., 2020). As we mentioned in Section 3.3, _seq-avg-perm_ and _seq-set-pred_ will be essentially the same based on decoder-only models. Another interesting research direction is to utilize a small amount of data to learn how to predict user intents. Although MIMICS has enough annotations for fine-tuning, in reality it is common to have limited annotated data. Thus, we plan to study intent prediction with few-shot learning.
###### Acknowledgements.
This work was funded by the National Natural Science Foundation of China (NSFC) under Grants No. 62302486, the Lenovo-CAS Joint Lab Youth Scientist Project, and the project under Grants No. JCKY2022130C039. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
|
2304.14831 | Earning Extra Performance from Restrictive Feedbacks | Many machine learning applications encounter a situation where model
providers are required to further refine the previously trained model so as to
gratify the specific need of local users. This problem is reduced to the
standard model tuning paradigm if the target data is permissibly fed to the
model. However, it is rather difficult in a wide range of practical cases where
target data is not shared with model providers but commonly some evaluations
about the model are accessible. In this paper, we formally set up a challenge
named \emph{Earning eXtra PerformancE from restriCTive feEDdbacks} (EXPECTED)
to describe this form of model tuning problems. Concretely, EXPECTED admits a
model provider to access the operational performance of the candidate model
multiple times via feedback from a local user (or a group of users). The goal
of the model provider is to eventually deliver a satisfactory model to the
local user(s) by utilizing the feedbacks. Unlike existing model tuning methods
where the target data is always ready for calculating model gradients, the
model providers in EXPECTED only see some feedbacks which could be as simple as
scalars, such as inference accuracy or usage rate. To enable tuning in this
restrictive circumstance, we propose to characterize the geometry of the model
performance with regard to model parameters through exploring the parameters'
distribution. In particular, for the deep models whose parameters distribute
across multiple layers, a more query-efficient algorithm is further
tailor-designed that conducts layerwise tuning with more attention to those
layers which pay off better. Extensive experiments on different applications
demonstrate that our work forges a sound solution to the EXPECTED problem. Code
is available via https://github.com/kylejingli/EXPECTED. | Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Yulei Sui, Ivor W. Tsang | 2023-04-28T13:16:54Z | http://arxiv.org/abs/2304.14831v2 | # Earning Extra Performance from Restrictive Feedbacks
###### Abstract
Many machine learning applications encounter a situation where model providers are required to further refine the previously trained model so as to gratify the specific need of local users. This problem is reduced to the standard model tuning paradigm if the target data is permissibly fed to the model. However, it is rather difficult in a wide range of practical cases where target data is not shared with model providers but commonly some evaluations about the model are accessible. In this paper, we formally set up a challenge named _Earning eXtra PerformancE from restriCTive feEDdbacks_ (EXPECTED) to describe this form of model tuning problems. Concretely, EXPECTED admits a model provider to access the operational performance of the candidate model multiple times via feedback from a local user (or a group of users). The goal of the model provider is to eventually deliver a satisfactory model to the local user(s) by utilizing the feedbacks. Unlike existing model tuning methods where the target data is always ready for calculating model gradients, the model providers in EXPECTED only see some feedbacks which could be as simple as scalars, such as inference accuracy or usage rate. To enable tuning in this restrictive circumstance, we propose to characterize the geometry of the model performance with regard to model parameters through exploring the parameters' distribution. In particular, for the deep models whose parameters distribute across multiple layers, a more query-efficient algorithm is further tailor-designed that conducts layerwise tuning with more attention to those layers which pay off better. Our theoretical analyses justify the proposed algorithms from the aspects of both efficacy and efficiency. Extensive experiments on different applications demonstrate that our work forges a sound solution to the EXPECTED problem, which establishes the foundation for future studies towards this direction.
Model tuning, restrictive feedbacks, gradient estimation, query-efficiency, layerwise update.
## 1 Introduction
Plainly using a provided model usually cannot fulfill the requirement of downstream tasks, probably on account of the distribution shift on data [1, 2], or altered evaluation metrics [3]. For example, after the deployment of a language model on user devices, model updates are often needed to enable a stronger performance on personalized data [1]. These necessary model updates, known as model tuning in the machine learning community, compensate for the potential discrepancy between upstream and downstream tasks. With accessible target data to the deployed model, the end-to-end back-propagation based model tuning has been demonstrated as a powerful technique in a wide range of fields, such as computer vision [4, 5], natural language [6, 7], and medicals [8]. However, there remain many model tuning applications that cannot be dealt with in this manner.
_Example._ (Model tuning with hidden target data) Alice possesses a model developed on a collection of data (public or private), and she would only send out an application instead of the source model (e.g., to protect intellectual property). Bob is a user who owns local data which might be different from Alice's. He is also unwilling to share his data (e.g., due to personal concerns) but wishes Alice could update the source model so that the corresponding application eventually achieves a pleasant performance for him. This requirement is rational because users who experience unsatisfactory performance will become discouraged and more likely to quit using the application [9]. Typically, Bob returns some feedback about how the candidate model performs, assisting Alice in making meaningful product updates. Such interactions are normally included in mutually agreed protocols in the real world [10]. Please refer to the Appendix for more real scenarios such as tuning personalized foundation models [11] and learning without a third-party platform [12].
Beyond privacy motivation, there are other cases for which the standard model tuning is not applicable either. For example, in a user-centric system, the perceptual evaluation that reflects personal and situational characteristics is invaluable to boost the system [13]. These subjective evaluations (e.g., user rating) show the desire for model efficacy and cannot be naturally treated as input data for the tuning purpose. Despite the specific scenarios, we summarize the common trait of all the involved applications. That is, although the model provider cannot access target data for
standard model tuning, the model can hopefully still be improved by merely utilizing some downstream feedback information.
In this study, we abstract this form of learning problem as _Earning eXtra PerformancE from restriCTive feEDdbacks_, dubbed EXPECTED. Each feedback in EXPECTED is an evaluation result of a legal candidate model; thus our tuning objective is to achieve a satisfactory evaluation result on the target task. In this sense, "earning extra performance" means the improvement of evaluation performance over the initially provided model after multiple queries. "Restrictive" has a twofold meaning here. On one hand, the model evaluation result should be uncomplicated, because a common evaluation metric like inference accuracy is typically a single score and a subjective user evaluation is probably a star rating. On the other hand, the number of evaluations is supposed to be limited due to practical requirements on communication cost or efficiency. Fig. 1(a) depicts the EXPECTED problem, where the given model \(\theta_{0}\) is to be tuned so as to achieve a performance as high as possible with \(Q\) queries. To make it more understandable, we use "unobserved evaluation" to cover the different cases of the aforementioned applications.
EXPECTED challenges the existing research and poses a new and difficult model tuning task. Without explicit target data, which has been treated as an indispensable ingredient for learning, model tuning cannot be executed through the standard back-propagation implemented in different software repositories. In addition, although one can design some heuristic strategies to guess what a better model for the target task looks like from feedbacks, conducting a gradient-based optimization for model tuning remains troublesome, especially for modern Deep Neural Networks (DNNs), whose parameters lie in a high-dimensional space and which often have complex structures.
Intuitively, the past evaluated models provide valuable experience for crafting the later ones. For instance, if the feedback indicates that a model behaves poorly, then the current query model should explore in a different direction. This insight suggests that solving the EXPECTED problem involves not only extracting value from the feedback information but also designing approaches to guide the generation of candidate models. Having this principle in mind, we propose to refine the provided model by characterizing the geometry of the evaluation performance with regard to the model parameters. Based on the natural evolution strategy (NES) [14], we develop efficient gradient-based model tuning algorithms for both lightweight models and complex DNNs. Our theoretical analyses and experiments justify the utility of the proposed algorithms.
The main contributions of this work are summarized as follows.
* Motivated by many real applications which demand further updates to the previously trained model from restrictive model evaluations, we introduce the setting of Earning eXtra PerformancE from restriCTive feEDdbacks (EXPECTED). EXPECTED is not a conventional data-driven optimization problem and thus supplements the existing model tuning regime. We also clarify the difference between EXPECTED and other model tuning paradigms to highlight its novelty.
* We realize that the EXPECTED problem could be effectively remedied if historical model evaluations are elaborately designed to provide valuable clues. Based on this understanding, we propose the Performance-guided Parameter Search (PPS) algorithm, which resorts to optimizing the distribution of model parameters via gradient estimation. In terms of tuning DNNs, the Layerwise Coordinate Parameter Search (LCPS) algorithm is further brought forward to significantly reduce the query number. Plus, we theoretically justify the soundness of our algorithms.
* The experiments on different modality data, including tabular data, text, and images, demonstrate the efficacy of the proposed algorithms for both data distribution shifts and altered evaluation metrics. We also verify that LCPS adaptively prioritizes more useful layers to update, which saves query cost. In particular, we find that our method, which only uses restrictive feedbacks, even rivals the unsupervised tuning works [2, 15] that access the entire features of target data on the corrupted image classification task, showing great potential in applications where features are not shared out.
## 2 Expected Compared with Other Learning Settings
Adapting a pre-trained model to the related target tasks motivates the model tuning setting. In this work, we focus
Fig. 1: Overview of EXPECTED. (a) Given a deployed model parameterized by \(\theta_{0}\), EXPECTED aims to adapt it to the target task with limited query-feedbacks (budget \(Q\)) through the unobserved evaluation. (b) We instantiate the unobserved evaluation as the inaccessibility of target data. In this case, EXPECTED is compared with the other three model tuning settings from the aspects of (1) how much information about target data \(\mathcal{D}\) is accessible and (2) how the gradient information \(\nabla_{\theta}\) is attained. The grey filling indicates the object is unobserved to the tuning executor. In terms of federated learning, although local data \(\mathcal{X},\mathcal{Y}\) is inaccessible to the global model, the gradient \(\nabla_{\theta}\) is directly returned. Note that \(E_{i}\) is informally short for \(E(\mathcal{D};(\theta_{0}+\delta_{i}))\).
on discriminative models. For a better statement, we assume the inaccessible target data only differs from the source data by an (input) distribution shift (we leave the case of altered metrics to Section 5.3), and thus the network architecture does not need modifying. Fig. 1 (b) compares EXPECTED with the three most related model tuning settings from the following two aspects.
**1) Model tuning setting evolves with more restrictive target information accessible.** If sufficient target data including features and labels are accessible during tuning, this is literally the supervised learning paradigm, i.e., fine-tuning [4]. Once the label information is absent, it comes to the unsupervised tuning, which is also known as the test-time training [2, 15] or source-free unsupervised domain adaptation [16, 17, 18]. Federated learning [19] lets the decentralized global model fit local data without sharing them out. While studied in a one-to-many context, it can be viewed as a model tuning process from global to local1. However, one can see that federated learning preserves local data by bringing model training to the device, which is in fact not applicable to the scenarios where intellectual property is also concerned as referred in the previous example. Uniquely, the proposed EXPECTED neither accesses features nor labels of target data, and it only allows limited two-way communications, i.e., querying with model candidates and receiving model performances as feedbacks.
Footnote 1: The involved global-local interaction strictly becomes a model tuning process when a collaborator has data changes and aims to acquire a fine-tuned model from the global model, referred by the recent work [20].
**2) More restrictive target information implies a harder gradient-based optimization.** Through the above statement, we notice that both supervised tuning and federated learning actually have sufficient gradient information because their gradients are empirically derived from the labelled target data. In terms of unsupervised tuning where only \(\mathcal{X}\) is observed, the model gradient is computed from a self-supervision formulation [15] or in a self-training fashion [21]. Therefore, their gradients might be biased (see the CIFAR-10-C experimental results in Section 5.2). All these three settings can thus be categorized as data-driven model tuning, as their gradients can be sample-wise decomposed. By contrast, things are indirect for EXPECTED, because the only ingredients it has for computing gradients are the query models and their feedbacks. By understanding every feedback \(E_{i}\) as a summary statistic of the target data in terms of the \(i\)-th query model, EXPECTED is consequently interpreted as a model-driven tuning problem. In particular, we expect that our proposed algorithms achieve comparable performance to the data-driven model tuning methods, even when the query budget is not very generous.
## 3 Preliminaries
We first summarize the common mathematical notations used in this paper in Table I.
**Problem Formulation.** Let \(F_{\theta_{0}}\) denote the initially provided (pre-trained) model which is parameterized with \(\theta_{0}\), \(\mathcal{D}\) denote the inaccessible target data that the model aims to adapt to, and \(Q\) denote the query budget, i.e., the tolerable number of model evaluations. For a probing candidate model \(F_{\theta_{i}}(1\leq i\leq Q)\), the evaluation function \(E(\cdot)\) measures its performance over the target data \(\mathcal{D}\) and returns a score \(s_{i}\) as feedback (when multiple evaluation metrics are used, a tuple might be returned of which \(s_{i}\) could be an element; one can refer to Section 5.3.1 for a case study about this scenario). That is, \(s_{i}=E(\mathcal{D};F_{\theta_{i}})\). Supposing a larger score is preferred, e.g., accuracy, EXPECTED aims to solve the following problem,
\[\theta_{*}=\arg\max_{\theta}E(\mathcal{D};F_{\theta}),\ \ s.t.\ \#queries\leq Q. \tag{1}\]
Please also see Fig. 1(a) for this example. Note that we will use an alternative form \(E(\mathcal{D};\theta)\) or \(E(\theta)\) to replace \(E(\mathcal{D};F_{\theta})\) for the convenient expression in the rest of the paper when it does not cause any ambiguity.
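As a concrete picture of this protocol, the sketch below wraps the unobserved evaluation into a feedback oracle held by the local user: the provider only submits candidate parameters and reads back the scalar score \(s_{i}\). The class name and the accuracy metric are illustrative assumptions; they are not part of the paper's released code.

```python
# Illustrative query-feedback oracle for EXPECTED (not the authors' code).
import numpy as np

class TargetEvaluator:
    """Held by the local user; the provider never sees X or y, only scores."""
    def __init__(self, X, y, predict_fn, budget):
        self._X, self._y = X, y
        self._predict_fn = predict_fn     # predict_fn(theta, X) -> predicted labels
        self.budget = budget              # query budget Q

    def feedback(self, theta):
        assert self.budget > 0, "query budget Q exhausted"
        self.budget -= 1
        # Scalar feedback s_i, here classification accuracy on the hidden data.
        return float(np.mean(self._predict_fn(theta, self._X) == self._y))
```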
**A Naive Approach - Random Search.** With query chance budgeted by \(Q\), one can randomly perturb the deployed model and ask for its evaluation on the target data. Afterwards, the model that achieves the best performance is selected as the optimal approximation of \(\theta_{*}\). That is,
\[\theta_{*}\approx\theta_{0}+\arg\max_{\delta_{i}}\{E(\mathcal{D};\theta_{0}+ \delta_{i})\}_{i=1}^{Q}, \tag{2}\]
where \(\delta_{i}\) represents the difference between the initially provided model \(F_{\theta_{0}}\) and the tuned model \(F_{\theta_{i}}\). If each \(\delta_{i}\) is derived independently, solving Eq. (2) becomes a Random Search [22] game, which hardly meets the aforementioned restrictive conditions because the parameter space is too large to be searched in this way (see Section 5.2).
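As a point of reference, the random-search baseline of Eq. (2) can be sketched in a few lines, assuming the budgeted `query` function above; the perturbation scale `sigma` is an illustrative hyperparameter rather than a recommended setting.

```python
import numpy as np

def random_search(theta0, query, Q, sigma=0.01, seed=0):
    """Eq. (2): evaluate Q independent random perturbations of theta0 and keep the best."""
    rng = np.random.default_rng(seed)
    best_delta, best_score = np.zeros_like(theta0), -np.inf
    for _ in range(Q):
        delta = sigma * rng.standard_normal(theta0.shape)   # independent perturbation delta_i
        score = query(theta0 + delta)                       # one feedback per candidate model
        if score > best_score:
            best_delta, best_score = delta, score
    return theta0 + best_delta, best_score
```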
_Notice._ Transferring other emerging hyperparameter searching techniques [23] or advanced evolution algorithms [24] to the model tuning scenario is beyond our scope. This study will focus on how to effectively solve the EXPECTED problem based on gradient estimation, especially when modern DNNs are to be tuned.
## 4 Tuning from Restrictive Feedbacks
This section starts with a general case in which we aim to optimize \(\theta\) by taking advantage of the feedback information. Taking model structure into consideration, the second part focuses on tuning DNNs under EXPECTED. We leave the computation cost analyses of our methods to the Appendix.
\begin{table}
\begin{tabular}{l|l} \hline \hline Notation & Explanation \\ \hline \(\mathcal{D}=\{\mathcal{X},\mathcal{Y}\}\) & target data with features \(\mathcal{X}\) and labels \(\mathcal{Y}\) \\ \(Q\) & query budget \\ \(H\) & layer number of tuned parameters \\ \(\theta_{i}\) & model parameter for query index \(i\) \\ \(\theta^{\ddagger}\) & model parameter updated after \(t\)-th iteration \\ \(F_{\theta}(\cdot)\) & model \(F\) parameterized with \(\theta\) \\ \(E(\cdot)\) & evaluation metric \\ \(G(\cdot)\) & performance gain by tuning \\ \(\pi(\cdot)\) & distribution of model parameters \\ \(l_{h}\in\theta\) & parameters of the \(h\)-th layer to be sampled \\ \(p_{h}\) & probability of the \(h\)-th layer to be sampled \\ \(u\) & the number of queries for a unit update \\ \(b\) & batch size of samplings \\ \(\sigma\) & standard variance \\ \(\epsilon\sim\mathcal{N}(0,I)\) & standard Gaussian noise \\ \(|\cdot|\) & dimension of a vector \\ \(||\cdot||\) & \(\ell_{2}\)-norm of a vector \\ \hline \hline \end{tabular}
\end{table} TABLE I: Common mathematical notations.
### _Gradient-based Optimization from Query-feedbacks_
The constraint on the query number in objective (1) can simply be eliminated by applying a stopping criterion on the performance gain. For convenience, we treat it as an unconstrained problem and still run the full \(Q\) queries.
#### 4.1.1 Learning the Distribution of Model Parameters
If the evaluation function \(E(\cdot)\) of EXPECTED denotes the classification accuracy, its specific form is then written as
\[E(\mathcal{D};\theta)=1-\frac{1}{|\mathcal{D}|}\sum_{(x_{i},y_{i})\in\mathcal{D }}\mathbb{I}(F_{\theta}(x_{i})\neq y_{i}), \tag{3}\]
where \(\mathbb{I}(\cdot)\) is the indicator function, i.e., the zero-one loss. The second term on the right-hand side is the negative of the empirical risk, which suggests that problem (1) can be decomposed over the target samples, i.e.,
\[\max_{\theta}\mathbb{E}_{x,y\sim p_{(x,y)}}E(x,y;\theta). \tag{4}\]
Problem (4) can be viewed as the standard tuning paradigm with the indifferentiable loss. As \(p(x,y)\) is agnostic, we alternatively consider the following form via introducing the distribution of \(\theta\),
\[E(\mathcal{D};\theta)=E(\mathcal{D};\mathbb{E}_{\theta\sim\pi(\theta)}(\theta ))\approx\mathbb{E}_{\theta\sim\pi(\theta)}E(\mathcal{D};\theta), \tag{5}\]
where \(\theta\) is assumed to be sampled from the parameter distribution \(\pi(\theta)\), the equality holds by defining \(\theta\) as its expectation over \(\pi(\theta)\), and the latter approximation follows [14]. Since we intend to obtain the optimal \(\theta_{*}\), solving problem (1) can be written as
\[\theta_{*}\sim \pi_{*}(\theta), \tag{6}\] \[\pi_{*}(\theta)=\arg\max_{\pi(\theta)}\mathbb{E}_{\theta\sim\pi( \theta)}E(\mathcal{D};\theta),\]
where \(\pi_{*}(\theta)\) represents the best estimation of \(\pi(\theta)\), and the expectation is taken over all candidate models. We can see that this proxy objective relaxes the optimization to \(\theta\) into characterizing its distribution \(\pi(\theta)\).
To make it practical, we parameterize \(\pi\) by the density probability \(\pi(\theta|\omega)\), where \(\omega\) denotes the distribution parameters. As a result, solving problem (6) requires the maximization of the following objective,
\[J(\omega)=\mathbb{E}_{\theta\sim\pi(\theta|\omega)}[E(\mathcal{D};\theta)]= \int E(\mathcal{D};\theta)\pi(\theta|\omega)\,d\theta. \tag{7}\]
The gradient of Eq. (7) w.r.t. \(\omega\) can be computed by
\[\nabla_{\omega}J(\omega)\stackrel{\text{\textcircled{1}}}{=}\mathbb{E}_{\theta\sim\pi(\theta|\omega)}[E(\mathcal{D};\theta)\nabla_{\omega}\log\pi(\theta|\omega)]\stackrel{\text{\textcircled{2}}}{\approx}\frac{1}{b}\sum_{i=1}^{b}E(\mathcal{D};\theta_{i})\nabla_{\omega}\log\pi(\theta_{i}|\omega), \tag{8}\]
where \(\textcircled{1}\) uses the so-called log-likelihood trick [14], which decouples the gradient from the evaluation function \(E(\cdot)\), and \(\textcircled{2}\) adopts a Monte Carlo approximation by empirically drawing \(b\) samples from \(\pi(\theta|\omega)\), i.e., \(\theta_{1},...,\theta_{b}\). From Eq. (8), we can see that the samplings \(\theta_{i}\) used to estimate the gradient of \(\omega\) can be realized through the query chances of EXPECTED. Concretely, in every iteration, we consume \(b\) queries to draw from the current \(\pi(\theta|\omega)\), which are used to estimate the gradient of \(\omega\). Then we update \(\omega\) by gradient ascent and obtain a new \(\pi(\theta|\omega)\) for the next round of samplings. This process terminates once the \(Q\) queries run out or some candidate is already satisfactory.
#### 4.1.2 Implementation with Gaussian Prior
Recall that the canonical form of training a supervised model is i.i.d. log-likelihoods plus a log prior. The most widely used weight regularizer in the literature is weight decay, which corresponds to a centered Gaussian prior. Following this convention, we instantiate the distribution of the model parameters \(\pi(\theta|\omega)\) as a Gaussian, i.e., \(\omega=[\mu,\Sigma]\). Optimizing \(\omega\) requires the natural gradient for scale conformity [14], which generally involves the inverse of the Fisher information matrix of size \(|\theta|\times|\theta|\) in our case. To reduce this heavy computation, we assume that setting \(\Sigma\approx\sigma^{2}I\) will not hurt the tuning performance by much, which enables us to treat \(\sigma^{2}\) as a hyperparameter and only estimate the gradient of \(\mu\). For example, at the first iteration, \(\mu\) is initialized by \(\theta^{0}\) (\(\theta^{0}:=\theta_{0}\)), i.e., \(\pi(\theta|\omega)=\mathcal{N}(\theta|\theta^{0},\sigma^{2}I)\). By leveraging the reparameterization technique, the candidate models are sampled around the current model \(\theta^{0}\), i.e., \(\theta_{i}=\theta^{0}+\sigma\epsilon_{i}\) (\(1\leq i\leq b\)), where \(\epsilon_{i}\sim\mathcal{N}(0,I)\). Plugging such samplings into Eq. (8) yields the gradient estimation w.r.t. \(\omega\). Since updating \(\omega\) is equivalent to updating \(\theta\), we directly provide the gradient estimation with sampling batch size \(b\) w.r.t. \(\theta\),
\[\nabla\mathbb{E}[E(\theta)]\approx\frac{1}{\sigma b}\sum_{i=1}^{b}\epsilon_{ i}E(\theta+\sigma\epsilon_{i}), \tag{9}\]
where \(\mathcal{D}\) is dropped from now on for simplicity. Although starting from a surrogate objective (6), the above implementation helps us optimize model parameters \(\theta\) directly.
Before plugging Eq. (9) into a gradient ascent update, we introduce two techniques to facilitate this gradient estimation. First, we adopt antithetic sampling [25], which has been demonstrated to stabilize the update. That is, in each round, we only independently sample \(b/2\) Gaussian points \(\epsilon_{j}\) and let the rest be their negative copies, i.e., \(\epsilon_{b-j+1}=-\epsilon_{j}\). Second, to reduce the impact of the scale of model performance, we normalize the feedbacks by subtracting the mean and then dividing by the standard deviation before using them. Such a normalization step has been demonstrated to allow a constant learning rate \(\eta\)[26] and also provides an important condition for some fundamental facts used in the Appendix. Alg. 1 summarizes the whole procedure, which is named Performance-guided Parameters Search (PPS).
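A minimal sketch of one PPS iteration is given below. It implements Eq. (9) together with the two techniques above (antithetic sampling and feedback normalization); the hyperparameter values are illustrative assumptions rather than the exact settings of Alg. 1.

```python
import numpy as np

def pps_step(theta, query, b=20, sigma=0.1, eta=0.05, rng=None):
    """One iteration of Performance-guided Parameters Search (a sketch of Alg. 1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = rng.standard_normal((b // 2,) + theta.shape)
    eps = np.concatenate([eps, -eps], axis=0)                    # antithetic sampling
    scores = np.array([query(theta + sigma * e) for e in eps])   # b queries, one per candidate
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)    # normalize the feedbacks
    grad = (scores.reshape((-1,) + (1,) * theta.ndim) * eps).sum(axis=0) / (sigma * b)
    return theta + eta * grad                                    # gradient ascent on E(theta)
```

Repeating `pps_step` for \(\lfloor Q/b\rfloor\) iterations reproduces the outer loop described above.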
#### 4.1.3 Quality Analysis of the Estimated Gradient
We first show that applying the antithetic sampling on Eq. (9) allows us to explicitly build the connections between the estimated gradient and the true gradient.
**Proposition 1**.: _If \(\sigma\) is small, any estimated gradient \(\nabla\mathbb{E}[E(\theta)]\) derived by Alg. 1 can be seen as a projection of the corresponding true gradient \(\nabla E(\theta)\in\mathbb{R}^{|\theta|}\) onto a lower dimensional space with \(b/2\) independent random Gaussian vectors being bases._
Proof.: When antithetic sampling is used, the expression of Eq. (9) can be written as:
\[\begin{split}\nabla\mathbb{E}[E(\theta)]&\approx\frac{1}{\sigma b}\sum_{i=1}^{b}E(\theta+\sigma\epsilon_{i})\epsilon_{i}\\ &\stackrel{\text{\textcircled{1}}}{=}\frac{1}{b/2}\sum_{i=1}^{b/2}\frac{E(\theta+\sigma\epsilon_{i})-E(\theta-\sigma\epsilon_{i})}{2\sigma}\epsilon_{i}\\ &\stackrel{\text{\textcircled{2}}}{=}\frac{1}{b/2}\sum_{i=1}^{b/2}\left(\nabla_{\epsilon_{i}}E(\theta)\cdot\|\epsilon_{i}\|\right)\epsilon_{i}\\ &\stackrel{\text{\textcircled{3}}}{=}\frac{1}{b/2}\sum_{i=1}^{b/2}\langle\nabla E(\theta),\epsilon_{i}\rangle\epsilon_{i}\\ &\stackrel{\text{\textcircled{4}}}{=}\frac{1}{b/2}\sum_{i=1}^{b/2}\text{Proj}_{\epsilon_{i}}(\nabla E(\theta))\cdot\|\epsilon_{i}\|\cdot\epsilon_{i},\end{split}\]
where \(\textcircled{1}\) follows step 2 of Alg. 1, \(\textcircled{2}\) uses the definition of the directional derivative when \(\sigma\to 0\), \(\textcircled{3}\) rewrites the directional derivative as a dot product, and \(\textcircled{4}\) is a natural reformulation to align with the definition of vector projection. Taking all the independent \(\epsilon_{i}\) as bases, the coordinate value of \(\nabla\mathbb{E}[E(\theta)]\) on each base is \(\text{Proj}_{\epsilon_{i}}(\nabla E(\theta))\cdot\|\epsilon_{i}\|\), which completes the proof.
We then introduce the following theorem which quantifies how well a random projection preserves the length information of a vector.
**Theorem 1**.: _Let \(M\in\mathbb{R}^{|\theta|\times\frac{b}{2}}\) denote the random projection matrix with \(||\epsilon_{i}||\cdot\epsilon_{i}\quad(i=1,2,...,\frac{b}{2})\) being the columns. For the true gradient \(\nabla E(\theta)\) at any \(\theta\), we have_
\[\Pr\left\{(1-\xi)\|\nabla E(\theta)\|^{2}\leq\|M^{T}\nabla E(\theta)\|^{2}\leq(1+\xi)\|\nabla E(\theta)\|^{2}\right\}\]
impact [30], which means PPS may waste query chance on less contributive layers.
Our strategy for overcoming the above limitations consists of two techniques. **(1) Layerwise tuning.** To remedy the inefficiency of updating all parameters \(\theta\) as a whole, we partition the tuned parameters layerwise, i.e., \(\theta=\{\ell_{1},\ell_{2},...,\ell_{H}\}\), where \(\ell_{h}(1\leq h\leq H)\) represents the parameters of the \(h\)-th layer. We then tune the parameters \(\theta\) in a more natural way: each time we only focus on updating a single layer's parameters \(\ell_{h}\) while freezing the remaining layers, i.e., \(\theta-\{\ell_{h}\}\). The basic idea is related to the sequential training of neural networks [31]; however, that work tries to scale end-to-end training to large datasets, while we focus on query-efficient tuning from feedbacks. **(2) Query budget reassignment.** We model the importance of different layers by inspecting their performance improvements. Instead of relying on static weights from prior knowledge [32], we propose to dynamically assign more queries to the layers that receive bigger pay-offs.
#### 4.2.2 Query-Efficient Layerwise Tuning
Let \(\alpha\in\mathbb{R}^{H}\) be a layer importance vector. We intend to map it to a (\(H-1\))-dimensional simplex, based on which a layer \(l_{h}\) is sampled to be updated or not. As all the tuned parameters are deemed useful, a base probability \(\frac{\gamma}{H}\) is maintained for each layer, and the sampling distribution \(p\) is then written component-wise
\[p_{h}=(1-\gamma)\frac{\text{exp}(\alpha_{h})}{\sum_{i=1}^{H}\text{exp}(\alpha_ {i})}+\frac{\gamma}{H}. \tag{11}\]
In practice, we make the base probability deterministic to guarantee a least update for each layer. That is, we conduct a unit execution for every layer before sampling, which amounts to giving \(u\) queries2 to each layer beforehand. As the least update tells us which layers are more contributive, their respective performance improvements are used to measure the layer importance. Specifically, for the \(h\)-th layer at the \((t+1)\)-th iteration, the update rule for \(\alpha\) is written as:
Footnote 2: As the number of parameters in different layers varies, \(u\) is not identical for different layers in practice. We use the same \(u\) for the convenient statement here.
\[\alpha_{h}^{t+1}=\alpha_{h}^{t}+\beta\underbrace{\max\{0,\bar{E}_{h}^{t+1}(\theta)-\hat{E}_{h-1}^{t+1}(\theta)\}}_{\text{Average improvement }I_{h}^{t+1}}, \tag{12}\]
where \(\beta\) represents how much we rely on the improvement observed from the least update. The average improvement \(I_{h}^{t+1}\) is obtained by comparing with the last layer update. That is, \(\bar{E}_{h}^{t+1}(\theta)\) denotes the average evaluation result of the candidate models that perturb the \(h\)-th layer at iteration \(t+1\),
\[\bar{E}_{h}^{t+1}(\theta)=\mathbb{E}_{\ell_{h}\sim\mathcal{N}(\ell_{h}^{t+1},\sigma^{2})}E\left(\{\ell_{1}^{t+\frac{1}{2}},...,\ell_{h-1}^{t+\frac{1}{2}}, \ell_{h}^{t},...,\ell_{H}^{t}\}\right),\]
and \(\hat{E}_{h-1}^{t+1}(\theta)\) denotes the evaluation after the \((h-1)\)-th layer has been updated with the unit queries during iteration \(t+1\),
\[\hat{E}_{h-1}^{t+1}(\theta)=E\left(\{\ell_{1}^{t+\frac{1}{2}},...,\ell_{h-1}^{ t+\frac{1}{2}},\ell_{h}^{t},...,\ell_{H}^{t}\}\right).\]
Particularly, \(\hat{E}_{h-1}^{t+1}(\theta)=\hat{E}_{H}^{t}(\theta)\) if \(h=1\).
Note that the above procedure essentially splits a single iteration into two stages. The first stage is in charge of the least update for every layer, which yields the importance factors used for the query reassignment in the second stage. We emphasize that the first stage is indispensable because we want to inspect the response of every layer, given that only a few layers are selected in the second stage. The complete algorithm, named Layerwise Coordinate Parameter Search (LCPS), is formally summarized in Alg. 2; a simplified sketch of one iteration is also given below.
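The sketch below is a simplified rendering of one LCPS iteration under stated assumptions: `update_layer` stands for a single-layer PPS-style update that returns the updated layer parameters together with the average feedback of its candidates, `evaluate` returns the current score, and the exact query accounting of Alg. 2 is not reproduced.

```python
import numpy as np

def lcps_iteration(layers, alpha, update_layer, evaluate, b, u, beta, rng):
    """One simplified LCPS iteration: a least update per layer, then query reassignment."""
    # Stage 1: unit update for every layer and importance update via Eq. (12).
    prev = evaluate(layers)
    for h in range(len(layers)):
        layers[h], avg_score = update_layer(layers, h, u)    # u queries spent on layer h
        alpha[h] += beta * max(0.0, avg_score - prev)        # average improvement I_h
        prev = evaluate(layers)                              # score after updating layer h
    # Stage 2: reassign the remaining queries following Eq. (11) with gamma = 0.
    p = np.exp(alpha - alpha.max())
    p = p / p.sum()                                          # softmax over layer importances
    for _ in range((b - len(layers) * u) // u):
        h = int(rng.choice(len(layers), p=p))                # sample a layer to tune
        layers[h], _ = update_layer(layers, h, u)
    return layers, alpha
```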
#### 4.2.3 Regret Analysis
Reassigning different numbers of queries to different layers can be viewed as an exploration-exploitation game. Concretely, tuning without layer importance amounts to pure exploration, which is not a good option when queries are limited but layer discrimination does exist. By contrast, merely updating the most important layer corresponds to a pure exploitation strategy, which is not wise unless the query number is very small. If we regard tuning a specific layer as selecting a slot machine to play, our strategy can be understood as solving a multi-armed bandit problem [33]. In this sense, we aim to minimize the following expected regret,
\[G_{\max}(T)-\mathbb{E}[G_{\mathcal{A}}(T)], \tag{13}\]
where \(T=1,2,...,\lfloor Q/b\rfloor\) is the horizon time, \(G_{\max}(T)\) denotes the performance gain by picking the unknown optimal layer sequence, \(G_{\mathcal{A}}(T)\) stands for the performance gain achieved by a designed algorithm, and the expectation is taken over the sampled layer sequences.
Straightforwardly optimizing Eq. (13) is intractable, but this expected regret serves as a measure of how closely algorithm \(\mathcal{A}\) approaches the oracle performance. Here we present the expected regret bound for the proposed LCPS.
```
0: Initially provided model \(F_{\theta_{0}}\), query budget \(Q\), learning rates \(\eta\), \(\beta\), batch size \(b\), variance \(\sigma^{2}\), unit size \(u\).
1:for\(t=0,...,\lfloor Q/b\rfloor\)do
2:for\(h=1,...,H\)do
3: Update \(\ell_{h}^{t+\frac{1}{2}}\) with \(u\) queries following Alg. 1.
4: Compute average improvement \(I_{h}^{t+1}\) through Eq. (12).
5:\(\alpha_{h}^{t+1}\leftarrow\alpha_{h}^{t}+\beta I_{h}^{t+1}\).
6:endfor
7: Compute \(p^{t+1}\) by Eq. (11) with \(\gamma=0\).
8:for\(j=1,...,\lfloor(b-Hu)/u\rfloor\)do
9: Sample a layer \(h\) with \(p_{h}^{t+1}\).
10: Update \(\ell_{h}^{t+1}\) with \(u\) queries following Alg. 1.
11:endfor
12:endfor
13:Output:\(\theta^{\lfloor Q/b\rfloor+1}\).
```
**Algorithm 2** Layerwise Coordinate Parameter Search (LCPS)
**Theorem 2**.: _Given a deep model whose tuned parameters are \(\theta=\{\ell_{1},\ell_{2},...,\ell_{H}\}\), for any \(\beta>0\), we have that_
\[G_{\max}-\mathbb{E}[G_{\text{LCPS}}]\leq(\beta c(e-2)+1)G_{\max}+\frac{c}{\beta} \ln H\]
_holds for any \(T>0\), where \(c=\frac{b-Hu}{u}\), (\(b\) is the batch size, \(u\) is the unit size), and \(e\) is Euler's number._
The proof of Theorem 2 follows the sketch of the Exp3 algorithm [33]; we leave it to the Appendix for readers who are interested in the differences. From Theorem 2, we can see that: (1) A smaller \(c\) implies less expected regret, which suggests computing the query reassignment more frequently. However, we assume the performance gain hardly changes within a few updates and thus attribute this choice to the selection of the batch size \(b\). (2) This weak regret bound is also a function of the step size \(\beta\). From the Karush-Kuhn-Tucker (KKT) conditions, the regret reaches its minimum if we set \(\beta=\sqrt{\frac{\ln H}{(e-2)G}}\), where \(G\) is the predicted maximum performance gain.
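For illustration, the two quantities controlling the bound can be computed directly; the values of \(H\), \(b\), \(u\), and \(G\) below are assumptions for the sake of example, not the paper's settings.

```python
import numpy as np

H, b, u = 12, 60, 4                            # illustrative: layer count, batch size, unit size
G = 1.0                                        # predicted maximum performance gain (e.g., accuracy in [0, 1])
c = (b - H * u) / u                            # reassigned unit updates per iteration (Theorem 2)
beta = np.sqrt(np.log(H) / ((np.e - 2) * G))   # step size minimizing the regret bound
print(c, beta)                                 # here: 3.0 and roughly 1.86
```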
**Remark 2**.: _This regret bound is related to the scale of the performance gain; the accumulated performance gain \(G_{\max}\) could be very large if \(T\rightarrow\infty\). Except for some evaluation metrics like accuracy, which naturally make \(G_{\max}\) upper bounded, one can re-normalize the immediate reward so that the bound of Theorem 2 remains meaningful._
## 5 Experiment
We remind readers of two rules that all the experiments obey. First, our experiments include how the initially provided model is produced, i.e., pre-training; but once pre-training is completed, the source data is no longer used during tuning, following the convention of standard model tuning. Second, we assume providers only tune the parameters, which enables a lightweight modification on the users' side and also a small communication cost.
### _Experimental Setup_
**Datasets.** Adult [34] is a tabular dataset for categorizing the annual income of different groups of citizens. As personal information is recorded, it is also a benchmark for fairness studies. Amazon review [35] is a text dataset that contains user comments about various products. The corrupted CIFAR-10/CIFAR-100 [36] are visual datasets where different type/level of corruptions simulate the real-world data noises. STS-B [37] predicts the semantic similarity between pairs of sentences which are extracted from different sources.
Unless otherwise restated, the main usages of the above datasets are described as follows. Adult has \(14\) properties such as country, age, work class, education, etc., and we predict whether income exceeds \(\$50K\) per year. We pre-train a binary classifier on the records with the country of "U.S" and take "non-US" records as unobserved target data. The Amazon dataset is constructed from Amazon review data with two categories of products selected, i.e., "Electronics" and "Watches". In our experiments, the data-rich category "Electronics" is used to pre-train a prediction model which maps user comments to a rating score ranging from one to five, and "Watches" is treated as the target data. The settings of Adult and Amazon follow the work [38]. In terms of CIFAR-10-C/CIFAR-100-C, the initially provided model is built on clean images, and it is then tuned to fit the disjoint corrupted images following the unsupervised tuning research [2], which mimics unexpected distribution shifts in the real world. Last, we aim to tune BERT [39] and its variants on the STS-B task under EXPECTED. Following the research [40], the models are first trained on the sentence pairs from the genre of MSRvid and then tuned to fit the unknown target data extracted from Images, where the evaluation metric is Pearson's correlation coefficient.
On the task of corrupted image classification, we treat all the corrupted images (target data) as the tuning data for a fair comparison with the unsupervised tuning methods. Throughout all the remaining datasets, the target data are split into two sets. We do randomly equal splitting for Adult and Amazon and use the default splitting for STS-B. One is the _support set_ that is used for evaluating the query efficiency of tuning algorithms, and the other is the _holdout set_ on which the model generalization is assessed. The corresponding performances are denoted by "sup" and "hol", respectively.
**Models.** For Adult, we use a 3-layer MLP whose penultimate layer has \(80\) neurons. Following the fashion of [43], we simply perturb the weights of the last layer, which thus contains \(80\) parameters for binary classification. For score prediction on Amazon, our implementation is based on the torchtext library, in which the first layer is a mapping from the vocabulary to a latent representation, followed by three convolutional layers and a linear transformation to the label space. The weights of the first layer are set as the tuned parameters (with a size of \(250K\)) because the remaining layers are found to be less sensitive to the change of domains according to our experiments3. In terms of corrupted CIFAR-10/CIFAR-100, we use residual networks [44] with \(26\) blocks
Fig. 3: Performance comparison on Adult and Amazon. Throughout all the experiments, the accuracy on the support set is monotonically non-decreasing, since we display the historically best at every iteration. Note that “good VOC” and “bad VOC” correspond to the different selections of vocabulary. The line shadow represents the standard deviation.
which are implemented with the pycls library [45]. We modulate the features of the target data by estimating normalization statistics and then updating the transformation parameters channel-wise. This setting is consistent with recent research [15, 18], and the tuned parameters turn out to make up only a small proportion of the whole model. Similarly, we resort to tuning the layer normalization of BERT and its variants for predicting sentence similarity, managing to earn more performance improvement by tuning on the target data.
**Baselines.** Naive baselines for EXPECTED include the initially provided model from pre-training and supervised tuning, which are dubbed "INI" and "OPT", respectively. In addition, Random Search (RS) [22] is also considered here, in a manner similar to hyperparameter tuning. The remaining baselines are used for specific comparisons, whose results are retrieved from the literature or recomputed when possible. For the methods involving randomness, we repeat the experiments 10 times for a convincing comparison.
### _EXPECTED on Shifted Data Distribution_
Distribution shift is one of the common reasons for model tuning. This group of experiments justifies the EXPECTED setting over different tasks under this configuration.
**Income classification.** We conduct Alg. 1 on Adult with the query budget \(Q=1K\). Fig. 3(a) exhibits the classification performance on support set and holdout set, respectively. We observe that (1) PPS significantly improves the performance of INI and closely approaches OPT at \(Q=1K\). (2) RS only achieves a subtle improvement to the initial model with the same number of queries and thus is less efficient for tuning model parameters.
**Rating prediction.** Alg. 1 is also run on Amazon, where a small query budget \(Q=80\) is used4. When a good vocabulary is carefully selected, as shown in Fig. 3(b), the room for improvement w.r.t. the initially provided model is quite limited, capped by the supervised tuning performance, i.e., OPT. In this case, the accuracy achieved by PPS only increases by \(1.3\%\) and \(0.4\%\) on the support set and holdout set, respectively. By contrast, when a bad vocabulary is unintentionally selected, as shown in Fig. 3(c), we can see that PPS rapidly boosts the performance of INI, and it becomes stable after \(50\) queries. In summary, the comparison between Fig. 3(b)&(c) shows that (1) PPS consistently earns more performance than RS regardless of the choice of vocabulary. (2) PPS is sometimes found to be trapped in a local optimum, probably because the non-smooth evaluation function is insensitive to model perturbations. In other words, the multiple samplings with a fixed variance probably fail to help the model escape from saddle points.
Footnote 4: We empirically find the desired batch size on this task could be very small, and we use \(b=4\) in our experiments. From Proposition 1, we suspect that the derived gradient for the tuning purpose lies in a very low dimensional space.
**Corrupted image classification.** We run both Algs. 1&2 on CIFAR-10-C/CIFAR-100-C with query numbers of \(3K/1K\) to tune the batch normalization layers. In this experiment, more baselines are included, such as Domain Adversarial Networks (DAN) [41], Test-Time Training (TTT) [15], test-time Batch Normalization (BN) [46], and the test-time adaptation work Tent [2]. Fig. 4(a) summarizes these methods and presents the average errors of the tuning performance, where OPT with supervised end-to-end tuning is omitted because it can achieve a very low error. The results show that (1) Tent is the most powerful among the unsupervised tuning methods, which access the entire features of the target data, while the average error of LCPS is surprisingly better than Tent with \(Q=1K\). (2) Even offered more queries, RS and PPS cannot compete with LCPS, implying the advantage of Alg. 2 in tuning modern DNNs. Fig. 4(b) exhibits a closer look at the specific results over each type of corruption. We observe that Tent updates towards the wrong direction on some particular corruptions, such as "motion" and "bright", causing even worse performance than BN. However, such performance drops do not happen to LCPS because its feedbacks are simple but reliable (as label information is used during the evaluations).
**Sentence similarity prediction.** Alg. 2 is also run on STS-B, in which different models including BERT, RoBERTa, and DistilBERT [40] are examined. Pearson's correlation coefficient of each pre-trained model on the holdout set, i.e., the test set of Images, is \(0.861\), \(0.907\), and \(0.849\), respectively. Again, we denote them by "INI_hol" in terms of each backbone. Although these results are comparable with how the models behave on the source task [40], we are interested in whether they can be further tuned to achieve a better performance. Using \(Q=5K\) queries, we apply RS and LCPS to these three models and the corresponding improvements are shown in Fig. 5, where
Fig. 4: (a) Comparison of different model tuning methods on CIFAR-10-C and CIFAR-100-C with the highest severity. (b) Performance of BN, Tent, and LCPS on CIFAR-10-C in terms of various corruptions.
the standard tuning denoted by "OPT_hol" is added as a reference. The experimental results show that LCPS significantly improves the pre-trained model across different backbones, and it certainly outperforms the RS strategy as well. Surprisingly, the standard tuning fails to upper bound LCPS as in the other experimental tasks. A reasonable explanation is that our LCPS utilizes **estimated full-batch gradients**, which is shown to better improve generalization. Note that standard tuning on BERT typically uses stochastic gradients, which may be too noisy to provide effective information to update the model.
### _EXPECTED for Customized Evaluation Metrics_
In some applications, like machine learning service provision, clients may require a customized evaluation metric. Thus, the provided model, which was never trained towards such an objective, usually cannot fulfill the downstream expectation. In this part, we study two interesting topics as representatives of this situation. The first one is fair classification, where not only classification accuracy but also a fairness critic is considered. The second one is fault-intolerant learning, where the original evaluation metric is replaced by another metric in the target tasks. For simplicity, we follow the basic configuration about the datasets, where the data distribution shift still exists, targeting a more challenging model tuning task.
#### 5.3.1 Fair Classification
In this experiment, _demographic parity_[47] is adopted as the fairness metric. Suppose a user requires a classifier which is unbiased w.r.t. gender \(z\) (\(z=1\) denotes male and \(z=0\) female) in terms of the high salary (\(>\$50K\) per year). The corresponding discrimination level of demographic parity can then be defined by \(\Gamma(\theta)=|\Pr(F_{\theta}=1|z=1)-\Pr(F_{\theta}=1|z=0)|\). That means, every time after a local evaluation, the user will return a two-dimensional tuple with one element for the classification accuracy and the other for the discrimination level, i.e., \((E,\Gamma)\). Since the two metrics commonly compete with each other [48], we propose to update with their joint gradients as shown in Alg. 3 of the Appendix.
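On the user side, the returned tuple can be computed in a few lines; the sketch below assumes hard binary predictions and a binary sensitive attribute, and is only meant to illustrate the feedback \((E,\Gamma)\), not the joint-gradient update of Alg. 3.

```python
import numpy as np

def fairness_feedback(y_hat, y, z):
    """Return (accuracy, demographic-parity gap) for binary predictions y_hat."""
    acc = np.mean(y_hat == y)
    gap = abs(np.mean(y_hat[z == 1] == 1) - np.mean(y_hat[z == 0] == 1))
    return acc, gap    # the provider only ever sees this tuple, never (X, y, z)
```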
Fig. 6 shows the results of 100 independent executions of PPS under the above setting. Each green point denotes the performance of a tuned model on the support set, and the red stars are the corresponding performances on the holdout set; the closer to the lower right, the better. From this figure, we can see that (1) the particles falling in the green zone refer to the models which achieve improvements over the pre-trained model in terms of both classification accuracy and model fairness on the holdout set, making up \(100\%\) of the trials. (2) The overall tuning accuracy is superior to that at testing, while the discrimination level at testing is slightly better, implying an acceptable discrepancy between tuning and testing. (3) The discrimination level of INI has been dramatically decreased (by more than half) after tuning. Thus we can see that our method under EXPECTED serves as an efficient fair-tuning approach for inaccessible data.
#### 5.3.2 Fault-intolerant Evaluation
One of the common fault-intolerance metrics is top-\(K\) accuracy [49]. Unlike single-output prediction, top-\(K\) classification produces lower errors. We take the multi-class classification task over CIFAR-10-C as an example. In our experiment, apart from tuning with the top-\(1\) error, the top-\(5\) error is used as the tuning metric for comparison. We achieve this by simply replacing the standard top-\(1\) error with the top-\(5\) error during the local evaluation. The experimental results on two corruption types with \(2K\) queries, i.e., Gaussian and Impulse noise, are finally reported.
Fig. 7 exhibits the results of tuning with the top-\(1\) and top-\(5\) metrics separately. When one of them is not used for tuning, its corresponding error is computed by evaluating the tuned model on the target task with this metric. For example, regarding images corrupted by Gaussian noise, the model tuned with the top-\(1\) metric under EXPECTED eventually achieves about a \(17.7\%\) error, and we also obtain its top-\(5\) error by evaluating the tuned model with the top-\(5\) metric, which turns out to be around \(1.5\%\). From Fig. 7, we can see that (1) our LCPS is efficient for model tuning under EXPECTED because it dramatically decreases the classification errors on both metrics. Notably, through \(2K\) queries, the top-\(1\) error has been decreased by around \(7.7\%\) and \(11.7\%\) on the two types of corruptions, respectively. (2) Although tuning with the top-\(1\) metric decreases the top-\(5\) error as well, the top-\(5\) error can be reduced to a smaller value when it is directly used as the tuning metric. That means, if the user demands a lower top-\(5\) error, LCPS naturally satisfies this requirement by straightforwardly replacing the top-\(1\) error with the top-\(5\) error at the beginning of tuning.
**Note.** The fairness metric is often hard to optimize since it is a group-level measure defined on the entire tuning set. The top-\(K\) error is non-differentiable and is usually handled by some extra operation like truncation. That means both of them need some elaborate design in a standard model tuning task. Interestingly, these metrics can be used with our methods out of the box, since under EXPECTED we merely collect the emitted performance over them.
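Swapping the tuning metric only changes the user-side evaluation; a minimal top-\(K\) error sketch is shown below, assuming `scores` holds the per-class outputs of the tuned model on the target samples.

```python
import numpy as np

def topk_error(scores, y, k=5):
    """Fraction of samples whose true label is not among the k highest-scoring classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]            # indices of the k largest scores
    hit = (topk == y.reshape(-1, 1)).any(axis=1)         # is the true label inside the top k?
    return 1.0 - hit.mean()
```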
### _A Close Investigation to LCPS_
We further investigate how LCPS works for complex models by visualizing the process of layerwise tuning on CIFAR-10-C (with Gaussian corruption) and STS-B (on BERT). The basic experimental settings follow Section 5.2 but we let \(Q=2K\) on CIFAR-10-C for a better comparison.
We present the experimental results in Fig. 8, which shows that (1) in each iteration, only a subset of layers is selected by LCPS for updates, and their additional query numbers turn out to be much higher than the corresponding expected numbers (which are proportional to \(|\ell_{h}|(h=1,2,...,H)\) and indicated by the grey dashed lines). (2) A layer selected in the previous iterations tends to be selected again later. This is because the sampling probabilities of the selected layers are much higher than those of the remaining ones due to the dominant average improvements (refer to Eqs. (11)&(12)). (3) As the iteration number increases, fewer layers are sampled, which means that the tuning process moves towards exploitation given the limited query budget. This observation is also demonstrated by the steadily decreasing entropy of the sampling probabilities (see Fig. 8(c)). (4) The additional queries reassigned during the second stage are dependent on the specific task. Roughly speaking, shallow-feature updates are more crucial for CIFAR-10-C, while STS-B prefers deep features. A possible explanation is that Gaussian corruption changes low-level features significantly, while the data genre in STS-B is encoded by high-level information.
### _Important Factors Study_
We empirically verify four factors that may have impact on the results.
**Sampling batch size.** We vary the sampling batch size from \(2\) to \(80\) with a step size of \(2\) on Adult to investigate the trade-off between the precision of the estimated gradients and the total number of update steps. The results are shown in Fig. 9(a). On the support set, the optimal performance is achieved when the batch size is about \(10\), while the optimum on the holdout set is reached with a batch size of \(20\). Hence, we use \(b=20\) throughout all other experiments on Adult. Related research [14, 50] suggests that \(b\) is determined by the parameter dimension, i.e., \(b=4+\lfloor 3\log|\theta|\rfloor\). We find this setup useful in most cases except on Amazon; recall from Section 5.2 that the sampling batch size for Amazon is much smaller. Therefore,
Fig. 5: Generalization improvement of BERT and its variants after the model tuning on STS-B, which is computed by \(\frac{s}{s_{0}}\), where \(s_{0}\) and \(s\) represent the model performance before and after tuning, respectively.
Fig. 8: Query budget reassignment of LCPS on CIFAR-10-C and STS-B. (a) and (b) correspond to the results on CIFAR-10-C with Gaussian corruptions and STS-B with BERT as the backbone, respectively. The grey dashed line indicates the expected query assignment for each layer without the layer importance concern. (c) exhibits the entropy of the sampling probability over each iteration for the two experiments.
Fig. 6: Discrimination level reduction for model fairness tuning, where the particles falling in "improved Zone" represent the models that have been improved in terms of both accuracy and fairness metrics on the holdout set.
Fig. 7: Evaluation performance (%) of LCPS with the top-\(1\) or top-\(5\) error as the tuning metric on two types of corruptions (Gaussian and Impulse noise) over CIFAR-10-C. “Non” represents an initially provided model with test-time BN that is directly evaluated without any tuning effort. The lowest errors are marked in bold.
Fig. 9: Ablation study on three factors: sampling batch size, support size, and precision of feedbacks. “XDEC” in (c) means that the feedback value is rounded with X decimals.
we caution that this hyperparameter should be carefully selected, especially when query efficiency is required. One possible workaround is to resort to an auxiliary validation set before executing the tuning.
**Support size.** We explore the effect of the support-set size by varying its ratio from \(10\%\) to \(90\%\) on Adult, and the results are shown in Fig. 9(b). As the support size increases, it becomes harder to fit all the supported samples given the same query budget, but the model generalization, i.e., the accuracy on the holdout set, gradually improves. Additionally, we can see that a smaller support set leads to a larger variance. By contrast, when more than \(50\%\) of the full support data (\(>1000\) samples) is used, the model generalization becomes steady with only a slight fluctuation. This observation also suggests that EXPECTED does not demand a big support set, a desirable trait for data-scarce scenarios. In practice, the support size should be increased to guarantee generalization if the distribution shift between the original pre-training data and the target data is known to be large, while it should not be decreased even if collecting data is expensive.
**Precision of feedbacks.** We study whether the precision of the feedbacks has a direct impact on EXPECTED, which also matters when backdoor attacks [51] are a concern (please refer to the Appendix for more explanation). To this end, we run PPS on Adult with the number of decimals for the accuracy values set from \(0\) to \(3\). The tuning performances on the support set are shown in Fig. 9(c), which demonstrates that (1) the zero-decimal case fails to preserve the quality of the feedbacks, as the performance drops dramatically compared with the best configuration; (2) more precise feedbacks usually guarantee better performance. However, since \(\frac{1}{N_{\text{sup}}}>0.01\%\) on Adult, where \(N_{\text{sup}}\) is the support size, 2-decimal feedback is already sufficient in this case. Hence, the selection of the number of decimals can be attributed to the side information about the support size.
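The precision ablation simply rounds the returned accuracy before the tuner uses it; a small sketch of the rounding and of the \(1/N_{\text{sup}}\) resolution argument follows, where the support size is an illustrative value.

```python
import numpy as np

def rounded_feedback(accuracy_percent, decimals=2):
    """Accuracy feedback (in %) reported with a limited number of decimals, as in Fig. 9(c)."""
    return np.round(accuracy_percent, decimals)

N_sup = 1000                          # illustrative support size; Adult uses more than 1000
print(100.0 / N_sup)                  # one flipped prediction changes accuracy by 0.1%,
print(rounded_feedback(84.2371))      # so a 0.01% resolution (2 decimals) loses no information
```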
**Layer importance.** To verify the necessity of developing LCPS for tuning complex models, we compare PPS and LCPS (with and without layer importance) by running them on CIFAR-10-C with Gaussian and Impulse corruptions. Table II displays the corresponding results. We can see that LCPS needs fewer queries than both PPS and LCPS (w/o) to achieve the preset performances, showing a favourable property in tuning DNNs.
## 6 Discussion
We discuss the affinities of this work and existing research to clarify the scope of this study.
**Model tuning or model adaptation?** Our statement of not changing the semantic classes is consistent with the convention of domain adaptation [41]. However, we use "tuning" instead of "adaptation" throughout this paper for three reasons. (1) Except for some source-free studies [2, 18], most domain adaptation works [41, 52, 53, 54] perform alignment between source and target data, while EXPECTED focuses on tuning a provided model to fit the target data only, regardless of the performance on the source. (2) The application of handling customized metrics on target data conceptually falls in the model tuning community, because the source-target distribution shift assumed in domain adaptation is not a necessary requirement in the proposed setting. (3) Standard tuning with accessible target data usually upper bounds the proposed method. Technically, similar to standard tuning, our methods can also apply to the case where the semantic labels are changed and the classifier's head needs renewing. Nevertheless, finding the optimal solution in such a huge search space is more difficult, especially when a tight query budget is offered.
**Black-box optimization or reinforcement learning?** To the best of our knowledge, this is a first-of-its-kind work that conducts model tuning on inaccessible data through Black-box Optimization (BO). As a result, the PPS used for solving EXPECTED could be replaced by alternative BO solutions, such as CMA-ES [50] and Bayesian optimization [55]. Note that we prefer PPS here because it achieves performance close to CMA-ES and also scales to higher-dimensional parameters (refer to Fig. 10 and its experimental setup and result analyses in the Appendix). In addition, one may realize that this challenge w.r.t. complex models is related to Reinforcement Learning (RL) [56], because we aim to find the optimal update strategy to maximize the accumulated reward (Eq. (13)). Essentially, we cast layerwise tuning as a multi-armed bandit problem [33], which can be seen as a simple form of RL without _state_ modeling [57]. We present more detailed analyses in the Appendix for a clearer exhibition.
## 7 Conclusion
This paper presented a pioneering study of how to tune a provided model with only restrictive feedbacks on the target task. To make this new setting clearer, we compared it with three summarized model tuning paradigms by carefully categorizing existing research. Our main technique borrows the idea of NES [14] to estimate the distribution of the tuned parameters, with special consideration of its practicability for tuning modern DNNs. The accompanying theoretical analyses support the utility of the proposed methods. Our experiments verified that the proposed methods can deal
\begin{table}
\begin{tabular}{l c c c} \hline \hline Type (\%) & PPS & LCPS (w/o) & LCPS (w/) \\ \hline Gaussian (\(22.0\)) & \(>10.0\) & \(\sim 3.0\) & \(\sim 0.2\) \\ Impulse (\(20.0\)) & - & \(>8.4\) & \(\sim 3.5\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: The required query number (\(K\)) to achieve the preset tuning performance for two types of corruptions (Gaussian and Impulse) on CIFAR-10-C.
Fig. 10: Comparison of three BO strategies. **Left**: tuning performance curves over query number on Adult. **Right**: running time (seconds) comparison on Adult and Amazon where the time cost of CMA-ES and Bayesian are not applicable due to OOM issues with high-dimensional parameters.
with the potential distribution shift (happening on target data) and customized metric problems. Besides, we revealed the properties of layerwise tuning strategy and explored some factors that may influence the experimental results.
In the future, we will explore the following two directions. (1) Query-efficiency improvement. The specific form of Eq. (8) is not the only choice for model parameter search; a number of surrogate objectives are available in the literature [58]. A recent study [59] reveals that an ensemble of surrogates achieves superior results for the hyperparameter tuning task. Inspired by this finding, we will explore whether it helps improve the query efficiency of our methods. (2) Extension to more general tasks. Some tuning applications may require modifying the model structure; e.g., in multi-class classification, the downstream data may have different semantic classes, or model providers may change the model structure during tuning to expand the tuning space. Other extensions, like tuning generative models, are also interesting. For example, in molecular synthesis [60], we cannot describe what the desired protein looks like, but there are multiple metrics for us to evaluate how good the protein-synthesis model is.
|
2310.16161 | MyriadAL: Active Few Shot Learning for Histopathology | Active Learning (AL) and Few Shot Learning (FSL) are two label-efficient
methods which have achieved excellent results recently. However, most prior
arts in both learning paradigms fail to explore the wealth of the vast
unlabelled data. In this study, we address this issue in the scenario where the
annotation budget is very limited, yet a large amount of unlabelled data for
the target task is available. We frame this work in the context of
histopathology where labelling is prohibitively expensive. To this end, we
introduce an active few shot learning framework, Myriad Active Learning (MAL),
including a contrastive-learning encoder, pseudo-label generation, and novel
query sample selection in the loop. Specifically, we propose to massage
unlabelled data in a self-supervised manner, where the obtained data
representations and clustering knowledge form the basis to activate the AL
loop. With feedback from the oracle in each AL cycle, the pseudo-labels of the
unlabelled data are refined by optimizing a shallow task-specific net on top of
the encoder. These updated pseudo-labels serve to inform and improve the active
learning query selection process. Furthermore, we introduce a novel recipe to
combine existing uncertainty measures and utilize the entire uncertainty list
to reduce sample redundancy in AL. Extensive experiments on two public
histopathology datasets show that MAL has superior test accuracy, macro
F1-score, and label efficiency compared to prior works, and can achieve a
comparable test accuracy to a fully supervised algorithm while labelling only
5% of the dataset. | Nico Schiavone, Jingyi Wang, Shuangzhi Li, Roger Zemp, Xingyu Li | 2023-10-24T20:08:15Z | http://arxiv.org/abs/2310.16161v2 | # MyriadAL: Active Few Shot Learning for Histopathology
###### Abstract
Active Learning (AL) and Few Shot Learning (FSL) are two label-efficient methods which have achieved excellent results recently. However, most prior arts in both learning paradigms fail to explore the wealth of the vast unlabelled data. In this study, we address this issue in the scenario where the annotation budget is very limited, yet a large amount of unlabelled data for the target task is available. We frame this work in the context of histopathology where labelling is prohibitively expensive. To this end, we introduce an active few shot learning framework, Myriad Active Learning (MAL), including a contrastive-learning encoder, pseudo-label generation, and novel query sample selection in the loop. Specifically, we propose to massage unlabelled data in a self-supervised manner, where the obtained data representations and clustering knowledge form the basis to activate the AL loop. With feedback from the oracle in each AL cycle, the pseudo-labels of the unlabelled data are refined by optimizing a shallow task-specific net on top of the encoder. These updated pseudo-labels serve to inform and improve the active learning query selection process. Furthermore, we introduce a novel recipe to combine existing uncertainty measures and utilize the entire uncertainty list to reduce sample redundancy in AL. Extensive experiments on two public histopathology datasets show that MAL has superior test accuracy, macro F1-score, and label efficiency compared to prior works, and can achieve a comparable test accuracy to a fully supervised algorithm while labelling only 5% of the dataset.
## 1 Introduction
Deep learning [19] has achieved numerous successes in supervised settings, producing state of the art accuracy and generalization [21, 25, 33]. However, for tasks with a scarcity of labelled data, deep learning is not nearly as effective [13, 36]. With the ever increasing amount of unlabelled data, and the growing annotation cost, innovation has shifted towards more label efficient strategies [18, 34]. In recent years, two label efficient learning paradigms have emerged: Active Learning (AL) and Few Shot Learning (FSL). AL tackles the data scarcity problem by selecting only the most informative data for labelling [26]. However, conventional AL models require moderate annotation budgets and often underperform otherwise [22]. On the other hand, FSL takes a different approach, utilizing transfer learning or model adaptation/generalization techniques and only a handful of labelled samples from the target dataset [27, 30].
Despite the promising performance, it should be noted that both AL and FSL utilize only a few pieces of annotated data in model training, leaving the unlabelled samples as untapped potential. We argue that the effective use of the unlabelled data would further improve the performance, especially under the scenario where only a small annotation budget is available. To tackle this challenge, we propose a novel framework for Active Few Shot Learning: Myriad Active Learning (MAL). Our first contribution stemming from MAL is a framework that incorporates self-supervised learning, pseudo-labels, and active learning in a positive feedback loop. The pseudo-labels of the unlabelled data are updated every AL cycle and supplement the uncertainty measurement for a more precise and diverse query selection. Our second contribution is designing an algorithm to make the most efficient use of the pseudo-labels in the query selection. The new recipe combines classic uncertainty measures to precisely define sample types based on their comparative uncertainty. We then sample evenly
from these types using a self-regulating algorithm, facilitating pseudo-label updates in the next cycle.
We frame the target problem in the context of digital histopathology, where expert pathologists are required to annotate samples in a prohibitively expensive and time-consuming process [2]. In contrast, the number of unlabelled histopathology images is extremely high, as a single scan can produce hundreds of unique images due to the underlying tissue structures. Under the setting of a very limited annotation budget, we evaluate MAL on two histopathology image sets, and show its superiority to classical active learning techniques via comparison and ablation. Notably, MAL can achieve a comparable test accuracy to a fully supervised learning algorithm while labelling only 5% of a target dataset, demonstrating its potential for effective label efficient learning. Our contributions are summarized as follows:
1. We formulate a new problem: active few-shot learning to address high annotate cost in digital histopathology. The proposed framework utilizes the abundant unlabelled data for more label-efficient model learning.
2. We develop a novel uncertainty-based active learning algorithm, Myriad Active Learning (MAL). MAL defines a new type of sample diversity, effectively supplementing the existing annotated data for rapid classification accuracy increases.
3. We show that the proposed method achieves state-of-the-art performance on histopathology datasets in the target setting. As well, with a higher annotation budget, MAL can quickly obtain test accuracies comparable to that of a fully-supervised model.
## 2 Related Work
**Self-Supervised Learning** (SSL) aims to solve traditional supervised learning problems without labels [3]. Instead, SSL explores data relations by constructing self-supervised tasks [1]. A prominent type of SSL is contrastive learning [12], including the recent variant Momentum Contrastive Learning (MoCo) by He et al. [11], which has produced state of the art results on many image-based datasets. SSL algorithms have also produced results competitive with supervised learning on histopathology datasets [5, 15] when the datasets are large. However, most recent efforts to incorporate a limited annotation budget in computational histopathology have been unsuccessful in terms of label efficiency [5, 20].
**Deep Active Learning** (DAL) is another option to tackle the data hungry nature of Deep Learning (DL). The key proposition of DAL is that the majority of the benefits can be obtained from a minority of the samples. In DAL, the training set is unlabelled, optionally with a small number of labelled samples, called the seed. A portion of the unlabelled samples is selected for labelling every cycle, and then included in the training set.
Sample selection strategies [24, 36] are the subject of much innovation, and can be broadly classified into two categories: uncertainty based, and diversity based. Uncertainty based algorithms evaluate samples based on predefined criteria, such as Entropy [28]. CEAL [34], a recent sampling method, labels the least uncertain samples with their predicted labels, possibly greatly increasing the labelled set size for no additional labelling cost. However, all of these classical tactics underperform expectations on histopathology datasets [10, 36]. Diversity sampling methods, such as k-means clustering, or VAAL [29], posit that the model cannot predict uncertainties accurately enough to be useful. Instead, these methods gather as many types of samples as possible, in order to label, and expose the model to, as many classes as possible. These methods are broadly more effective on histopathology datasets, but are more inconsistent due to their random selection nature within clusters or buckets.
**Few Shot Learning** (FSL) attempts to tackle the data scarcity problem by learning from only a handful of labelled samples (often \(<1\%\) of the dataset) in the task domain. This is often accompanied by a model well pre-trained on a large source dataset. FSL has seen great success in histopathology [27], but is still in a primitive stage with regards to label efficiency research. Active Few Shot Learning provides a potential avenue to solve this problem [22, 35], but the results in many applications, especially histopathology, have not met expectations. This may be a result of the underperformance of active learning on low annotation budgets in general.
## 3 Methods
### Motivation
In this work, we aim to investigate active few-shot learning solutions under very low annotation budgets. An example of such a scenario is computational histopathology, where annotation budgets are small, and labelling is prohibitively expensive, but unlabelled data is relatively abundant.
When constructing an effective and efficient solution for this setting, we considered many paradigms and components. We could not utilize many conventional methods, such as knowledge distillation [8], due to the lack of a well-labelled source dataset. Although it is still technically possible to use these methods, they would have to be trained on a more general image dataset, such as ImageNet [6]. Such a dataset would contain primarily natural images, and this would introduce a large domain gap between the source and target datasets, biasing the model for the downstream tasks.
Therefore, we decided to pursue active few-shot learning, a paradigm which has not yet been explored in the context of histopathology, but has the potential to operate effectively without nearly as many resources.
### Problem Formulation
Mathematically, the problem can be formulated as follows: we are given a large data set \(\mathcal{D}=\{(\textbf{x}_{j})\}_{j=1}^{N_{wl}}\) consisting of \(N_{wl}\) unlabelled samples **x** from \(K\) categories, and a very small annotation budget. In this case, data sample \(\textbf{x}\in R^{H\times W\times C}\) is an image, where \(W,H\) are the image width and height, and \(C=3\) represents the colour channels. Under the annotation budget, up to \(K\) samples can be selected from \(D\) for annotation in each active learning cycle. The target is to learn a model \(\mathcal{M}\) to predict the categories of queries.
Fig. 1 provides a visualization of MAL. Initially, two sets are constructed: one unlabelled set \(\mathcal{U}_{0}=\mathcal{D}\), and one empty set \(\mathcal{T}_{0}=\emptyset\) to record the labelled samples from the downstream AL cycles. To explore the rich information in the unlabelled set \(\mathcal{D}\), we utilize a contrastive learning algorithm to train an encoder, mapping raw histopathology images into numerical features, \(f_{x}=E(x)\). Then, the encoder is frozen and a one-layer classifier is added on top. In the \(t^{th}\) active learning cycle, a batch of unlabelled samples \(\mathcal{Q}\) is removed from \(\mathcal{U}_{t-1}\) for annotation, and the newly labelled samples, \(\{(\textbf{x}_{i},y_{i})\}_{i=1}^{K}\), are added into the labelled set, i.e. \(\mathcal{U}_{t}=\mathcal{U}_{t-1}-\{\textbf{x}_{i}\}_{i=1}^{K}\) and \(\mathcal{T}_{t}=\mathcal{T}_{t-1}\cup\{(\textbf{x}_{i},y_{i})\}_{i=1}^{K}\), where \(y_{i}\in\{0,1\}^{K}\) is the one-hot label vector for \(\textbf{x}_{i}\) over the \(K\) classes. The classifier is then optimised on \(\mathcal{T}_{t}\) and generates pseudo-labels \(P_{t}\) for all samples in \(\mathcal{U}_{t}\), preparing information for the \((t+1)^{th}\) cycle. The active learning cycles continue until the budget is exhausted.
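To make the bookkeeping between \(\mathcal{U}_{t}\), \(\mathcal{T}_{t}\), and \(P_{t}\) concrete, the following minimal Python sketch outlines one possible implementation of this cycle. It is only an illustration: the encoder, classifier, selection strategy (Sec. 3.5), and oracle are placeholder callables, not the exact components used in our experiments.

```python
import numpy as np

def mal_outer_loop(images, encoder, fit_classifier, select_query, oracle, K, n_cycles):
    """Illustrative sketch of the MAL cycle in Fig. 1 (placeholder components).

    images         : array of raw histopathology patches, shape (N, H, W, C)
    encoder        : frozen SSL encoder, maps images to features f_x = E(x)
    fit_classifier : trains the one-layer classifier on (features, labels)
    select_query   : strategy returning K indices to annotate (Sec. 3.5)
    oracle         : hypothetical labelling function, returns a class id
    """
    features = encoder(images)                    # computed once, encoder stays frozen
    unlabelled = set(range(len(features)))        # U_0 = D
    labelled_idx, labels = [], []                 # T_0 = {}
    pseudo_labels, clf = None, None               # no P_0 before the first cycle

    for t in range(n_cycles):
        query = select_query(features, sorted(unlabelled), pseudo_labels, K)
        for i in query:                           # annotate Q and move it into T_t
            unlabelled.discard(i)
            labelled_idx.append(i)
            labels.append(oracle(i))
        clf = fit_classifier(features[labelled_idx], np.array(labels))
        rest = np.array(sorted(unlabelled))       # pseudo-labels P_t for U_t
        pseudo_labels = dict(zip(rest.tolist(), clf.predict(features[rest]).tolist()))
    return clf, labelled_idx, labels
```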
### Self Supervised Learning
In the Myriad Active Learning framework, we first train an encoder with self-supervised learning on the unlabelled samples in \(\mathcal{U}_{0}\). The encoder is then frozen, and the extracted numerical features from the unlabelled samples form the basis of the downstream active learning classification problem. In particular, the initial clustering effect of the SSL encoder helps generate the initial pseudo-labels, which remedies the cold start problem in subsequent active learning. This self-supervision stage is visualized in the top portion of Fig. 1.
For our framework, we utilize the SSL algorithm known as Momentum Contrastive Learning (MoCoV2), by Chen et al. [4], which has shown state-of-the-art results on many image-based datasets, including in histopathology [5]. MoCoV2 learns positive/negative (similar/dissimilar) representations from the data, which become a list of positive/negative pairs. MoCoV2 formulates this as a dictionary lookup problem, with keys for the representations. Given an unlabelled sample \(x\in\mathcal{U}_{0}\), we perform two different data augmentations on \(x\) and denote the augmented versions as \(x^{\prime}\) and \(x^{\prime\prime}\), respectively. Then a query representation \(q=f_{x^{\prime}}=E(x^{\prime})\) and the corresponding key representation \(k^{+}=f_{x^{\prime\prime}}=E(x^{\prime\prime})\) form the positive pair in MoCoV2 training. Representations from other images constitute a set of negative samples \(k^{-}\). The loss function for encoder optimization is formulated in Eq. (1).
Figure 1: Diagram of the proposed framework Myriad Active Learning, where the pseudo-labels of the unlabelled set are updated and explored for query sample selection.
\[\mathcal{L}_{q,k^{+},\{k^{-}\}}=-\log\frac{\exp(q\cdot k^{+}/\tau)}{\exp(q\cdot k^{+}/\tau)+\sum_{k^{-}}\exp(q\cdot k^{-}/\tau)}, \tag{1}\]
where \(\tau\) is the temperature hyper-parameter. The large, dynamic dictionary utilized by MoCoV2 is also more efficient than many SSL algorithms [4], increasing its usability in real-world settings.
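As a concrete illustration of Eq. (1), the minimal NumPy sketch below evaluates the loss for a single query; it is not the authors' PyTorch/MoCoV2 implementation, and the feature dimension, queue size, and temperature value are arbitrary placeholders.

```python
import numpy as np

def info_nce_loss(q, k_pos, k_neg, tau=0.07):
    """Contrastive loss of Eq. (1) for one query.

    q     : (d,)   query representation E(x')
    k_pos : (d,)   positive key E(x'') from the second augmentation
    k_neg : (m, d) negative keys from the MoCo queue
    tau   : temperature hyper-parameter
    """
    logits = np.concatenate(([q @ k_pos], k_neg @ q)) / tau
    logits -= logits.max()                         # numerical stability
    # softmax cross-entropy with the positive key as the "label"
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# toy usage with random, L2-normalised vectors
rng = np.random.default_rng(0)
q = rng.normal(size=128); q /= np.linalg.norm(q)
k_pos = q + 0.1 * rng.normal(size=128); k_pos /= np.linalg.norm(k_pos)
k_neg = rng.normal(size=(64, 128))
k_neg /= np.linalg.norm(k_neg, axis=1, keepdims=True)
print(info_nce_loss(q, k_pos, k_neg))
```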
Note that in this study, we chose not to use more specialized SSL algorithms, such as HistoSSL by Jin et al. [15], to show that it is not necessary to use a histopathology specific pretraining algorithm for our solution to be effective.
### Pseudo-Label Generation
Once the SSL model is pretrained on the unlabelled dataset, the learned features are used as an input to a shallow network - a one layer classifier. It is important to use a shallow network in this case to reduce the chance of overfitting, as we will only have a few labelled samples per class in the low budget setting.
Initially, there is no training data for the target task, as we do not use an initial seed; therefore, the shallow network cannot provide any meaningful information to the active learning algorithm. Instead, we use K-Means clustering on the numerical features from the frozen encoder to form \(K\) clusters in the first cycle. The first query is composed of one sample selected from each of the clusters. From the second cycle onwards, the framework proceeds in a closed-loop fashion, as depicted in Fig. 1. The classifier is updated using the labelled samples in \(\mathcal{T}_{t}\), and generates the set of pseudo-labels \(P_{t}\) to use in the next cycle. The pseudo-labels \(\hat{y}\) are generated based on the predicted class probabilities for each sample \(\mathbf{x}_{j}\in\mathcal{U}_{t}\); defining \(\hat{y}_{j}\) as the most likely class label of \(\mathbf{x}_{j}\), the set of pseudo-labels \(P_{t}=\{\hat{y}_{j}\}_{j=1}^{N_{wl}}\) is formed. Excellent pseudo-labels are crucial to the function of our framework, as the pseudo-labels will inform the diversity of the active learning query and be the best mode of prevention against redundant samples.
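A minimal sketch of these two steps is given below, assuming an sklearn-style K-Means and any classifier exposing a `predict_proba` method; the choice of picking the sample closest to each centroid in the first cycle is our own illustrative assumption, since the text only specifies one sample per cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

def first_cycle_query(features, K, seed=0):
    """Cycle 1: pick one sample per K-Means cluster (here, closest to each centroid)."""
    km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(features)
    query = []
    for c in range(K):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        query.append(int(members[np.argmin(d)]))
    return query

def pseudo_labels(classifier, features, unlabelled_idx):
    """P_t: most likely class for every remaining unlabelled sample."""
    probs = classifier.predict_proba(features[unlabelled_idx])
    return {int(i): int(c) for i, c in zip(unlabelled_idx, probs.argmax(axis=1))}
```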
### Myriad Active Learning
Myriad Active Learning (MAL) provides a solution to the goal of collecting a diverse query, minimizing redundancy, and maximizing the accuracy in each cycle. To accomplish this, we utilize conventional active learning techniques in a novel way, allowing for more precise and deliberate sample selection. MAL uses a modified version of conventional uncertainty-based strategies, the pseudo-labels obtained using the methods in Sec. 3.4, and a novel sample selection strategy. First, we combine margin sampling and entropy sampling into Margin-Entropy (M-E) Sampling to allow for a more precise mapping of the uncertainties (using \(\mathcal{M}\) to denote the model):
\[\sigma_{\mathbf{m+e}}(\mathbf{x},\mathcal{M})=\frac{1}{\sigma_{\mathbf{margin} }(\mathbf{x},\mathcal{M})}+\sigma_{\mathbf{entropy}}(\mathbf{x},\mathcal{M}), \tag{2}\]
\[\sigma_{\mathbf{margin}}(\mathbf{x},\mathcal{M})=p_{\mathcal{M}}(\hat{y}_{1 }|\mathbf{x})-p_{\mathcal{M}}(\hat{y}_{2}|\mathbf{x}), \tag{3}\]
\[\sigma_{\mathbf{entropy}}(\mathbf{x},\mathcal{M})=-\sum_{i=1}^{K}p_{\mathcal{M}}(y_{i}|\mathbf{x})\log p_{\mathcal{M}}(y_{i}|\mathbf{x}), \tag{4}\]
where \(p_{\mathcal{M}}(\hat{y}|\mathbf{x})\) represents the class probability of sample \(\mathbf{x}\) under model \(\mathcal{M}\). The aforementioned classical uncertainty sampling strategies are defined as follows: **Margin Sampling**[23] gives the uncertainty based on the difference between the probabilities of a sample's most and second most likely labels (\(\hat{y}_{1}\) and \(\hat{y}_{2}\), respectively). In this case, a lower value means a larger uncertainty. **Entropy Sampling**[28] selects data based on the maximal entropy. When combined, the \(1/\sigma_{\mathbf{margin}}\) is used rather than \(\sigma_{\mathbf{margin}}\) to ensure larger values map to larger uncertainties.
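The scores in Eqs. (2)-(4) depend only on the predicted class probabilities and can be computed in a few lines; the small epsilon guarding against a zero margin or zero probability is an implementation detail we add for numerical safety, not something specified in the text.

```python
import numpy as np

def margin_entropy_uncertainty(probs, eps=1e-12):
    """probs: (N, K) predicted class probabilities for N unlabelled samples.
    Returns sigma_margin, sigma_entropy, and the combined sigma_{m+e} of Eq. (2)."""
    top2 = -np.sort(-probs, axis=1)[:, :2]                       # two largest probabilities
    sigma_margin = top2[:, 0] - top2[:, 1]                       # Eq. (3): small = uncertain
    sigma_entropy = -(probs * np.log(probs + eps)).sum(axis=1)   # Eq. (4)
    sigma_me = 1.0 / (sigma_margin + eps) + sigma_entropy        # Eq. (2)
    return sigma_margin, sigma_entropy, sigma_me
```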
Margin sampling favours samples closer to the decision boundaries [23], regardless of how many classes intersect at those boundaries. Entropy sampling favours the most uncertain samples close to the decision boundary of _as many_ classes as possible, and therefore often picks out noisy or difficult samples. Classically, the samples are selected from the most uncertain to the least uncertain until the per-cycle quota \(K\) is filled, which frequently yields redundant queries.
The reason for this redundancy is that these sampling methods only provide concrete information on where the _highest_ uncertainty samples will be found. As a result, similar samples are often labelled, potentially wasting label information. A visualization of these conventional methods can be found on the left side of Fig. 2.
Comparatively, M-E has a much more predictable structure: a low M-E uncertainty sample is likely to be found near the centre of a cluster, due to a low margin score _and_ a low entropy. Conversely, a high M-E uncertainty sample will almost certainly be found near a decision boundary. In this way, the entire list of uncertainties can be deliberately utilized. The overconfident predictions at the low end of the list serve to establish anchors, while the top end of the list samples the usual suspects from conventional methods. We utilize these characteristics by splitting up the sorted uncertainty list into \(K\) sub-arrays, so that no more than one sample will be selected from the same area. A visual representation of this is given in Fig. 2, intuitively showing the difference between MAL and the current techniques.
Combining M-E sampling with the diversity presented by the pseudo-labels, we create the novel selection algorithm Myriad Active Learning (MAL). The pseudocode for MAL can be found in Alg. 1. A summary of the algorithm
follows: MAL takes in \(\mathcal{U}_{t}\), \(P_{t}\), and \(K\) as inputs, calculates the uncertainty \(\sigma\) for every sample in \(\mathcal{U}_{t}\), and stores them in order in an array \(\alpha\). This array is argument sorted into \(\beta\), such that \(\beta[i]\) is the index of the \(i^{th}\) most uncertain sample in \(\mathcal{U}_{t}\). \(\beta\) is then split into \(K\) sub-arrays \(\beta_{0},...,\beta_{K-1}\), and one-by-one the most uncertain sample from each sub-array is added to \(\mathcal{Q}\). After each sample is selected, its corresponding pseudo-label from \(P_{t}\) is appended to \(S\), and subsequent samples added to \(\mathcal{Q}\) must have a pseudo-label which is not in \(S\). This is to ensure that a sample with a different pseudo-label is selected from each sub-array. This portion loops until \(\mathcal{Q}\) contains \(K\) samples, at which point they are sent to the oracle for labelling.
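The selection step can be sketched in a few lines of Python; variable names loosely follow the description of Alg. 1 above, and the fallback used when a sub-array contains no sample with an unused pseudo-label is our own assumption.

```python
import numpy as np

def mal_select(sigma_me, pseudo, K):
    """Select K samples: one per uncertainty sub-array, each with a distinct pseudo-label.

    sigma_me : (N,) M-E uncertainties of the current unlabelled samples
    pseudo   : (N,) pseudo-labels P_t for the same samples
    """
    beta = np.argsort(-sigma_me)           # indices, most uncertain first
    sub_arrays = np.array_split(beta, K)   # beta_0, ..., beta_{K-1}
    query, used = [], set()                # Q and S
    for sub in sub_arrays:
        for idx in sub:                    # scan the sub-array from most uncertain down
            label = int(pseudo[idx])
            if label not in used:
                query.append(int(idx))
                used.add(label)
                break
        else:                              # assumption: fall back to the top sample
            query.append(int(sub[0]))      # if no unused pseudo-label exists here
    return query
```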
## 4 Experiments
### Datasets
The proposed framework, MAL, is evaluated on two public histopathology datasets.
**NCT-CRC-HE-100K** (NCT) [16] provides 100,000 non-overlapping histopathology image patches cropped from 86 H&E stained tissue slides by the National Center for Tumor Diseases and the University Medical Center Mannheim pathology archive. All images in NCT are color-normalized with 224x224 pixels at 0.5 microns per pixel. The 9 tissue categories included in this dataset are adipose, background, debris, lymphocytes, mucus, smooth muscle, normal colon mucosa, cancer-associated stroma, and colorectal adenocarcinoma epithelium.
**Breast Cancer Histopathological Database** (BreaKHis) [31] is composed of 7,909 H&E stained breast tumor images from 82 patients, with 2,480 benign and 5,429 malignant samples. Aside from the binary benign/malignant labels, images in BreaKHis are further categorized into 8 classes: adenosis, fibroadenoma, phyllodes tumor, tubular adenoma, carcinoma, lobular carcinoma, mucinous carcinoma, and papillary carcinoma. All images in BreaKHis are 700x460 pixels in 3-channel RGB PNG format with 8-bit depth per channel. Unlike the NCT dataset, BreaKHis is a very difficult dataset since it includes images at different magnification factors (40X, 100X, 200X, and 400X), which makes computational image diagnosis more challenging.
### Experimental Protocol
In our study, NCT and BreaKHis constitute 9-category classification and 8-class diagnosis tasks, respectively. For each dataset, images are divided into an unlabelled training set and a test set with a ratio of 80:20. The samples were divided randomly, as we operate under the assumption that we do not know the underlying distribution of the datasets (i.e. we have no label information at this stage).
Figure 2: Abstracted t-SNE plot of example data (3-class). Left: 6 samples selected by classical active learning methods with entropy sampling, notably selecting many samples from the same area which are highly likely to be redundant. Right: 6 samples selected using MAL, finetuning several of the borders simultaneously, while providing anchor samples for two of the classes.
This study focuses on the scenario of active few-shot learning where a very low annotation budget is available. That is, we allow only \(n\)-shot samples for data annotation in experimentation, where \(n=1,5,10\).
The self-supervised learning model is composed of a ResNet-50 backbone and a 2-layer MLP head (2048-dimensional hidden layer with a ReLU activation). We follow the study in Chen et al. [4] for data augmentation. One exception is that for the BreaKHis dataset, a 460x460-pixel crop was used for the randomly resized images. The one-layer classification network is initialized using the Xavier uniform distribution and is optimized with ADAM [17]. The hyper-parameters, including learning rate, batch size, and training epoch number, are specified in Table 1.
### Comparison Baseline
To the best of our knowledge, no prior works address the active few-shot learning scenario in histopathology, so we will make our comparisons over several different paradigms. Specifically, we divide our comparison experiments into two main parts. First, we compare MAL to a recent few-shot learning benchmark, FHIST [27]. Here, we use classification accuracy and macro F1 scores as the performance metrics. Then, we compare our methods to popular active learning methods, summarized by Zhan et al. [36].
Note that both the few-shot learning results and active learning results reported in this paper were reproduced using the official code published by their respective authors. The active learning methods required an initial seed, as they do not utilize pretraining, so they are given \(K\) randomly chosen labelled samples initially. All results are reported with the mean and sample standard deviation from 3 seeds.
### Main Results and Discussion
**Few-Shot Learning Comparisons**: FHIST [27] is a recent few shot learning benchmark particularly designed for histopathology images. FHIST uses a large neural network pretrained on a well-annotated pathology image set, which is then transferred to and finetuned with the few-shot samples from the target dataset (NCT and BreaKHis in this case). Table 2 shows the \(n\)-shot learning results on NCT/BreaKHis with FHIST and MAL.
It is reasonable that the 1-shot accuracies are lower, as the classifier in MAL trains on only \(K\) samples (at most one sample per class, chosen via K-means), while FHIST takes advantage of the knowledge transferred from the extra annotated data. However, when more data is selected and annotated, for example at 5 and 10 shots, MAL substantially improves the test accuracy and macro F1 score of the model, and significantly outperforms FHIST. Notably, in all 10-shot cases, MAL outperforms FHIST.
Higher budget settings are also explored in Table 6. A CNN trained from scratch achieves 96.16% accuracy on NCT [9], which can be boosted to 99.76% using transfer learning [7]. MAL achieves comparable results at only 5% labels, with a test accuracy of 95.9%.
**Active Learning Comparisons**: Next, we compare MAL against other classical deep active learning methods, using the methodology described by Zhan et al. [36] on the 8-class BreaKHis dataset and the NCT dataset. As shown in Tables 3 and 4, MAL improves upon popular deep active learning methods in the few shot setting. Specifically, macro F1 score is improved by 4.3%, 14.1%, and 21.5% at 1, 5, and 10-shots, respectively for the BreaKHis dataset. For NCT, a similar trend is observed with a 27.1%, 43.3%, and 43.5% increase in macro F1 score at 1, 5, and 10-shots, respectively. Similar trends are observed for the test accuracies.
**Discussion and limitations**: Our experiments show that MAL outperforms prior FSL and AL methods on histopathology images [27, 36] in terms of accuracy, macro F1 score, and label efficiency.
There is a notable large gap in macro F1 score between MAL and the other FSL and AL methods on the NCT dataset. This is due to the nature of MAL to select a more balanced query, which results in more even performance increases across the classes, and thus a higher macro F1 score for a comparable test accuracy.
One **limitation** of MAL is relatively low performance in the 1-shot setting, which is due to the use of K-Means clustering to circumvent the lack of information in the first cycle. This could be remedied, at the cost of budget, by using an initial seed. In addition, the use of pseudo-labels in MAL is simple and straightforward. A more sophisticated design would potentially improve the performance. For example, one can assign multiple pseudo-labels to each sample, and reduce overlap on the whole set of pseudo-labels, rather than the most likely one. We leave the pursuit of these directions for future work.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline Dataset & PrT LR & PrT Batch Size & PrT Epochs & LR & Batch Size & Epochs \\ \hline NCT & 0.015 & 128 & 200 & 0.0004 & 128 & 200 \\ BreaKHis & 0.0005 & 32 & 20 & 0.0012 & 128 & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hyper-parameters (i.e. learning rate, batch size, and training epoch number) for SSL-pretraining (denoted PrT) encoder and the active learning loop. A weight decay of 0.0005 is used in all cases.
### Ablation Studies
**Ablation 1: MAL Components**: In this ablation study, we gradually remove the four essential components of MAL to measure their impact on the overall performance on the NCT dataset. The four components are pseudo-label generation, the SSL pre-trained encoder, the segmentation of the uncertainty list into sub-arrays, and margin-entropy sampling. These are denoted as pseudo-labels, SSL pretraining, Sub-array, and M-E in Table 5, respectively. In each case, the relevant feature is either omitted, or replaced by a conventional version. Specifically, where the SSL encoder is not used, a few-shot learning model pretrained on the CRC-TP dataset (280,000 images, 7 classes of colorectal cancer) [14] is used; and when M-E sampling is not used, the conventional entropy sampling is used instead. As seen in Table 5, when each piece of MAL is removed, the test accuracies drop significantly at 5 and 10 shots. The 1-shot performance is once again hampered by the lack of target-dataset information, but generally decreases as parts are removed. This high variance can be attributed to the lack of knowledge in the early cycles: in the first cycle, the algorithm has no information on the target dataset, so the selected samples are generally of low quality and do not accurately represent the underlying distribution of the data.
**Ablation 2: Higher Budget Settings**: In this ablation, we relax the few-shot condition and investigate the performance of MAL with a higher annotation budget and report its performance in Table 6. For a reference, a fully supervised CNN-based classifier trained on the entire NCT dataset achieves 96.16% accuracy [9, 32]. That is, MAL is able to achieve similar performance with only 5% annotation by always selecting the most informative samples to supplement model learning.
## 5 Conclusions and Future work
In this work, we proposed Myriad Active Learning (MAL), a framework to efficiently and effectively increase the classification accuracy of an active few shot learning model by utilizing unlabelled data. MAL exploits the nature of uncertainty-based active learning sample selection by combining classical uncertainty estimation techniques for a more precise and deliberate query selection strategy. Pseudo-labels generated by an SSL encoder and classifier informed the active learning queries, allowing each sample to be "aware" of the others for a consistently more diverse and less redundant query. MAL produced excellent results, achieving classification accuracy comparable to that of a fully supervised model on the NCT dataset with only 5% annotation.
\begin{table}
\begin{tabular}{l|c|c c c|c c c} \hline \hline & & \multicolumn{3}{c|}{Accuracy} & \multicolumn{3}{c}{F1 Score} \\ Datasets & Method \(\downarrow\) & 1-shot & 5-shot & 10-shot & 1-shot & 5-shot & 10-shot \\ \hline \multirow{3}{*}{NCT} & MAL & 48.7\(\pm\)6.7 & **77.9\(\pm\)\({}_{2.0}\)** & **87.1\(\pm\)\({}_{0.5}\)** & **51.2\(\pm\)\({}_{9.0}\)** & **79.5\(\pm\)\({}_{2.3}\)** & **88.0\(\pm\)\({}_{0.4}\)** \\ & FHIST [27] & **56.2\(\pm\)\({}_{10.8}\)** & 75.4\(\pm\)\({}_{8.1}\) & 80.9\(\pm\)\({}_{7.2}\) & 30.3\(\pm\)\({}_{6.2}\) & 41.8\(\pm\)\({}_{4.5}\) & 44.9\(\pm\)\({}_{4.0}\) \\ \hline \multirow{3}{*}{BreaKHis} & MAL & **33.9\(\pm\)\({}_{6.6}\)** & 51.6\(\pm\)\({}_{4.0}\) & **65.1\(\pm\)\({}_{1.7}\)** & 16.2\(\pm\)\({}_{5.5}\) & **29.7\(\pm\)\({}_{3.1}\)** & **37.2\(\pm\)\({}_{0.9}\)** \\ & FHIST [27] & 33.8\(\pm\)\({}_{7.36}\) & **53.1\(\pm\)\({}_{8.54}\)** & 62.0\(\pm\)\({}_{7.44}\) & **18.0\(\pm\)\({}_{4.1}\)** & 29.1\(\pm\)\({}_{4.9}\) & 34.2\(\pm\)\({}_{4.2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test accuracies (%) and Macro F1 scores (%) on the NCT and BreaKHis datasets in the few shot learning setting. FHIST by Shakeri et al. [27] is a recent study presenting a few-shot learning benchmark on histopathology images.
\begin{table}
\begin{tabular}{l|c|c c c c c c c} \hline \hline Dataset & Method [36] \(\rightarrow\) & MAL & Rand. & Margin & Entropy & VarRatio & CEAL & KMeans \\ \hline \multirow{3}{*}{NCT} & 1-shot & **48.7\(\pm\)\({}_{6.7}\)** & 29.0\(\pm\)\({}_{3.3}\) & 32.5\(\pm\)\({}_{1.5}\) & 24.5\(\pm\)\({}_{0.9}\) & 27.0\(\pm\)\({}_{5.0}\) & 30.2\(\pm\)\({}_{2.2}\) & 28.4\(\pm\)\({}_{7.7}\) \\ & 5-shot & **77.9\(\pm\)\({}_{2.0}\)** & 35.7\(\pm\)\({}_{4.1}\) & 39.9\(\pm\)\({}_{4.5}\) & 33.7\(\pm\)\({}_{2.4}\) & 32.0\(\pm\)\({}_{7.7}\) & 31.7\(\pm\)\({}_{10.5}\) & 33.7\(\pm\)\({}_{3.9}\) \\ & 10-shot & **87.1\(\pm\)\({}_{0.5}\)** & 47.7\(\pm\)\({}_{3.8}\) & 42.5\(\pm\)\({}_{2.3}\) & 38.0\(\pm\)\({}_{10.0}\) & 42.7\(\pm\)\({}_{10.0}\) & 37.9\(\pm\)\({}_{5.0}\) & 33.7\(\pm\)\({}_{3.0}\) \\ \hline \multirow{3}{*}{BreaKHis} & 1-shot & 33.9\(\pm\)\({}_{6.6}\) & 25.6\(\pm\)\({}_{13.4}\) & 38.3\(\pm\)\({}_{2.8}\) & 31.9\(\pm\)\({}_{7.3}\) & 44.3\(\pm\)\({}_{0.5}\) & 39.5\(\pm\)\({}_{2.1}\) & **43.7\(\pm\)\({}_{0.6}\)** \\ & 5-shot & **51.6\(\pm\)\({}_{4.0}\)** & 44.5\(\pm\)\({}_{0.9}\) & 44.6\(\pm\)\({}_{0.8}\) & 44.6\(\pm\)\({}_{6.3}\) & 45.7\(\pm\)\({}_{2.2}\) & 43.1\(\pm\)\({}_{2.9}\) & 42.6\(\pm\)\({}_{1.4}\) \\ \cline{1-1} & 10-shot & **65.1\(\pm\)\({}_{1.7}\)** & 43.4\(\pm\)\({}_{0.9}\) & 49.1\(\pm\)\({}_{0.3}\) & 46.1\(\pm\)\({}_{2.2}\) & 48.9\(\pm\)\({}_{0.8}\) & 44.2\(\pm\)\({}_{2.1}\) & 46.2\(\pm\)\({}_{2.1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test accuracies (%) for the NCT and 8-class BreaKHis datasets in the active learning setting. MAL is compared to conventional deep learning methods evaluated in Zhan et al. [36].
MAL also outperforms current few-shot learning methods at 5 and 10 shots, and outperforms common active learning methods in the limited budget setting.
One may notice that in our few-shot active learning paradigm, an SSL encoder is trained on the unlabelled data for the initial pseudo-label generation, avoiding the cold start in active learning. Recently, we have witnessed the surge of foundation models. We hypothesize that the knowledge in these foundation models would be another good source of pseudo-labels, and we will validate this hypothesis in future studies.
|
2303.01525 | Complex semiclassical theory for non-Hermitian quantum systems | Non-Hermitian quantum systems exhibit fascinating characteristics such as
non-Hermitian topological phenomena and skin effect, yet their studies are
limited by the intrinsic difficulties associated with their eigenvalue
problems, especially in larger systems and higher dimensions. In Hermitian
systems, the semiclassical theory has played an active role in analyzing
spectrum, eigenstate, phase, transport properties, etc. Here, we establish a
complex semiclassical theory applicable to non-Hermitian quantum systems by an
analytical continuation of the physical variables such as momentum, position,
time, and energy in the equations of motion and quantization condition to the
complex domain. Further, we propose a closed-orbit scheme and physical meaning
under such complex variables. We demonstrate that such a framework
straightforwardly yields complex energy spectra and quantum states, topological
phases and transitions, and even the skin effect in non-Hermitian quantum
systems, presenting an unprecedented perspective toward nontrivial
non-Hermitian physics, even with larger systems and higher dimensions. | Guang Yang, Yongkang Li, Yongxu Fu, Zhenduo Wang, Yi Zhang | 2023-03-02T19:00:03Z | http://arxiv.org/abs/2303.01525v3 | # Complex semiclassical theory for non-Hermitian quantum systems
###### Abstract
Non-Hermitian quantum systems exhibit various novel characteristics, such as exotic topological phenomena and non-Hermitian skin effect, yet their exponential localization and non-orthogonal eigenstates hamper analysis and calculations. The potent semiclassical theory, which offers physical pictures and quantitative results for various Hermitian systems, remains limited in non-Hermitian systems. Here, we establish a complex semiclassical theory for non-Hermitian quantum systems as a straightforward analytical continuation of their quantization conditions and physical variables, such as momentum, position, time, and energy, with consistent physical interpretations as wave packets with additional factors. Generally applicable to both continuous and lattice models, such a complex semiclassical theory yields accurate complex energy spectra (e.g., Landau levels) and quantum states, presenting a helpful bedrock and an alternative perspective towards important physics such as nontrivial topology, skin effect, transport, dissipative behaviors, etc.
_Introduction_ --The non-Hermitian skin effect (NHSE), where most eigenstates localize at the open boundary, reveals that the physics of non-Hermitian quantum systems may differ significantly from Hermitian systems [1; 2; 3; 4; 5; 6; 7; 8; 9]. Examples of non-Hermitian origins include dissipation in quantum optics [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20], open systems in cold atom systems [21; 22; 23; 24; 25], and finite quasiparticle lifetime in condensed matters with interactions or disorder [26; 27; 28; 29], etc. Generalizing the recent successes of topological materials and theories to non-Hermitian quantum systems, physicists have discovered a series of non-Hermitian topological phases [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. These have brought non-Hermitian quantum systems to the research frontier.
The semiclassical theory [47; 48; 49] is a conventional yet powerful framework capable of offering quantitative property explorations and explicit physical pictures for quantum materials and models, such as the quantum Hall effect in two and three dimensions [50; 51; 52; 53; 54; 55; 56], the quantum anomalous [57; 58; 59] and spin Hall effect [60; 61; 62; 63], and other topological phenomena. Given the emerging difficulties in non-Hermitian quantum systems due to their eigenstates' non-orthogonality and exponential amplification [7; 64; 39], such a semiclassical theory would be helpful, especially on large systems. Nevertheless, despite various semiclassical studies with Berry curvature [65], external electric field [66], real spectra [67], etc., a semiclassical theory for non-Hermitian quantum systems in generic electromagnetic fields is still lacking.
In this letter, we present and study a complex semiclassical theory applicable to non-Hermitian quantum systems. In particular, we generalize the conventional equations of motion (EOMs) and the quantization conditions for Hermitian quantum systems via an analytical continuation of the physical variables, e.g., (the center of mass) momentum and position. At first glimpse, it may seem odd to assign complex values to such quantities with classical implications; however, we show a consistent physical interpretation in terms of wave packets with augmented factors naturally connected to non-unitary dynamics. Complex energy levels emerge wherever exists closed orbits with satisfactory quantization conditions. We also establish a straightforward strategy to search and establish closed orbits in the complex domain, as well as approximations for the corresponding quantum eigenstates. We demonstrate our conclusions with continuous and lattice model examples, where the complex semiclassical theory yields consistent results with their quantum benchmarks and breeds insights into non-Hermitian topology, skin effect, dissipative physics, and more.
_Complex semiclassical theory_ --Conventionally, the semiclassical theory for a Hermitian quantum system begins with the EOMs of a wave packet following its quantum dynamics [49]:
\[\dot{\mathbf{r}} =\mathbf{v}=\partial\epsilon/\partial\mathbf{p},\] \[\dot{\mathbf{p}} =\mathbf{F}=-\partial\epsilon/\partial\mathbf{r}, \tag{1}\]
where energy \(\epsilon\) depends on the momentum \(\mathbf{p}\) and position \(\mathbf{r}\) of the wave packet's center of mass (COM). Such a wave-packet description remains a valid approximation as long as the length scale of energy variation surpasses its extent, e.g., when the external electromagnetic fields are weak. Once the resulting trajectory forms a closed orbit that satisfies the Bohr-Sommerfeld quantization condition:
\[\oint\mathbf{p}\cdot d\mathbf{r}=(n+\gamma)h,n\in\mathbb{Z}, \tag{2}\]
we have achieved a valid state at the corresponding energy level. For simplicity, we neglect the Berry curvature [49; 65], and \(\gamma=1/2\). As the solutions and analysis of the EOMs are essentially classical, the semiclassical theory offers excellent efficiency and transparency. For instance, the cycles perpendicular to a magnetic field vividly depict various topological phenomena, such as the quantum Hall effect [50; 51; 52; 53; 54; 55; 56].
Commonly, the variables in the EOMs undertake real values, as they are classical quantities or expectation values of Hermitian observables. Interestingly, to apply the semiclassical theory on non-Hermitian quantum systems, we need to analytically continue the physical variables
such as \(\mathbf{r}\), \(\mathbf{p}\), \(t\), and \(\epsilon\) in Eqs. 1 and 2 to complex values. Hence the name complex semiclassical theory. The WKB approximation offers a special case with \(\mathbf{p}\in\mathbb{C}\) and \(\mathbf{r}\in\mathbb{R}\). We will discuss the interpretation of complex variables, derivations, and various consequences later.
Given the non-unitary nature of the time evolution in non-Hermitian quantum systems, it is natural to encompass complex \(\epsilon\) and \(t\). Under a specific initialization, we map out the trajectory following Eq. 1, where the complex energy \(\epsilon\) remains conserved. Once we have located a closed orbit - a problem in differential equations known as the stable orbit [68; 69; 70; 71; 72; 73; 74; 75], we evaluate the complex integral \(\oint\mathbf{p}\cdot d\mathbf{r}\), and, according to Eq. 2, deem whether \(\epsilon\) and the underlying states are physical.
However, this is easier said than done: the complex nature of \(t\) may render the search for a closed orbit aimless and rather challenging, especially given that the complex period \(T\) is initially unknown and may differ between the orbits. We propose the following strategy: (1) upon an open cycle that fails to form a closed orbit (yet reasonably approaches the initial point), we can evaluate the differences \(\Delta x\) and \(\Delta p\) between initial and current variables; (2) given \(\dot{x}\) and \(\dot{p}\) of the current step, we choose the complex phase of \(dt\), so that \(dx\) and \(dp\) reduce \(\Delta x\) and \(\Delta p\); (3) the variables at the next step inch towards initialization, and so on until we reach a closed orbit. As complex integrals do not depend on their particular paths (but instead on their windings around certain "fixed points"), such make-shift closed orbits provide equal bases for evaluating vital quantities, such as the period \(T\) and \(\int p\cdot dx\) accumulated over the cycle. This approach does not require fine-tuning in initialization or perturbative treatment of the non-Hermitian terms. Nevertheless, once \(T\) becomes available, there lie cycles along \(t=\lambda T\), \(\lambda\in[0,1]\), which are commonly smoother and more aesthetic.
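A minimal numerical sketch of this strategy for a generic complex energy relation \(\epsilon(x,p)\) is given below. The trajectory is first advanced with plain real time steps; once it comes reasonably close to the starting point, the phase of \(dt\) is rotated so that each step reduces the remaining gap, here combined into a single complex number proportional to \(x+ip\), as in the toy-model example below. The step size, switching criterion, and tolerances are illustrative choices rather than values from our calculations.

```python
import numpy as np

def close_orbit(dxdt, dpdt, x0, p0, dt=1e-3, n_max=2_000_000,
                approach=0.05, tol=1e-8, warmup=1000):
    """Integrate the complex EOMs and steer the trajectory back onto (x0, p0).

    dxdt, dpdt : callables returning dx/dt and dp/dt for complex (x, p)
    Returns the complex period T and the accumulated action  oint p dx.
    """
    x, p = complex(x0), complex(p0)
    T, action = 0j, 0j
    closing = False
    for step in range(n_max):
        vx, vp = dxdt(x, p), dpdt(x, p)
        gap = (x0 - x) + 1j * (p0 - p)            # remaining gap in a = x + i p
        if not closing and step > warmup and abs(gap) < approach:
            closing = True                        # near the start: begin steering
        if closing:
            if abs(gap) < tol:
                break                             # orbit closed to the tolerance
            dz = vx + 1j * vp                     # current step direction in a
            # rotate dt so the step points along the gap, without overshooting
            dt_c = min(dt, abs(gap / dz)) * (gap / dz) / abs(gap / dz)
        else:
            dt_c = dt                             # plain real time steps
        action += p * vx * dt_c                   # p dx = p (dx/dt) dt
        x, p, T = x + vx * dt_c, p + vp * dt_c, T + dt_c
    return T, action
```

For the quadratic model of Eq. (3), for instance, one would pass `dxdt = lambda x, p: 2*beta*p + eta*x` and `dpdt = lambda x, p: -(2*alpha*x + eta*p)`; the quantization condition then amounts to checking whether the returned action equals \((n+1/2)\cdot 2\pi\) in units where \(\hbar=1\).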
_Toy model examples_ --First, let's consider a quadratic model in 1D [76]:
\[\hat{H}=\alpha\hat{x}^{2}+\beta\hat{p}^{2}+\eta(\hat{x}\hat{p}+\hat{p}\hat{x} )/2, \tag{3}\]
where \(\alpha,\beta,\eta\in\mathbb{C}\), and \(\hat{H}\) is non-Hermitian in general. We set \(\hbar=1\) here and afterward. Quantum mechanically, we can obtain the energy spectrum \(\epsilon_{n}=(n+1/2)\omega\), \(\omega=\sqrt{4\alpha\beta-\eta^{2}}\) is generally complex-valued, \(n\in\mathbb{Z}\), and the corresponding eigenstates \(\hat{a}|0\rangle=0\), \(|n+1\rangle=\hat{b}^{\dagger}|n\rangle/\sqrt{n+1}\) with the modified ladder operators \([\hat{a},\hat{b}^{\dagger}]=1\):
\[\hat{a}=\frac{2\alpha\hat{x}+(\eta+i\omega)\hat{p}}{2\sqrt{\alpha\omega}}, \hat{b}^{\dagger}=\frac{2\alpha\hat{x}+(\eta-i\omega)\hat{p}}{2\sqrt{\alpha \omega}}, \tag{4}\]
which we compare with the complex semiclassical theory.
Semiclassically, we begin with the energy relation \(\epsilon=\alpha x^{2}+\beta p^{2}+\eta xp\), and the resulting EOMs following Eq. 1 are exactly solvable [77]:
\[x(t) =z_{1}e^{i\omega t}+z_{2}e^{-i\omega t},\] \[p(t) =\frac{(i\omega-\eta)z_{1}}{2\beta}e^{i\omega t}-\frac{(i\omega+ \eta)z_{2}}{2\beta}e^{-i\omega t}, \tag{5}\]
where \(z_{1},z_{2}\in\mathbb{C}\) are complex constants determined by the initial conditions. These orbits are closed with a period of \(2\pi/\omega\) for \(\omega t\in\mathbb{R}\) (Fig. 1a); for a generic \(t\), however, \(e^{\pm i\omega t}\) introduces an exponential amplitude, and the trajectories end up as open spirals (Fig. 1b). Fortunately, after the trajectory reaches a good spot, we can achieve a closed orbit by choosing the complex phase of \(dt\) so that \(da\) is along \(a(0)-a(t)\), \(a=(x+ip)/\sqrt{2}\) for the following steps (Fig. 1c).
We can then evaluate the physical quantities over these closed orbits: \(\epsilon=z_{1}z_{2}\omega^{2}/\beta\) and \(\oint p\cdot dx=2\pi z_{1}z_{2}\omega/\beta\). The quantization condition requires \(z_{1}z_{2}=(n+1/2)\beta/\omega\), which in turn indicates that the valid energy levels are \(\epsilon_{n}=(n+1/2)\omega\), consistent with the quantum benchmark. We include further examples, such as continuous models with higher-order potential and kinetic terms, in the supplemental materials [78].
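These closed-form results are easy to verify numerically; the short sketch below samples the analytic orbit of Eq. (5) over one period (with arbitrary illustrative parameters) and compares \(\oint p\cdot dx\) and \(\epsilon\) against \(2\pi z_{1}z_{2}\omega/\beta\) and \(z_{1}z_{2}\omega^{2}/\beta\).

```python
import numpy as np

alpha, beta, eta = 1 + 0.5j, 1.0 + 0.0j, 0.7 + 0.3j     # illustrative parameter values
omega = np.sqrt(4 * alpha * beta - eta ** 2)
z1, z2 = 0.4 + 0.2j, 1.1 - 0.3j                         # arbitrary orbit constants

s = np.linspace(0.0, 1.0, 200001)                       # parametrise t so that omega*t = 2*pi*s
phase = np.exp(2j * np.pi * s)
x = z1 * phase + z2 / phase
p = ((1j * omega - eta) * z1 * phase - (1j * omega + eta) * z2 / phase) / (2 * beta)

action = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(x))    # oint p dx over one closed cycle
energy = alpha * x[0] ** 2 + beta * p[0] ** 2 + eta * x[0] * p[0]

print(action, 2 * np.pi * z1 * z2 * omega / beta)       # the two should agree
print(energy, z1 * z2 * omega ** 2 / beta)              # and so should these
```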
_Derivation and interpretation_ --Given a quantum state \(|\Phi\rangle\) at \(t=0\) and a non-Hermitian Hamiltonian \(\hat{H}\), we define the time dependence of a Hermitian observable \(\hat{O}\)'s expectation value as:
\[\langle\hat{O}\rangle(t)=\langle\Phi|U^{-1}(t)\hat{O}U(t)|\Phi\rangle, \tag{6}\]
where \(U(t)=\exp(-i\hat{H}t)\). An alternative to Eq. 6 is \(\langle\Phi|U^{\dagger}(t)\hat{O}U(t)|\Phi\rangle\), which guarantees the expectation values to be real and coincides with Eq. 6 for Hermitian \(\hat{H}\).
Figure 1: Following the EOMs of the model in Eq. 3, the semiclassical trajectories form (a) a closed loop for \(\omega t\in\mathbb{R}\), or (b) an open spiral for \(t\in\mathbb{R}\), where we may (c) close the loop by adjusting \(dt\)’s phase. (d) The corresponding trajectories in the complex \(t\) plane show that the different closed orbits with identical winding (around its fixed point \(x=p=0\)) are equivalent and eventually reach the same period \(T\), which is complex given the non-Hermicity and the complex nature of \(\omega\). \(\alpha=1+0.5i\), \(\beta=1\), and \(\eta=0.7+0.3i\).
In comparison, our choice of Eq. 6 has the following advantages: (1) \(\langle\Phi|U^{-1}(t)\cdot{\bf 1}\cdot U(t)|\Phi\rangle=1\) remains constant, thus normalization does not evolve in time. (2) Intimately related to the quantum-classical crossover and the conservation laws, the Heisenberg picture \(\dot{\hat{O}}=-i[\hat{O},\hat{H}]/\hbar\) remains valid. (3) Eq. 6 is consistent with the expectation value \(\langle nL|\hat{O}|nR\rangle\) and the time evolution operator \(U(t)=\sum_{n}e^{-iE_{n}t}|nR\rangle\langle nL|\) under the biorthogonal basis \(\langle nL|mR\rangle=\delta_{nm}\)[79]. However, there is a trade-off: given a non-Hermitian \(\hat{H}\), \(U^{-1}(t)\hat{O}U(t)\) is no longer necessarily Hermitian even for a Hermitian \(\hat{O}\); therefore, the expectation values \(\langle\hat{O}\rangle\) corresponding to the physical variables cannot remain real-valued and need analytical continuation to the complex domains. We further discuss our conventions in Ref. [78].
The semiclassical theory originates from quantum states' wave-packet representation. We can describe a wave packet with center-of-mass (COM) position \(x_{0}\) and momentum \(p_{0}\) as \(|x_{0},p_{0}\rangle=\hat{W}(x_{0},p_{0})|0\rangle\), where \(\hat{W}(x_{0},p_{0})=\exp[i(p_{0}\hat{x}-x_{0}\hat{p})]\) is a translation operator and \(|0\rangle\) is a wave packet with zero average momentum and position. The generalization to \(x_{0},p_{0}\in\mathbb{C}\) is valid and straightforward:
\[\langle x_{0},p_{0}|\hat{x}|x_{0},p_{0}\rangle =x_{0},\] \[\langle x_{0},p_{0}|\hat{p}|x_{0},p_{0}\rangle =p_{0}, \tag{7}\] \[\langle x_{0},p_{0}|x_{0},p_{0}\rangle =1,\]
establishing a mapping between a wave packet \(|x_{0},p_{0}\rangle\) and its complex-valued physical variables \((x_{0},p_{0})\).
What is the physical interpretation of such a wave packet? Defining \(x_{0}^{\prime}\) and \(x_{0}^{\prime\prime}\) (\(p_{0}^{\prime}\) and \(p_{0}^{\prime\prime}\)) as the real and imaginary parts of \(x_{0}\) (\(p_{0}\)), we note:
\[|x_{0},p_{0}\rangle =\exp(-\frac{x_{0}^{2}+p_{0}^{2}}{4})\sum_{n=0}^{\infty}\frac{(x_{0}+ip_{0})^{n}}{2^{n/2}\sqrt{n!}}|n\rangle\] \[=A(x_{0},p_{0})|x_{r},p_{r}\rangle, \tag{8}\]
is a wave packet with real-valued COM position \(x_{r}=x_{0}^{\prime}-p_{0}^{\prime\prime}\) and momentum \(p_{r}=p_{0}^{\prime}+x_{0}^{\prime\prime}\), but with an additional factor:
\[A(x_{0},p_{0})=\exp(\frac{x_{r}^{2}+p_{r}^{2}-x_{0}^{2}-p_{0}^{2}}{4})\in \mathbb{C}. \tag{9}\]
We can also convert the real-valued COM variables and the overall factor into complex \(x_{0}\) and \(p_{0}\), as both parameterizations employ four real-valued parameters. The variable modulus of \(|x_{0},p_{0}\rangle\) is a direct signature of non-conserved probability in non-Hermitian quantum systems.
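The dictionary between the two parameterizations is a one-line computation; a small sketch implementing the relations below Eq. (8) and the factor of Eq. (9), with two illustrative initializations that share the same \((x_{r},p_{r})\):

```python
import numpy as np

def real_packet_and_factor(x0, p0):
    """Map complex COM variables (x0, p0) to the real wave-packet data (x_r, p_r, A)."""
    x0, p0 = complex(x0), complex(p0)
    x_r = x0.real - p0.imag
    p_r = p0.real + x0.imag
    A = np.exp((x_r ** 2 + p_r ** 2 - x0 ** 2 - p0 ** 2) / 4)   # Eq. (9)
    return x_r, p_r, A

# two conventions for the same real wave packet (x_r, p_r) = (2, 1)
print(real_packet_and_factor(2, 1))
print(real_packet_and_factor(2.3 + 0.3j, 0.7 + 0.3j))
```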
Next, we analyze the dynamics of such a wave packet. After evolution under a non-Hermitian Hamiltonian \(\hat{H}(t)\) for an infinitesimal time \(\tau\to 0\), the wave packet's COM becomes:
\[\langle\hat{x}\rangle(t+\tau) =\langle x(t),p(t)|e^{i\hat{H}\tau}\hat{x}e^{-i\hat{H}\tau}|x(t), p(t)\rangle\] \[\approx x(t)-i\tau\langle x(t),p(t)|[\hat{x},\hat{H}]|x(t),p(t)\rangle\] \[=x(t)+\tau\langle\frac{\partial\hat{H}}{\partial\hat{p}}\rangle(t), \tag{10}\]
\[\langle\hat{p}\rangle(t+\tau) =p(t)-\tau\langle\frac{\partial\hat{H}}{\partial\hat{x}}\rangle(t),\]
which characterizes the wave packet \(|x(t+\tau),p(t+\tau)\rangle\), if it survives to \(t+\tau\). Therefore, we establish the evolution of the complex variables:
\[\dot{x}(t) =\left[x(t+\tau)-x(t)\right]/\tau\] \[=\langle x(t),p(t)|\left(\partial\hat{H}/\partial\hat{p}\right)|x(t),p(t)\rangle\] \[=\langle 0|\frac{\partial\hat{H}}{\partial\hat{p}}\left(\hat{x}+x(t),\hat{p}+p(t)\right)|0\rangle\approx\partial\epsilon/\partial p, \tag{11}\] \[\dot{p}(t) =\langle x(t),p(t)|\left(-\partial\hat{H}/\partial\hat{x}\right)|x(t),p(t)\rangle\] \[=-\langle 0|\frac{\partial\hat{H}}{\partial\hat{x}}\left(\hat{x}+x(t),\hat{p}+p(t)\right)|0\rangle\approx-\partial\epsilon/\partial x,\]
which leads to the EOMs in Eq. 1 with complex variables. In the last steps, we have neglected the contributions from \(\langle 0|\hat{x}^{n}\hat{p}^{m}|0\rangle\) in the expansions of \(\partial\hat{H}/\partial\hat{p}\) and \(\partial\hat{H}/\partial\hat{x}\), which is valid when the extent of the orbit, such as \(x(t)\) and \(p(t)\), predominantly surpasses that of the wave packet, such as \(\langle 0|\hat{x}^{2}|0\rangle\) - the standard semiclassical prerequisites. The approximation becomes exact for a quadratic \(\hat{H}\), since \(\partial\hat{H}/\partial\hat{p}\) and \(\partial\hat{H}/\partial\hat{x}\) are linear in \(\hat{x}\) and \(\hat{p}\), where Eq. 7 applies.
The complex-variable description \(|x_{0},p_{0}\rangle\) of the wave packet is redundant: multiple choices of \(x_{0},p_{0}\in\mathbb{C}\) describe a wave packet at the same \(x_{r},p_{r}\in\mathbb{R}\), just with different \(A\) factors. Nevertheless, the orbits following different conventions describe physics consistently. For example, we demonstrate the universal evolution of \(x_{r},p_{r}\) and \(A\) for various initializations that differ merely in their conventions in the supplemental materials [78]. Likewise, the orbits in Eq. 5 that differ by a conformal mapping \(z_{1}e^{i\omega t}\to z_{1}^{\prime}e^{i\omega t}\) are equivalent to each other. Note that \(z_{2}e^{-i\omega t}\to z_{2}^{\prime}e^{-i\omega t}\) under the constraint of the quantization condition \(z_{1}z_{2}=z_{1}^{\prime}z_{2}^{\prime}=(n+1/2)\beta/\omega\).
The complex semiclassical theory also allows us to approximate the quantum states from the corresponding closed orbits. For instance, let's consider the orbits in Eq. 5, and set \(2\alpha=\omega-i\eta\) for simplicity: see the second column of Table 1 for the resulting quantum eigenstates. Semiclassically, we can take the limit \(z_{2}\to\infty\) without loss of generality, then safely neglect \(z_{1}\to 0\) for an isotropic orbit and wave packet: \(x(t)=z_{2}e^{-i\omega t}\), \(p(t)=z_{2}e^{-i\omega t}\alpha/i\beta\), and the following COM location and \(A\) factor:
\[\frac{x(t)+ip(t)}{\sqrt{2}} = \frac{x(t)}{\sqrt{2}}(1+\frac{\alpha}{\beta}),\] \[-\frac{x^{2}(t)+p^{2}(t)}{4} = \frac{x^{2}(t)}{4}(\frac{\alpha^{2}}{\beta^{2}}-1). \tag{12}\]
Averaging the wave packets \(|x(t),p(t)\rangle\) with the geometric phase \(\exp(in\omega t)\) over the closed orbit, we obtain [80]:
\[|n\rangle \propto \int_{0}^{T}dt\cdot\exp(in\omega t)\exp[\frac{x^{2}}{4}(\frac{ \alpha^{2}}{\beta^{2}}-1)+\frac{x}{\sqrt{2}}(1+\frac{\alpha}{\beta})\hat{a}_{ 0}^{\dagger}]|0\rangle_{0} \tag{13}\] \[\propto \oint dx\cdot x^{-n-1}e^{\kappa x^{2}/2+x\hat{a}_{0}^{\dagger}} |0\rangle_{0},\]
where \(\kappa=\eta/i\omega=(\alpha-\beta)/(\alpha+\beta)\), and \(|0\rangle_{0}\) is an isotropic wave packet following \(\hat{a}_{0}|0\rangle_{0}=0\), \(\hat{a}_{0}=(\hat{x}+i\hat{p})/\sqrt{2}\). We have also renormalized \((1+\alpha/\beta)x/\sqrt{2}\to x\) in the second line. Following the residual theorem, the contour integral returns the \(x^{n}\)'s coefficient in the expansion of \(\exp(x\hat{a}^{\dagger}+\kappa x^{2}/2)\), which compare consistently with the quantum benchmarks as the examples in Table 1. However, an analytical solution for the target quantum state may not be readily available and generally need numerical calculations, e.g., integration under the real-space basis for the wave function \(\psi_{n}(x)\), and approximations on the wave-packet shapes; see examples in the supplemental materials [78].
_Lattice model examples_ --The complex semiclassical theory also applies to lattice models, for which we consider the following 2D non-Hermitian model for illustration:
\[\hat{H}=\sum_{\mathbf{r}}\eta c_{\mathbf{r}-\hat{x}+\hat{y}}^{\dagger}c_{ \mathbf{r}}-c_{\mathbf{r}+\hat{x}}^{\dagger}c_{\mathbf{r}}-c_{\mathbf{r}-\hat {x}}^{\dagger}c_{\mathbf{r}}-Vc_{\mathbf{r}+\hat{y}}^{\dagger}c_{\mathbf{r}}-Vc _{\mathbf{r}-\hat{y}}^{\dagger}c_{\mathbf{r}}, \tag{14}\]
with an external magnetic field of \(Q/2\pi\) flux quantum per lattice plaquette. Using the Landau gauge, we can re-express the model as a non-Hermitian 1D lattice model:
\[\hat{H}=\sum_{x}(\eta e^{iQx}-1)c_{x}^{\dagger}c_{x+1}-c_{x+1}^{ \dagger}c_{x}\] \[-2V\cos(Qx)c_{x}^{\dagger}c_{x}, \tag{15}\]
which we diagonalize for its quantum spectrum and eigenstates and compare with the complex semiclassical theory. We note that the diagonalization of non-Hermitian Hamiltonians may suffer accumulated numerical errors [7; 39; 64], especially for relatively large systems, e.g., sizable magnetic unit cells in a small magnetic field, where the semiclassical theory excels. Also, these model examples are beyond the framework of non-Bloch band theories [1; 3; 5].
Semiclassically, the energy relation and EOMs are:
\[\epsilon = -2\cos p-2V\cos Qx+\eta e^{i(Qx+p)},\] \[\dot{x} = 2\sin p+i\eta e^{i(Qx+p)},\] \[\dot{p} = -Q[2V\sin Qx+i\eta e^{i(Qx+p)}]. \tag{16}\]
We track the complex \((x,p)\) space trajectories via finite-time steps and evaluate \(\oint p\cdot dx\) over the closed orbits [69; 70; 71; 72; 73; 74; 75], generally obtainable using our strategy mentioned earlier. In Fig. 2, we summarize \(\oint p\cdot dx\) versus the complex energy \(\epsilon\) over nontrivial closed orbits. The real and imaginary parts of the quantization condition \(\oint p\cdot dx=(n+1/2)h\) offer two constraints and a discrete series of energy levels \(\epsilon_{n}\) in the complex \(\epsilon\) plane, which compare consistently with the quantum benchmark. We can also obtain approximate quantum eigenstates corresponding to the qualified orbits. Further details and additional lattice-model examples are in the supplemental materials [78].
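For concreteness, the right-hand sides of Eq. (16) can be coded directly and handed to an orbit-closing routine such as the `close_orbit` sketch given earlier; the parameter values below are those of Fig. 2, while the commented initial condition is an arbitrary illustration.

```python
import numpy as np

V, eta, Q = 1.0, 0.1j, 0.02                     # parameter values quoted in Fig. 2

def dxdt(x, p):
    """dx/dt = d(epsilon)/dp for the lattice energy relation of Eq. (16)."""
    return 2 * np.sin(p) + 1j * eta * np.exp(1j * (Q * x + p))

def dpdt(x, p):
    """dp/dt = -d(epsilon)/dx for the same energy relation."""
    return -Q * (2 * V * np.sin(Q * x) + 1j * eta * np.exp(1j * (Q * x + p)))

def energy(x, p):
    """Conserved complex energy along the semiclassical trajectory."""
    return -2 * np.cos(p) - 2 * V * np.cos(Q * x) + eta * np.exp(1j * (Q * x + p))

# e.g.  T, action = close_orbit(dxdt, dpdt, x0=0.0, p0=1.0 + 0.2j)
# and the quantization condition of Eq. (2) reads  action == (n + 1/2) * 2 * np.pi
```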
The complex semiclassical theory also offers straightforward recipes for studying phases and phase transitions in non-Hermitian quantum systems. For example, the NHSE emerges naturally from the complex semiclassical theory with a soft potential \(V(x)\) characterizing boundaries, as illustrated in the supplemental materials [78]. Further, the interplay between non-Hermitian topological physics and NHSE has received much attention recently[81; 67]. From a complex-semiclassical-theory perspective, these two scenarios are separated by a Lifshitz transition: the former cases possess nontrivial closed loops \(p(T)=p(0)\) accompanied by discrete Landau levels and localized quantum Hall states; in contrast, the latter cases defy a periodic closure, but rather settle with open trajectories \(p(T)=p(0)+\mathbf{G}\), where \(\mathbf{G}=2\pi m\neq 0\) is a reciprocal lattice vector - while the lattice momentum nominally reset to initialization, the
Figure 2: For each complex energy \(\epsilon\), the resulting (left) real and (right) imaginary parts of \(\oint p\cdot dx\), over its closed orbits, if any, determine the satisfiability of the quantization condition. The translucent planes denote \((2n+1)\pi\), \(n\in\mathbb{Z}\) for the real part and \(0\) for the imaginary part, which pinpoints the physical \(\epsilon_{n}\) in excellent consistency with the quantum spectrum (red dots). The vertical values of the red dots are set for straight comparisons. \(V=1.0\), \(\eta=0.1i\), and \(Q=0.02\).
wave packet keeps traversing across the system in the real space.
Navigating over the complex energy \(\epsilon\), we summarize in Fig. 3 the phase diagram of the non-Hermitian lattice model in Eq. 14. For benchmarking, we also examine the quantum eigenstates and spectrum of the model. In particular, we calculate the inverse participation ratio (IPR): \((\sum_{x}|\psi(x)|^{4})^{-1}\)[82] for each eigenstate and characterize its localization properties with the ratio of IPRs as we enlarge the system. The results' consistency is satisfactory: the quantum eigenstate retains a robust IPR, thus its localization [83], in and only in the quantum Hall region determined by the complex semiclassical theory; more implicitly, sharp kinks appear in the complex spectrum near the predicted boundary. Interestingly, we can further generalize the models to complex-valued magnetic field \(Q\), whose physics is also nicely captured by the complex semiclassical theory (Fig. 3 and Ref. [78]).
_Discussions --_We have established a systematic complex semiclassical theory with an analytical continuation of physical variables applicable to non-Hermitian quantum systems. Previous work has also studied EOMs with complex-valued position and momentum [67], yet is restricted to real-valued time and energy levels. We have also provided a physical interpretation for such complex variables in terms of non-unitary evolution, and a strategy for closed orbits and approximate states, and demonstrated applicability and effectiveness on continuous and lattice models. The complex semiclassical theory may also apply beyond the framework of non-Hermitian Hamiltonians. For example, it is straightforward to incorporate into the EOMs friction, such as \(\mathbf{F}_{\mu}=-\mu\mathbf{p}\), which forgoes closed orbits conventionally in real time; the complex semiclassical theory circumvents such difficulties; see examples in Ref. [78]. Therefore, it may potentially provide alternative approaches and insightful pictures toward dissipative or driven quantum systems.
The generalization to multi-band scenario is also a meaningful direction. On the one hand, it introduces Berry phases, which may possess complex values and exotic properties, into the problems. On the other hand, it allows more forms of complex energy relations so that the EOMs may involve multi-valued functions and nontrivial complex geometries such as branch cuts. While we have only considered models with relatively simple complex spaces and closed orbits' winding around certain fixed points, the possibilities of emergent physics in a more general complex semiclassical theory seem fascinating.
_Acknowledgments --_We acknowledge helpful discussions with Ryuichi Shindou, Junren Shi, and Haoshu Li. We also acknowledge support from the National Key R&D Program of China (No.2022YFA1403700) and the National Natural Science Foundation of China (No.12174008 & No.92270102).
## Appendix A More continuous model examples
### Closed orbits of non-Hermitian harmonic oscillators from a quantum perspective
In this subsection, we study the quantum evolution of the wave packets under the Hamiltonian in Eq. 3 in the main text, which offers an alternative, quantum perspective of the orbits in Eq. 5 in the main text. As we have shown in the main text, we can re-express the Hamiltonian:
\[\hat{H} = \omega\left(\hat{b}^{\dagger}\hat{a}+1/2\right),\] \[\hat{a} = \frac{2\alpha\hat{x}+(\eta+i\omega)\hat{p}}{2\sqrt{\alpha\omega}},\] \[\hat{b}^{\dagger} = \frac{2\alpha\hat{x}+(\eta-i\omega)\hat{p}}{2\sqrt{\alpha\omega}}, \tag{17}\]
where \(\omega=\sqrt{4\alpha\beta-\eta^{2}}\) is generally complex. Although \(\hat{b}^{\dagger}\neq\hat{a}^{\dagger}\), we still have the commutation relation \([\hat{a},\hat{b}^{\dagger}]=1\), which leads to the ladder algebra:
\[\hat{b}^{\dagger}|n\rangle = \sqrt{n+1}|n+1\rangle,\] \[\hat{a}|n\rangle = \sqrt{n}|n-1\rangle,\] \[\hat{b}^{\dagger}\hat{a}|n\rangle = n|n\rangle,\]
where \(\hat{a}|0\rangle=0\). Therefore, \(|n\rangle\) and \(\epsilon_{n}=(n+1/2)\omega\) are the eigenstates and energy eigenvalues of \(\hat{H}\), respectively, \(n\in\mathbb{Z}\).
Figure 3: The complex semiclassical theory characterizes the phases (bright blue and pink regions) and phase boundary (black curve) for the non-Hermitian lattice model in Eq. 14 based on the existence or absence of nontrivial closed orbits. The dots and their color scales denote the quantum energy spectrum \(\epsilon_{n}\) and each corresponding eigenstate’s ratio of IPRs upon doubling the system size. The insets illustrate the schematic trajectories of the two phases. \(V=0.3\), \(\eta=0.1i\).
Then, we can define a wave-packet state \(|z_{0}\rangle=e^{-|z_{0}|^{2}/2}e^{z_{0}\hat{b}^{\dagger}}|0\rangle\), \(z_{0}\in\mathbb{C}\), whose time evolution under \(\hat{H}\) is straightforward to obtain: \(e^{-i\omega t/2}|z_{t}\rangle\), where \(z_{t}=e^{-i\omega t}z_{0}\). Consequently, the wave packet's center of mass (COM) shows the following time dependence:
\[\langle\hat{x}\rangle(t) = \langle 0|e^{-z_{t}\hat{b}^{\dagger}}e^{|z_{t}|^{2}/2}\hat{x}e^{-|z_{t}|^{2}/2}e^{z_{t}\hat{b}^{\dagger}}|0\rangle \tag{18}\] \[= \langle 0|\hat{x}-z_{t}[\hat{b}^{\dagger},\hat{x}]|0\rangle\] \[\approx \frac{(\omega+i\eta)z_{t}}{2\sqrt{\alpha\omega}}=z_{2}e^{-i\omega t},\] \[\langle\hat{p}\rangle(t) = \langle 0|\hat{p}-z_{t}[\hat{b}^{\dagger},\hat{p}]|0\rangle\] \[\approx \frac{-2i\alpha z_{t}}{2\sqrt{\alpha\omega}}=-\frac{(i\omega+\eta)z_{2}}{2\beta}e^{-i\omega t},\]
where \(z_{2}=(\omega+i\eta)z_{0}/2\sqrt{\alpha\omega}\). We have neglected \(\langle 0|\hat{x}|0\rangle\) and \(\langle 0|\hat{p}|0\rangle\) as the scale and shape of the wave packet are much less significant than those of the orbit in a semiclassical approximation. These results are fully consistent with the \(z_{2}\) terms in Eq. 5 in the main text obtained via semiclassical equations of motion (EOMs). Note that we have employed inverse, instead of Hermitian conjugate, in the expectation values \(\langle\hat{x}\rangle(t)\) and \(\langle\hat{p}\rangle(t)\), which can now take on complex values.
To obtain the \(z_{1}\) terms, on the other hand, we may start with a different basis of ladder states: \(\hat{b}^{\dagger}|0\rangle^{\prime}=0\), \(\hat{a}|n\rangle^{\prime}=\sqrt{n+1}|n+1\rangle^{\prime}\). The corresponding wave-packet state evolves with an \(e^{i\omega t}\) factor independently from the \(z_{2}\) terms. As we show in the main text and later in the next subsection, the parameterizations are redundant with both \(z_{1}\) and \(z_{2}\), and orbits that differ by a conformal mapping \(z_{1}e^{i\omega t}\to z_{1}^{\prime}e^{i\omega t}\), \(z_{2}e^{-i\omega t}\to z_{2}^{\prime}e^{-i\omega t}\) are equivalent to each other.
### Redundancy and consistency of complex variables
In Eqs. 8 and 9 of the main text, we have established the connection between the complex variables \(x_{0},p_{0}\in\mathbb{C}\) and the wave packet \(|x_{r},p_{r}\rangle\) with real variables \(x_{r},p_{r}\in\mathbb{R}\) and an additional factor \(A\in\mathbb{C}\). Therefore, there exists redundancy in \(x_{0},p_{0}\) with the same \(x_{r},p_{r}\in\mathbb{R}\). They follow the same equality \(x_{r}+ip_{r}=x_{0}+ip_{0}\) and differ by their \(A\) factors, which can be attributed to quantum states' normalization and phase conventions. In this subsection, we demonstrate that the multiple orbits under different conventions describe the same physics consistently.
For example, we consider the continuous model in Eq. 3 in the main text and map out the trajectories following various initializations, which correspond to identical \(x_{r},p_{r}\in\mathbb{R}\). Their subsequent evolution is universal and consistent with each other, differing merely in conventions - the difference in their \(A\) factors carries through the entire orbits as constant ratios; see Fig. 4. The results and conclusions generalize to other models and conventions straightforwardly. Thus, the redundancy over complex variables does not lead to self-contradiction.
### Model examples with higher-order terms
In this subsection, we examine another continuous model - a non-Hermitian quantum model with higher-order \(\hat{x}\):
\[\hat{H}=\beta\hat{x}^{4}+\gamma\hat{p}^{2}+\alpha\hat{a}^{2}, \tag{19}\]
where \(\alpha,\beta,\gamma\in\mathbb{C}\). With the ladder operators \(\hat{x}=\frac{1}{\sqrt{2}}\left(\hat{a}+\hat{a}^{\dagger}\right)\) and \(\hat{p}=\frac{i}{\sqrt{2}}\left(\hat{a}^{\dagger}-\hat{a}\right)\), we can re-express the Hamiltonian:
\[\hat{H} = \frac{\beta}{4}[\hat{a}^{4}+\left(\hat{a}^{\dagger}\right)^{4}+4 \hat{a}^{\dagger}\hat{a}^{3}+4(\hat{a}^{\dagger})^{3}\hat{a}+6(\hat{a}^{ \dagger})^{2}+6\hat{a}^{2} \tag{20}\] \[+ 6n^{2}+6n+3]-\frac{\gamma}{2}[(\hat{a}^{\dagger})^{2}+\hat{a}^{2} -2n-1]+\alpha\hat{a}^{2},\]
which we diagonalize in the occupation number basis \(\hat{n}|n\rangle=n|n\rangle\) numerically (with a truncation at sufficiently large \(n\)) for the quantum benchmarks.
Semiclassically, we begin with the classical energy relation \(\epsilon=\beta x^{4}+\gamma p^{2}+\alpha(x+ip)^{2}/2\) and the resulting EOMs:
\[\dot{x} = i\alpha x+\left(2\gamma-\alpha\right)p,\] \[\dot{p} = -4\beta x^{3}-\alpha x-i\alpha p, \tag{21}\]
which we employ to search for the closed orbits. In the process, the strategy of \(dt\) phase adjustment toward the initialization mentioned in the main text is especially helpful; see illustrations in Fig. 5. In particular, the complex energies with qualified quantization conditions are entirely consistent with the quantum benchmarks; see examples in Table 2.
More generally, the complex semiclassical theory applies to non-Hermitian systems with both potential and kinetic terms beyond quadratic order. For instance, we consider the following continuous model:
\[\hat{H}=\beta\hat{x}^{4}+\gamma\hat{p}^{4}+\alpha\hat{a}^{2}, \tag{22}\]
Figure 4: We initialize two semiclassical orbits with \(x=2,p=1\) and \(x^{\prime}=2.3+0.3i,p^{\prime}=0.7+0.3i\), which correspond to the same \(x_{r},p_{r}\in\mathbb{R}\), and (left) their wave packets’ COMs evolve consistently, while (right) the ratio between their \(A\) factors carries over as a constant, therefore constitute merely conventions.
where \(\alpha,\beta,\gamma\in\mathbb{C}\). In terms of the ladder operators, the Hamiltonian becomes:
\[\hat{H}=\frac{\beta}{4}\left[\hat{a}^{4}+(\hat{a}^{\dagger})^{4}+4\hat{a}^{\dagger}\hat{a}^{3}+4(\hat{a}^{\dagger})^{3}\hat{a}+6(\hat{a}^{\dagger})^{2}+6\hat{a}^{2}+6n^{2}+6n+3\right]+\frac{\gamma}{4}\left[\hat{a}^{4}+(\hat{a}^{\dagger})^{4}-4\hat{a}^{\dagger}\hat{a}^{3}-4(\hat{a}^{\dagger})^{3}\hat{a}-6(\hat{a}^{\dagger})^{2}-6\hat{a}^{2}+6n^{2}+6n+3\right]+\alpha\hat{a}^{2}, \tag{23}\]
which we also diagonalize numerically for the quantum benchmarks.
Semiclassically, the EOMs following the classical energy relation \(\epsilon=\beta x^{4}+\gamma p^{4}+\alpha(x+ip)^{2}/2\) are:
\[\dot{x} = 4\gamma p^{3}-\alpha p+i\alpha x,\] \[\dot{p} = -4\beta x^{3}-\alpha x-i\alpha p, \tag{24}\]
which we use to establish closed orbits (Fig. 6). We also evaluate the integral \(\oint pdx\) of these closed orbits, which determines whether they qualify under the quantization condition. We demonstrate several examples in Table 3. Interestingly, though relatively small, there are noticeable departures of the qualified orbits' \(\oint pdx\) from the exact half-integer multiples of \(2\pi\), especially at small \(n\): \(9.518\) versus \(9.425\) for \(n=1\), \(15.774\) versus \(15.708\) for \(n=2\). Such differences contrast sharply with the previous continuous-model examples, where we have at least four or five digits of accuracy. Physically, the model in Eq. 22 deviates drastically from quadratic forms; thus, the semiclassical approximation suffers at small \(n\), where the wave packets' scale and shape become relatively influential. Indeed, such deviations vanish asymptotically at larger \(n\), e.g., \(n\geq 5\).
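Once the loop integral of a candidate closed orbit is available (e.g., from the integrator sketched above), the quantization check behind Tables 2 and 3 reduces to comparing \(\text{Re}[\oint p\cdot dx]\) with the nearest \((2n+1)\pi\); a minimal helper, evaluated on the two values quoted above, is sketched below.

```python
import numpy as np

def nearest_level(loop_integral):
    """Nearest candidate level n and the deviation of Re[loop_integral] from (2n+1)*pi."""
    n = int(round((loop_integral.real / np.pi - 1) / 2))
    deviation = loop_integral.real - (2 * n + 1) * np.pi
    return n, deviation, loop_integral.imag

print(nearest_level(9.518 + 0j))    # n = 1, deviation ~ 0.093
print(nearest_level(15.774 + 0j))   # n = 2, deviation ~ 0.066
```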
These results show that our complex semiclassical theory generally applies to continuous models of non-Hermitian quantum systems.
## Appendix B: Lattice model examples
### Approximate quantum states
In this subsection, we establish approximate quantum states of non-Hermitian quantum systems from the complex semiclassical theory. For example, we consider the
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(n\) & 1 & 2 & 5 & 6.39+0.89i \\ \hline \(Re[\epsilon_{n}]\) & 3.622 & 7.037 & 19.861 & 10 \\ \hline \(Im[\epsilon_{n}]\) & -0.558 & -1.240 & -3.936 & 0 \\ \hline \(Re[\oint p\cdot dx]\) & 9.518 & 15.774 & 34.588 & 20.075 \\ \hline \(Im[\oint p\cdot dx]\) & 0.005 & -0.001 & 0.001 & 2.781 \\ \hline Quantum & Yes & Yes & Yes & No \\ \hline \end{tabular}
\end{table}
Table 2: The real and imaginary parts of the energies and geometric phases of closed orbits following the complex semiclassical EOMs are consistent with the quantization condition \(\oint p\cdot dx=(2n+1)\pi\) for quantized, complex energy eigenvalues for the model in Eq. 19. The parameters are \(\alpha=0.3+0.5i\), \(\beta=1.0+0.1i\), and \(\gamma=1.0-0.1i\).
Figure 5: Following the EOMs, the semiclassical trajectories of the model in Eq. 19 (a) form closed loops along the \(t/T\in\mathbb{R}\) direction and (b) end up in open spirals along the \(t\in\mathbb{R}\) direction, where we may still form closed loops by adjusting \(dt\), and the resulting trajectory is in (c). (d) The corresponding trajectories in the complex \(t\) plane show the equivalence between different closed orbits reaching the complex period \(T\). \(\alpha=0.3+0.5i\), \(\beta=1.0+0.1i\), \(\gamma=1.0-0.1i\), and \(\oint p\cdot dx=5\pi\) indicating the complex energy \(\epsilon=7.04+1.24i\) qualifies for a quantum level.
Figure 6: Following the EOMs, the semiclassical trajectories of the quartic model in Eq. 22 (a) form closed loops along the \(t/T\in\mathbb{R}\) direction and (b) end up in open spirals along the \(t\in\mathbb{R}\) direction, where we may still form closed loops by adjusting \(dt\), and the resulting trajectory is in (c). (d) The corresponding trajectories in the complex \(t\) plane show the equivalence between different closed orbits reaching the complex period \(T\). \(\alpha=0.5+0.3i\), \(\beta=1.0+0.1i\), and \(\gamma=1.0+0.1i\). The trajectories’ energy is \(\epsilon=18.65+1.87i\) with a geometric phase \(\oint p\cdot dx=16.0101+0.0002i\approx 5\pi\).
lattice model in Eq. 15 in the main text:
\[\hat{H}=\sum_{x}(\eta e^{iQx}-1)c_{x}^{\dagger}c_{x+1}-c_{x+1}^{ \dagger}c_{x}-2V\cos(Qx)c_{x}^{\dagger}c_{x}, \tag{25}\]
which we diagonalize for its quantum energy eigenvalues and eigenstates. We set \(Q=0.005\), \(V=0.3\), and without loss of generality, we choose its \(n=3\) eigenstate as a benchmark for comparison with the complex semiclassical theory. This eigenstate is sufficiently localized that the non-Hermitian error accumulation does not pose a severe numerical issue.
Semiclassically, we begin with a closed orbit with the target quantization condition \(\oint p\cdot dx=7\pi\). The orbit shapes in the complex \(x\) and \(p\) spaces are in the inset of Fig. 7. Its energy \(\epsilon_{3}=-2.5905+0.0959i\) is consistent with the corresponding quantum energy eigenvalue.
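A minimal sketch of the quantum benchmark for Eq. 25 is given below: it assembles the single-particle matrix on an open chain and picks out the eigenvalue closest to the semiclassical estimate quoted above. \(Q\) and \(V\) follow the text, while the hopping amplitude \(\eta\) and the chain length are illustrative placeholders (the main text fixes \(\eta\)), so the comparison is only schematic.

```python
import numpy as np

# Minimal sketch of the quantum benchmark for Eq. (25): single-particle matrix on an open chain,
# diagonalized with numpy's general eigensolver. Q and V follow the text; eta and the chain
# length are illustrative placeholders, so the check below is only schematic.
L = 2000                       # assumed chain length, roughly one magnetic period 2*pi/Q
Q, V, eta = 0.005, 0.3, 0.1    # eta = 0.1 is a placeholder, not a value quoted in this appendix

xs = np.arange(L)
H = np.zeros((L, L), dtype=complex)
H[xs[:-1], xs[:-1] + 1] = eta * np.exp(1j * Q * xs[:-1]) - 1.0   # (eta e^{iQx} - 1) c_x^dag c_{x+1}
H[xs[:-1] + 1, xs[:-1]] = -1.0                                   # -c_{x+1}^dag c_x
H[xs, xs] = -2.0 * V * np.cos(Q * xs)                            # -2V cos(Qx) c_x^dag c_x

evals, evecs = np.linalg.eig(H)                 # right eigenvectors of the non-Hermitian matrix
target = -2.5905 + 0.0959j                      # semiclassical estimate quoted in the text
print(evals[np.argmin(np.abs(evals - target))]) # closest quantum eigenvalue (schematic check)
```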
Further, we can extract an approximation of the quantum state as:
\[\psi_{3}(x)\propto\int_{0}^{T}dt\cdot\exp(6\pi it/T)\cdot A_{r} \cdot\phi_{x_{t}+ip_{t}}(x), \tag{26}\]
where \(A_{r}=\exp\left[(|\kappa x_{t}+ip_{t}|^{2}/\kappa-\kappa x_{t}^{2}-p_{t}^{2}/ \kappa)/4\right]\), \(\kappa=Q\sqrt{V}\), and \(\phi_{x_{t}+ip_{t}}(x)\) is a wave packet with momentum \(Im(\kappa x_{t}+ip_{t})\) and position \(Re(x_{t}+ip_{t}/\kappa)\), obtainable via translating \(\phi_{0}(x)\) - a wave packet with zero momentum at the origin. Generally speaking, the shape of \(\phi_{0}(x)\) is unknown, and we infer \(\phi_{0}(x)\) and its anisotropy \(\kappa\) from the low-energy expansion of Eq. 25 as an approximation. Unlike the exactly solvable Eq. 13 in the main text, here we evaluate the integral numerically for a real-space wave function \(\psi_{3}(x)\), which agrees qualitatively with the quantum benchmark (Fig. 7). Given the corresponding closed orbit in the complex semiclassical theory, this approach generalizes straightforwardly to other target quantum eigenstates.
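A sketch of this reconstruction is given below; it assumes the sampled orbit \((x_{t},p_{t})\) and the complex period \(T\) are already available from the semiclassical integration, and it approximates \(\phi_{0}(x)\) by a Gaussian of anisotropy \(\kappa\), which is itself only the low-energy approximation mentioned above.

```python
import numpy as np

# Sketch of the reconstruction in Eq. (26). It assumes the closed orbit has been sampled as
# arrays (x_orbit, p_orbit) at uniformly spaced (complex) times ts, with complex period T;
# phi_0 is approximated by a Gaussian of anisotropy kappa (a low-energy approximation).
def reconstruct_state(xgrid, x_orbit, p_orbit, ts, T, Q=0.005, V=0.3, n=3):
    kappa = Q * np.sqrt(V)
    dt = ts[1] - ts[0]                                 # uniform (complex) time step assumed
    psi = np.zeros(len(xgrid), dtype=complex)
    for t, xt, pt in zip(ts, x_orbit, p_orbit):
        A_r = np.exp((np.abs(kappa * xt + 1j * pt) ** 2 / kappa
                      - kappa * xt ** 2 - pt ** 2 / kappa) / 4)
        pos = np.real(xt + 1j * pt / kappa)            # packet position
        mom = np.imag(kappa * xt + 1j * pt)            # packet momentum
        packet = np.exp(-0.5 * kappa * (xgrid - pos) ** 2 + 1j * mom * (xgrid - pos))  # assumed Gaussian phi_0
        psi += np.exp(2j * n * np.pi * t / T) * A_r * packet * dt
    return psi                                         # compare |psi| with the exact eigenstate
```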
### Non-Hermitian skin effect and generalized WKB approximation
In this subsection, we apply the complex semiclassical theory to another non-Hermitian lattice model - the well-known Hatano-Nelson model:
\[\hat{H} = \sum_{x}(t+\gamma)c_{x+1}^{\dagger}c_{x}+\sum_{x}(t-\gamma)c_{x} ^{\dagger}c_{x+1} \tag{27}\] \[= \sum_{k}\left[2t\cos(p)-2i\gamma\sin(p)\right]c_{k}^{\dagger}c_{ k},\]
where we set \(t=1\) and \(\gamma=\pm 0.1\) without loss of generality. In particular, as a soft boundary condition, we introduce the following confining potential:
\[V(x)=U\left[e^{\alpha(x-L/2)}+e^{-\alpha(x+L/2)}\right], \tag{28}\]
whose profile with \(L=60\), \(U=0.2\), and \(\alpha=0.1\) is in Fig. 8a. Note that instead of the common open boundary conditions, which are hard walls, we employ smoother boundaries, which are better suited to a semiclassical treatment.
We start with the following energy relation and EOMs in the complex semiclassical theory:
\[\epsilon = 2t\cos(p)-2i\gamma\sin(p)+V(x),\] \[\dot{x} = \partial\epsilon/\partial p=-2t\sin(p)-2i\gamma\cos(p),\] \[\dot{p} = -\partial\epsilon/\partial x=\alpha U\left[e^{-\alpha(x+L/2)}-e^{\alpha(x-L/2)}\right]. \tag{29}\]
Interestingly, we can keep \(x,t\in\mathbb{R}\) with an initialization
Figure 7: The complex semiclassical theory’s approximate state \(\psi_{3}(x)\) compares qualitatively well with the corresponding exact quantum eigenstate. We consider the \(n=3\) eigenstate of the non-Hermitian model in Eq. 25 with \(Q=0.005\) and \(V=0.3\). For simplicity, we only show the amplitude of the real-space wave function. The insets show the orbit shapes in the complex \(x\) and \(p\) spaces. The corresponding energy eigenvalue is \(\epsilon_{3}=-2.5905+0.0959i\), consistent between the exact diagonalization and the complex semiclassical theory.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(n\) & 1 & 5 & 10 & 3.72-0.19i \\ \hline \(Re[\epsilon_{n}]\) & 7.137 & 87.540 & 317.226 & 10 \\ \hline \(Im[\epsilon_{n}]\) & 0.721 & 8.760 & 31.729 & 0 \\ \hline \(Re[\oint p\cdot dx]\) & 9.903 & 34.694 & 66.047 & 11.680 \\ \hline \(Im[\oint p\cdot dx]\) & 0.001 & 1e-04 & 3e-05 & -0.586 \\ \hline Quantum & Yes & Yes & Yes & No \\ \hline \end{tabular}
\end{table}
Table 3: The real and imaginary parts of the energies and geometric phases of closed orbits following the complex semiclassical EOMs are nearly consistent with the quantization condition \(\oint p\cdot dx=(2n+1)\pi\) for quantized, complex energy eigenvalues for the model in Eq. 22, especially for relatively larger values of \(n\). The parameters are \(\alpha=0.5+0.3i\), \(\beta=1.0+0.1i\), and \(\gamma=1.0+0.1i\).
that satisfies:
\[\text{Im}(x)=0,\text{Im}(p)=\frac{1}{2}\ln\frac{t-\gamma}{t+\gamma}. \tag{30}\]
Then, following the EOMs in Eq. 29, \(\dot{p}\) is purely real, which keeps \(\text{Im}(p)\) constant for \(t\in\mathbb{R}\); in turn, \(\dot{x}\) remains purely real, so Eq. 30 continues to hold, and so on. We can thus straightforwardly obtain a closed orbit; see Fig. 8b for an example.
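A minimal sketch of this construction is given below: it integrates Eq. 29 with the initialization of Eq. 30 using simple Euler steps. The starting \(\text{Re}(p)\) is an arbitrary illustrative choice; one can verify that \(\text{Im}(x)\) stays at zero while \(\text{Im}(p)\) stays pinned at \(-0.1003\) for \(\gamma=0.1\).

```python
import numpy as np

# Sketch: Euler integration of Eq. (29) with the initialization of Eq. (30). Im(x) then stays
# at zero and Im(p) stays pinned at -0.1003 for gamma = +0.1; Re(p) = 0.5 is an arbitrary example.
t_hop, gamma, U, alpha, L = 1.0, 0.1, 0.2, 0.1, 60

def eom(x, p):
    xdot = -2 * t_hop * np.sin(p) - 2j * gamma * np.cos(p)
    pdot = alpha * U * (np.exp(-alpha * (x + L / 2)) - np.exp(alpha * (x - L / 2)))
    return xdot, pdot

x = 0.0 + 0.0j
p = 0.5 + 0.5j * np.log((t_hop - gamma) / (t_hop + gamma))   # Eq. (30) initialization
dt = 1e-3
for _ in range(100000):
    xd, pd = eom(x, p)
    x, p = x + dt * xd, p + dt * pd

print(np.imag(x), np.imag(p))   # ~0 and ~-0.1003, as in Fig. 8b
```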
The complex semiclassical theory is not restricted to such real-valued \(x\); however, such a closed orbit is equally valid, analytically convenient, and, at the same time, intimately connected with the WKB approximation. Conventionally, while the position \(x\) remains real in the WKB approximation, the momentum \(p\) retains a complex description: it becomes purely real or purely imaginary depending on whether \(E>V(x)\) or \(E<V(x)\), respectively. Semiclassical orbits such as the one in Fig. 8b are just one step more general, as they involve a fully complex \(p\) instead of a merely real or imaginary one. The quantization condition is also consistent with that of the WKB approximation.
Notably, the non-Hermitian skin effect (NHSE) emerges naturally from the complex semiclassical theory and the generalized WKB theory. Although the closed orbit ventures across the entire length of the system (Fig. 8b), the contribution of the wave packets to the target quantum state receives an additional factor \(\exp[-\int\text{Im}(p)dx]\) from the complex-valued geometric phase - an exponential amplification (attenuation) of their weight to the right (left) - the NHSE (Fig. 8c). The tendency reverses with a sign change of \(\gamma\). Indeed, \(\text{Im}(p)\) given by Eq. 30 contributes a universal factor:
\[\exp(-0.5x\ln\frac{t-\gamma}{t+\gamma})=(\frac{t+\gamma}{t-\gamma})^{x/2}, \tag{31}\]
at position \(x\), which is consistent with the similarity-transformation embodiment of the NHSE [46]. In the complex semiclassical theory, the factor \(A\) in Eq. 8 of the main text contributes a similar amplitude factor.
## Appendix C: Expectation-value conventions in non-Hermitian quantum systems
We can diagonalize a non-Hermitian Hamiltonian as \(\hat{H}=\sum_{n}\epsilon_{n}\ket{nR}\bra{nL}\), where \(\ket{nR}\) (\(\bra{nL}\)) is the right (left) eigenstate with respect to the energy eigenvalue \(\epsilon_{n}\). Correspondingly, the time evolution operator is:
\[U\left(t\right)=\exp\left(-i\hat{H}t\right)=\sum_{n}\exp\left(-i\epsilon_{n}t \right)\ket{nR}\bra{nL}. \tag{32}\]
Alternatively, we can write the diagonalized \(\hat{H}\) in a more compact matrix form as \(\Lambda=V^{-1}\hat{H}V\), where the \(n^{th}\) column (row) of \(V\) (\(V^{-1}\)) is exactly \(\ket{nR}\) (\(\bra{nL}\)). \(\epsilon_{n}\), on the diagonal of \(\Lambda\), is the biorthogonal expectation value of \(\hat{H}\): \(\epsilon_{n}=\bra{nL}\hat{H}\ket{nR}\). Consistently, it is natural to deduce that the expectation value of an operator \(\hat{O}\) is the biorthogonal counterpart, \(\langle\hat{O}\rangle_{n}=\bra{nL}\hat{O}\ket{nR}\), or more generally, \(\langle\hat{O}\rangle_{nm}=\bra{nL}\hat{O}\ket{mR}\).
Given an initial quantum state \(\ket{\Phi(0)}\), we can decompose it in the biorthogonal basis \(\ket{\Phi(0)}=\sum_{n}\ket{nR}\bra{nL}\Phi(0)\). It evolves in time as:
\[\ket{\Phi\left(t\right)} =U\left(t\right)\ket{\Phi(0)}\] \[=\sum_{nm}\exp\left(-iE_{n}t\right)\ket{nR}\bra{nL}mR\rangle\bra{mL}\Phi(0)\rangle\] \[=\sum_{n}\exp\left(-iE_{n}t\right)\ket{nR}\bra{nL}\Phi(0)\rangle, \tag{33}\]
where we have used the property \(\bra{nL}\ket{mR}=\delta_{nm}\) of the biorthogonal basis. Similarly, we have its bra \(\bra{\Phi(0)}\)'s
Figure 8: (a) The profile of the confining potential in Eq. 28 introduces two soft boundaries. The red dashed lines show the setting for \(L=60\), beyond which the potential increases more rapidly. (b) The closed orbit following EOMs in Eq. 29 retains a real-valued \(x\) and a constant \(\text{Im}(p)=\mp 0.1003\) for \(\gamma=\pm 0.1\), respectively, consistent with Eq. 30. The arrows mark the direction of the cyclotron motion. (c) The additional factor and amplitude modification to the wave packet due to the complex momentum offers an alternative picture of the NHSE. Note the log scale along the vertical direction.
time evolution as:
\[\left\langle\Phi(t)\right| =\left\langle\Phi(0)\right|U^{-1}(t)\] \[=\sum_{n}\exp\left(iE_{n}t\right)\left\langle\Phi(0)|nR\right\rangle \left\langle nL\right|, \tag{34}\]
whereas the Hermitian conjugate of \(|\Phi(t)\rangle\) is different:
\[\left\langle\Phi(t)\right|^{\prime}=\sum_{n}\exp\left(iE_{n}^{*}t\right) \left\langle\Phi(0)|nL\right\rangle\left\langle nR\right|. \tag{35}\]
Thus, the amplitude of \(\left\langle\Phi(t)|\Phi(t)\right\rangle\) is conserved, while that of \(\left\langle\Phi(t)\right|^{\prime}\left|\Phi(t)\right\rangle\) grows or decays exponentially with respect to \(t\):
\[\left\langle\Phi(t)|\Phi(t)\right\rangle =\left\langle\Phi(0)|U^{-1}(t)U(t)|\Phi(0)\right\rangle=1,\] \[\left\langle\Phi(t)|^{\prime}|\Phi(t)\right\rangle =\left\langle\Phi(0)|U^{\dagger}(t)U(t)|\Phi(0)\right\rangle\] \[=\sum_{mn}\exp\left[i(E_{m}^{*}-E_{n})t\right]\left\langle\Phi(0 )|mL\right\rangle\] \[\qquad\times\left\langle mR|nR\right\rangle\left\langle nL|\Phi( 0)\right\rangle. \tag{36}\]
Therefore, the former convention is more convenient in non-Hermitian quantum systems.
Putting both the expectation-value and time-evolution conventions together, we define and evaluate the time evolution of the expectation value of observable \(\hat{O}\) as:
\[\left\langle\hat{O}(t)\right\rangle=\left\langle\Phi(t)|\hat{O}|\Phi(t) \right\rangle=\left\langle\Phi(0)|U^{-1}\hat{O}U(t)|\Phi(0)\right\rangle, \tag{37}\]
which converts back to the common definition in Hermitian systems:
\[\left\langle\hat{O}(t)\right\rangle =\left\langle\Phi(0)|U^{-1}\hat{O}U(t)|\Phi(0)\right\rangle\] \[=\left\langle\Phi(0)|U^{\dagger}\hat{O}U(t)|\Phi(0)\right\rangle =\left\langle\Phi(t)|\hat{O}|\Phi(t)\right\rangle. \tag{38}\]
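The sketch below illustrates these conventions on a random non-Hermitian matrix: with the right and left eigenstates taken from the columns of \(V\) and the rows of \(V^{-1}\), the biorthogonal norm built with \(U^{-1}(t)\) stays exactly one, while the Hermitian-conjugate norm drifts; the matrix and state used are arbitrary examples.

```python
import numpy as np

# Sketch: biorthogonal conventions on a random non-Hermitian matrix. The columns of V are the
# right eigenstates |nR> and the rows of V^{-1} are the left eigenstates <nL|. With the U^{-1}(t)
# convention the norm <Phi(t)|Phi(t)> stays exactly one, while the Hermitian-conjugate norm drifts.
rng = np.random.default_rng(0)
N = 6
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

evals, V = np.linalg.eig(H)
Vinv = np.linalg.inv(V)

phi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
phi0 /= np.linalg.norm(phi0)

for t in (0.0, 1.0, 2.0):
    Ut = V @ np.diag(np.exp(-1j * evals * t)) @ Vinv       # U(t)
    Ut_inv = V @ np.diag(np.exp(1j * evals * t)) @ Vinv    # U^{-1}(t)
    phit = Ut @ phi0
    biortho = phi0.conj() @ Ut_inv @ phit                  # <Phi(t)|Phi(t)>, Eq. (34) convention
    hermitian = phit.conj() @ phit                         # <Phi(t)|'|Phi(t)>, Eq. (35) convention
    print(t, biortho.real, hermitian.real)
```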
Whether the expectation value in non-Hermitian quantum systems should be defined and calculated within the biorthogonal basis or with the right eigenstates alone remains a subtle matter. When we speak of the NHSE, the focus is solely on (the amplitude of) the right eigenstates: \(\left\langle\psi(x)\right|^{\prime}\left|\psi(x)\right\rangle\); in contrast, the NHSE disappears when we employ the biorthogonal basis: \(\left\langle\psi(x)|\psi(x)\right\rangle\). Even when it comes to many-body non-Hermitian quantum systems, these two views are both present in the literature: the spatial density of an \(n\)-fermion state \(|\psi_{\mu}(x_{1},\ldots,x_{n})\rangle\) is calculated via \(\left\langle\psi_{\mu}\right|^{\prime}\hat{n}_{x}\left|\psi_{\mu}\right\rangle\) with the right eigenstates in Ref. [84], while the non-Hermitian many-body bulk polarization takes the biorthogonal form \(P^{LR}\)[85].
## Appendix D: Physical picture of complex magnetic field \(Q\)
The complex semiclassical theory allows us to locate interesting quantum physics and phenomena from a novel perspective.
For instance, we can re-express the EOMs in Eq. 16 in the main text with three complex variables \(p\), \(Qx\), and \(Qt\):
\[\partial Qx/\partial Qt =2\sin p+i\eta e^{i(Qx+p)},\] \[\partial p/\partial Qt =-[2V\sin Qx+i\eta e^{i(Qx+p)}]. \tag{39}\]
As in the main text, we determine the resulting phases with distinctive properties via the presence or absence of closed orbits. Eq. 39 indicates that such physics does not depend on the exact value \(Q\) of the magnetic field or even on whether \(Q\) is real or complex. Indeed, it is natural to analytically continue \(Q\) to the complex domain, especially considering that \(p\), \(x\), and \(t\) are already complex variables.
Inserting complex \(Q\) into the lattice model in Eq. 15 in the main text, we obtain the energy spectra and eigenstates via exact diagonalization. Further, we determine the localization properties and phases via the inverse participation ratios (IPRs), which remain consistent with the complex semiclassical theory, as shown in Fig. 3 in the main text as well as Fig. 9.
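For reference, the IPR diagnostic used here is sketched below for the lattice model of Eq. 25 with \(Q\) continued to a complex value; the hopping amplitude \(\eta\), the chain length, and the particular complex \(Q\) are illustrative assumptions rather than the values used for Fig. 9.

```python
import numpy as np

# Sketch: IPR diagnostic for the lattice model of Eq. (25) with Q analytically continued to a
# complex value; eta, the chain length, and the particular complex Q are illustrative assumptions.
def ipr_spectrum(Q, L=2000, V=0.3, eta=0.1):
    xs = np.arange(L)
    H = np.zeros((L, L), dtype=complex)
    H[xs[:-1], xs[:-1] + 1] = eta * np.exp(1j * Q * xs[:-1]) - 1.0
    H[xs[:-1] + 1, xs[:-1]] = -1.0
    H[xs, xs] = -2.0 * V * np.cos(Q * xs)
    evals, evecs = np.linalg.eig(H)
    weights = np.abs(evecs) ** 2
    ipr = np.sum(weights ** 2, axis=0) / np.sum(weights, axis=0) ** 2
    return evals, ipr               # large IPR: localized; small IPR: extended

evals_c, ipr_c = ipr_spectrum(Q=0.005 + 0.001j)   # an illustrative complex magnetic field
print(ipr_c.max(), ipr_c.min())
```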
The physics of such a complex magnetic field deserves further study. While a regular, real-valued magnetic field introduces a geometric phase for any path encircling its magnetic flux, a complex magnetic field allows amplification and attenuation - a "geometric amplitude" effect. Even by itself, this effect is non-unitary, offering a new source for non-Hermitian quantum physics, potentially realizable in circuit-model simulators with directional dampers and amplifiers [86; 87; 88].
Such a geometric effect also has direct and nontrivial consequences for the quantization conditions; see the arrows in Fig. 9. When we vary a real-valued magnetic field \(Q\), it alters \(\oint p\cdot dx\in\mathbb{R}\), thus the spacings between and locations of the quantum levels along the original curve in the complex \(\epsilon\) space. In comparison, however, a complex \(Q\in\mathbb{C}\) can shift the entire curve \(\oint p\cdot dx\in\mathbb{R}\) elsewhere, thus completely recasting the spectrum. Such impacts are explicit in the current model example, where \(\oint p\cdot dx=Q^{-1}\oint p\cdot d(Qx)\propto Q^{-1}\).
## Appendix E: Complex semiclassical theory beyond Hamiltonians - EOMs with friction
In this appendix, we test the complex semiclassical theory on a damped harmonic oscillator:
\[\dot{x} =p,\] \[\dot{p} =-x-\mu p, \tag{40}\]
where the coefficient \(\mu\) characterizes a friction force \(F_{\mu}=-\mu p\). The quantum theory of such a dissipative system has been under continual investigation and controversy for decades and commonly requires external reservoirs to properly handle the dissipation of energy [89].
The EOMs in Eq. 40 are exactly solvable:
\[x =z_{1}e^{-c_{1}t}+z_{2}e^{-c_{2}t},\] \[p =-z_{1}c_{1}e^{-c_{1}t}-z_{2}c_{2}e^{-c_{2}t}, \tag{41}\]
where \(c_{1/2}=(\mu\pm\sqrt{\mu^{2}-4})/2\). When \(\mu<2\), \(c_{1/2}\) are complex, the oscillatory behavior partially remains, and the system is underdamped; when \(\mu>2\), \(c_{1/2}\) are purely real, and the system becomes overdamped. \(z_{1}\) and \(z_{2}\) are constants determined by the initial conditions.
Assuming \(\mu<2\) and setting \(z_{1}=z_{2}=x_{0}/2\), we obtain the familiar form of a classical mechanical damped oscillator:
\[x(t) =x_{0}e^{-\mu t/2}\cos(\omega t),\] \[p(t) =-x_{0}e^{-\mu t/2}\left[\omega\sin(\omega t)+\mu\cos(\omega t)/2 \right], \tag{42}\]
where \(\omega=\sqrt{1-\mu^{2}/4}\). Due to the damping factor \(e^{-\mu t/2}\), the absence of a closed orbit for real \(t\in\mathbb{R}\) is apparent; see Fig. 10b.
In the complex semiclassical theory, on the other hand, the variables \(z_{1}\), \(z_{2}\), \(x\), \(p\), and, importantly, \(t\) may take on complex values. Consequently, we can establish closed orbits straightforwardly in complex time \(t\in\mathbb{C}\), e.g., \(ic_{1}t\in\mathbb{R}\) in Eq. 41. More generally, we can obtain closed orbits numerically via finite-time steps of the EOMs in Eq. 40, followed by the strategy we propose in the main text; see Fig. 10c. We can also establish closed orbits along \(t\propto T\); see Fig. 10a. As long as the winding is identical, different closed orbits are equivalent, e.g., with the same period \(T\approx 2\pi i/c_{1}\).
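As a minimal numerical illustration, the sketch below evaluates the exact solution of Eq. 41 along the complex-time ray \(t=sT\) with \(T=2\pi i/c_{1}\); setting \(z_{2}=0\) is an illustrative assumption that makes the orbit close exactly.

```python
import numpy as np

# Sketch: the exact solution (Eq. 41) evaluated along the complex-time ray t = s*T with
# T = 2*pi*i/c1. Taking z2 = 0 (an illustrative assumption) makes the orbit close exactly.
mu = 0.5
c1 = (mu + np.sqrt(mu ** 2 - 4 + 0j)) / 2
z1 = 1.0
T = 2j * np.pi / c1

s = np.linspace(0.0, 1.0, 1001)   # parametrize t = s*T along the complex ray
t = s * T
x = z1 * np.exp(-c1 * t)
p = -z1 * c1 * np.exp(-c1 * t)

print(abs(x[-1] - x[0]), abs(p[-1] - p[0]))   # both ~0: the orbit closes in complex time
```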
Unlike the aforementioned dissipation-less models in the main text and the supplemental materials, the EOMs in Eq. 40 face energy losses, and in general, we cannot return both \(x\) and \(p\) to their original complex values. Instead, we require \(a(T)=a(0)\), where \(a=x+ip=x_{r}+ip_{r}\), i.e., the real-valued COM position and momentum \(x_{r},p_{r}\in\mathbb{R}\) of the wave packet return to their initial conditions. Nevertheless, the additional factor \(A\) augmenting the wave packet, as in Eq. 9 in the main text, generally differs. Accordingly, we may need to modify the quantization conditions so that the geometric phase \(\oint p\cdot dx\) compensates for such differences and establishes wave-packet consistency upon cycling around the orbits. Such studies would be fascinating yet are beyond the scope of the current work, and we lack a comparable quantum-theory counterpart as our benchmark.